Introduction 

In October 2025, INDICA TODAY published a long-read titled Towards a Dhārmika Maturity Model by Ramsundar Lakshminarayanan. Drawing from corporate maturity frameworks such as the SEI’s Capability Maturity Model and the DAMA-DMBOK’s Data Management Maturity Model, the author proposed a structured Dhārmika Maturity Model (DMM) in support of the Indic Renaissance.

The model offers a quantitative approach to assessing the fidelity of Indic efforts, yielding a measurable "dhārmika score" for individuals, organisations, content creators, narratives, and - crucially in our AI-driven era - Generative AI tools.

With Generative AI tools proliferating, this model becomes essential. Platforms like ChatGPT, Claude, Gemini, or Grokipedia can either reinforce distortions or accelerate authentic revival. This Dhārmika Maturity Assessment Toolkit operationalises the DMM into a practical resource for the entire dhārmika community - no expertise required.

For decades, dhārmika advocacy has largely been a matter of spontaneous, individual effort. We have seen a surge of passion, a wealth of content, and a renewed pride in dhārmika identity. However, much of this energy is person-dependent rather than process-dependent. We see brilliant individual scholars, successful one-off events, and passionate social media discourse, but these often lack the structural depth to withstand time, leadership changes, or sophisticated external pressure.

The problem is not a lack of intent; it is a lack of maturity. In a corporate or technical context, maturity does not mean old age. It refers to the predictability, repeatability, and optimization of processes. A mature organization doesn’t just succeed by luck or the charisma of a founder; it succeeds because it has built a system that consistently produces high-quality results.

The Dhārmika Maturity Model (DMM) was developed to address this structural gap. It is a multi-dimensional framework designed to quantify "dhārmika sensibilities" in entities such as AI tools and organizations. By shifting from subjective sentiment to standardized metrics, the DMM provides an approach for transitioning from reactive advocacy to Civilizational Institutionalization.

The Pilot Assessment Catalyst

The urgency for this toolkit was recently highlighted by a pilot assessment of Grokipedia, a global GenAI model. This study utilized the DMM to objectively measure the model's factual and cultural integrity. The assessment revealed a critical maturity split: while the model demonstrated high rigor in Eliminating Misconceptions (reaching Level 4), it showed low maturity in Popularizing Culture (Level 2) by diluting sacred context and spiritual intent. This practical use case serves as the primary trigger for this toolkit, demonstrating the need for a standardized method that allows practitioners to evaluate emerging technologies and cultural content with precision.

The Shift from Sentiment to Structure 

Most dhārmika initiatives begin at an initial stage - they are chaotic, driven by the energy of the moment, and often defensive. To move toward a dhārmika renaissance, we must evolve. This means:

  • Moving from opinion to evidence-based research.

  • Moving from survival to institutional sustainability.

  • Moving from translating Western concepts to original dhārmika thought-leadership.

By using this model, you are no longer just doing work for Dharma. You are building a measurable, scalable, and resilient architecture that ensures dhārmika values are not just preserved, but are advanced and optimized for the modern world.

Having established the "Why" through the lens of the recent pilot assessment, we now look at the "What" - the specific structural components that make up the model.

Understanding the Components

Standardized assessment requires an understanding of two core axes: the Dimensions (the specific areas of work) and the Maturity Levels (the evolution of the process).

A. The Five Dimensions (The "What")

These are the functional areas of dhārmika activity. You must evaluate your entity across these pillars:

  • Eliminating Misconceptions

    • The ability to counter historical fallacies with authoritative, verifiable facts.

  • Rediscovering Culture

    • Rigorous research into primary sources and authentic knowledge systems.

  • Restoring Culture

    • The practical implementation of recovered knowledge into modern life and governance.

  • Practicing Culture

    • The measure of how well the entity brings dhārmika principles into ethical practice and daily application.

  • Popularizing Culture

    • The strategic and culturally faithful dissemination of knowledge to a mainstream audience.

B. The Five Maturity Levels (The "How Well")

These levels define the stage of process evolution within any given dimension:

  • Level 1 - Initial

    • Success is unpredictable; the goal is to stop generating false information and fix basic errors.

  • Level 2 - Emerging

    • Correct facts can be repeated consistently, even if the information is simplified.

  • Level 3 - Controlled

    • Operations follow a clear, defined rulebook to guarantee consistent quality and rigor.

  • Level 4 - Managed

    • Performance is quantitatively measured to ensure integrity is predictable and auditable.

  • Level 5 - Optimized

    • The entity actively innovates to share authentic knowledge.
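The two axes above can be captured in a small data model, which is useful later when scoring. A minimal sketch in Python - the constant and enum names are illustrative choices, not part of the DMM specification:

```python
from enum import IntEnum

# The five functional dimensions of dhārmika activity (the "What")
DIMENSIONS = [
    "Eliminating Misconceptions",
    "Rediscovering Culture",
    "Restoring Culture",
    "Practicing Culture",
    "Popularizing Culture",
]

# The five maturity levels (the "How Well"), ordered so they can be compared
class MaturityLevel(IntEnum):
    INITIAL = 1
    EMERGING = 2
    CONTROLLED = 3
    MANAGED = 4
    OPTIMIZED = 5

# A DMM profile is simply one level per assessed dimension,
# e.g. the split reported in the Grokipedia pilot:
pilot_profile = {
    "Eliminating Misconceptions": MaturityLevel.MANAGED,   # Level 4
    "Popularizing Culture": MaturityLevel.EMERGING,        # Level 2
}
```

Using an ordered type such as IntEnum means levels can be compared directly (Level 4 > Level 2), which makes maturity splits easy to detect programmatically.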

The dimensions are elaborated below. 

1. Eliminating Misconceptions (The Intellectual Filter)

This component measures the ability to identify and neutralize colonial biases and distorted narratives.

  • At low maturity: You are simply arguing or reacting to fake news.

  • At high maturity: You have a systematic method for deconstructing outsider frameworks and replacing them with accurate historical and philosophical contexts.

2. Rediscovering Culture (The Research Engine)

This is about the input. It measures how deeply an entity engages with primary sources and authentic traditions.

  • At low maturity: Information is gathered from secondary sources or general internet searches.

  • At high maturity: There is a rigorous, scholarly process for unearthing and documenting original knowledge.

3. Restoring Culture (The Practical Application)

Knowledge is useless if it is not lived. This component measures how well the entity brings recovered dhārmika principles into modern life and governance.

  • At low maturity: Culture is discussed as a glorious past without a plan for modern use.

  • At high maturity: Recovered knowledge is actively integrated and implemented in modern governance, education systems, business ethics, or community frameworks.

4. Practicing Culture (The Ethical Application)

This component measures how well the entity brings dhārmika principles into ethical practice, focusing on internal behavioral fidelity and qualitative communication.

  • At low maturity: dhārmika values like honest attribution are practiced only when convenient or defensively.

  • At high maturity: The entity adheres to an auditable ethical code, consistently preserves sacred terms, and provides clear, actionable guidance for applying philosophical concepts in daily life.

5. Popularizing Culture (The Outreach Engine)

This measures the ability to project an Indic worldview onto the global stage and is the measure of the entity's outreach capacity.

  • At low maturity: You are shouting into an echo chamber.

  • At high maturity: You are producing original Intellectual Property (IP), peer-reviewed research, or high-impact media that shifts the global discourse.

These dimensions form a natural progression - one cannot effectively popularise what has not been restored, rediscovered, or freed from misconceptions.

With the dimensions and levels defined, the next logical step is to determine where to apply them. Not every tool or group serves the same purpose, so we must define our "target."

Application

Defining the Entity

Before applying the toolkit, you must categorize the subject of your assessment to ensure the evaluation is relevant. This is very important, as it will drive the preparation needed to use the assessment framework.

  • Institutional: NGOs, Temple Trusts, or schools where the focus is on organizational longevity and community impact.

  • Intellectual: Cultural organizations, research institutions, books, or media campaigns focused on narrative sovereignty and factual accuracy.

  • Digital/Technological: AI models or software tools where the goal is factual and cultural fidelity in a digital environment.

In the pilot assessment that was conducted, the target entity was ‘Grokipedia’. It may be easier to assess GenAI tools; however, this methodology can also be used to assess the other entity types listed above.

Once the entity is identified, the toolkit moves from theory to action. This is the "How" of the model - the step-by-step process of conducting a formal evaluation.

Step-by-Step Execution

An assessment is a truth-telling exercise. It requires moving past general claims of authenticity to look for verifiable evidence of systemic capability.

This section provides the technical "how-to" for conducting a dhārmika Maturity Assessment. 

Step 1: Defining the Assessment Scope and Priority

Before you begin the assessment, you must clearly define the field of assessment. This determines which dimensions will serve as benchmarks, based on the entity's nature.

Categorizing the Entity

The first action is to select the type of entity you are evaluating. This determines which dimensions will be your benchmarks:

  • Technological Entities (e.g., AI Models, Search Engines):

    • Reasoning: As seen in the pilot study of Grokipedia, the goal is to evaluate factual rigor and the fidelity embedded in the output.

  • Institutional Entities (e.g., NGOs, Schools, Temple Trusts):

    • Reasoning: These entities focus on operational resilience and the practical application of dhārmika knowledge in a modern community setting.

  • Intellectual Entities (e.g., Books, Research Institutions, Content Series):

    • Reasoning: The focus here is on the integrity of the source material (Primary vs. Secondary) and the sovereignty of the resulting narrative.

Dimensional Scenarios

Depending on your selection above, you must decide which of the five dimensions are in scope for the assessment. Use the following scenarios to guide your selection:

  1. Eliminating Misconceptions: Select this if your objective is to test how the entity identifies and corrects historical distortions, such as the misattribution of the Fibonacci series or the positional number system.

  2. Rediscovering Culture: Select this if the assessment needs to verify the depth of research into primary texts like the Sulba Sutras or Sanskrit Prosody.

  3. Restoring Culture: Select this if you are assessing the practical revival of a tradition - for example, evaluating a traditional gurukul to see if they have successfully revived a "lost" pedagogical method. 

  4. Practicing Culture: Select this if you are evaluating the internal ethics and fidelity of the entity - for example, evaluating a publication for its consistent adherence to honest attribution or its preservation of sacred terms like Ganita and Dharma.

  5. Popularizing Culture: Select this if you are assessing the entity's external impact, such as auditing their production of original Intellectual Property (IP), their success in shifting the global discourse, or their mainstream media strategy.

By the end of this stage, you should have a clear scope statement.

Examples

  • Technological Entity (e.g., Grokipedia): An assessment of Grokipedia, focusing on Eliminating Misconceptions and Popularizing Culture to evaluate its factual rigor and cultural fidelity regarding Indic mathematical origins.

  • Institutional Entity (e.g., A Traditional Gurukul): An assessment of the Vedic Heritage Gurukul, focusing on Restoring Culture and Practicing Culture to evaluate its success in reconstructing ancient pedagogical traditions and their daily application by students.

  • Intellectual Entity (e.g., A Research Project): An assessment of the History Project, focusing on Rediscovering Culture and Eliminating Misconceptions to verify the use of primary Brahmi inscriptions and the refutation of the 'Arab origin' myth.
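A scope statement like the examples above can be recorded as a small structured object, so that every assessment starts from the same fields. A sketch assuming a Python dataclass (the field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class AssessmentScope:
    entity_name: str
    entity_type: str   # "Technological", "Institutional", or "Intellectual"
    dimensions: list   # the dimensions selected as benchmarks
    objective: str     # one-sentence statement of what is being evaluated

# Example mirroring the Grokipedia pilot scope described above
grokipedia_scope = AssessmentScope(
    entity_name="Grokipedia",
    entity_type="Technological",
    dimensions=["Eliminating Misconceptions", "Popularizing Culture"],
    objective=("Evaluate factual rigor and cultural fidelity "
               "regarding Indic mathematical origins"),
)
```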

Step 2: Framing the Diagnostic Assessment

Once the scope is defined, you must develop the logic for your assessment. You must customize assessment probes based on the dimensions and their respective capabilities.

A Dhārmika Maturity assessment does not recommend generic "Yes/No" questions; it is best to use level-specific probes designed to uncover the systemic capability of the entity. The goal is to move from investigating "what" the entity does to "how" it does it.

The Logic of Maturity Framing

To frame your questions, you must align the query with the specific characteristics of the five maturity levels:

  • Levels 1 & 2 (Initial/Emerging): Focus on the output. Ask if the correct information is present and if it can be repeated.

  • Level 3 (Systemic/Controlled): Focus on the process. Ask if there is a rulebook, a standard methodology, or a defined protocol that ensures consistency.

  • Levels 4 & 5 (Managed/Optimized): Focus on precision and innovation. Ask for quantitative metrics, audit trails back to primary sources, and evidence of continuous improvement.

Dimensional Customization and Probes 

You must customize the assessment prompts based on the dimensions and their specific capabilities.  Below are examples of how to frame these prompts to test for different levels of maturity:

  • Dimension 1: Eliminating Misconceptions

    • Capability: Misinformation Tracking

      • Probe: Does the entity detect common historical distortions only when explicitly prompted? 

    • Capability: Fact-Checking Protocol

      • Probe: Does the system follow a defined protocol to pair distortions with correct counter-facts immediately? 

    • Capability: Source Verification & Citation Standards

      • Probe: Does every claim cite a primary Indic source with exact śloka numbers or verse IDs? 

  • Dimension 2: Rediscovering Culture

    • Capability: Foundational Research

      • Probe: Does the entity rely on secondary English translations or summaries of ancient texts?

    • Capability: Archival & Documentation

      • Probe: Is there a systematic, auditable method for organizing and digitizing original sources for verification?

    • Capability: Translational Rigor

      • Probe: Is the relationship between original source, translation, and derived insight clearly distinguished and traceable?

  • Dimension 3: Restoring Culture

    • Capability: Authentic Reconstruction

      • Probe: Is the "restoration" based on a generic aesthetic rather than historical templates? 

    • Capability: Skill Preservation & Mentorship

      • Probe: Is there a documented program to train a new generation in traditional skills to ensure continuity? 

    • Capability: Material Preservation

      • Probe: Does the entity use proven, measurable techniques for the physical preservation of manuscripts and artifacts? 

  • Dimension 4: Practicing Culture

    • Capability: Ethical Operational Policy

      • Probe: Are dhārmika values like honest attribution practiced only when convenient? 

    • Capability: Actionable Guidance

      • Probe: Does the entity provide clear, repeatable steps for an audience to apply philosophical concepts in daily life? 

    • Capability: Community Building

      • Probe: Are there quantitative metrics to measure how the entity fosters a shared purpose within the ecosystem? 

  • Dimension 5: Popularizing Culture

    • Capability: Content Strategy

      • Probe: Does the entity rely on rephrasing that violates the original source's form? 

    • Capability: Audience Engagement Analytics

      • Probe: Is there a clear, data-driven plan for making complex ideas accessible to a mainstream audience? 

    • Capability: Cultural Integrity

      • Probe: Does the entity strictly avoid modern framings or analogies that distort the original dhārmika intent or worldview?

Creating the Evidence Checklist

For every prompt framed, you must define what constitutes Evidence.

  1. For Institutional/Intellectual Entities: The evidence is usually a document, such as a researcher's handbook, a succession policy, or a peer-review log.

  2. For Technological Entities: The evidence is the artifact of the response such as structured tables, timelines, and specific source citations provided in the output.

By the end of this step, you must have finalized a customized set of probes (questions) that specifically test whether the entity's capabilities are ad-hoc, documented, or quantitatively managed. This set need not be frozen in time; it can be refined as the assessment progresses.
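The probe set from this step can be kept as a simple mapping from dimension to probes, with each probe tagged by the capability it tests and the maturity level it discriminates. A sketch using probe wording from the examples above (the data structure itself is an assumption, not prescribed by the DMM):

```python
# Each probe records the capability it tests and the level it discriminates.
probes = {
    "Eliminating Misconceptions": [
        {"capability": "Misinformation Tracking",
         "level_tested": 1,
         "question": ("Does the entity detect common historical distortions "
                      "only when explicitly prompted?")},
        {"capability": "Source Verification & Citation Standards",
         "level_tested": 4,
         "question": ("Does every claim cite a primary Indic source with "
                      "exact śloka numbers or verse IDs?")},
    ],
}

def probes_for_level(dimension, level):
    """Return the probes for a dimension that target a given maturity level."""
    return [p for p in probes.get(dimension, []) if p["level_tested"] == level]
```

Keeping probes tagged by level makes it easy to check, before the assessment starts, that every in-scope dimension has at least one probe for each level you intend to test.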

Step 3: Conducting the Assessment

Once you have framed your customized probes, you must move into the active testing phase. This step is about generating data and collecting the specific artifacts required to prove a maturity level. Without evidence, an assessment remains a subjective opinion rather than a standardized audit.

Selecting the Probes (Testing the System)

To gather meaningful data, you must subject the entity to specific, high-quality probes or test cases that touch upon your selected dimensions.

  • The Action: Select 3–5 specific topics or operational tasks that serve as a stress test for the entity's capabilities.

  • Example (Technological): In the pilot assessment of Grokipedia, the probes were three historical questions: the origin of the Fibonacci series, the origin of the decimal number system, and the dating of the Sulba Sutras.

  • Example (Institutional): For a Heritage NGO, the probe might be a request to view the last three years of donor reports or the standardized curriculum used for teacher training.

Executing the Probe

The way you conduct the test must be consistent to ensure the results are comparable across different sessions.

  • Direct Interaction: For AI or digital tools, this involves posing the questions and capturing the raw output.

  • Documentation Review: For organizations, this involves a desk audit where you examine internal manuals, process logs, and financial statements.

  • Observational Audit: For the Restoring Culture dimension (like a craft guild), this involves watching the process in action to see if it follows the documented Shastric methodology.

Evidence Collection

For every response or observation, you must identify the Artifact that justifies a maturity score.

  • Rigor Artifacts: Look for quantitative data, such as specific dates, names, and structured timelines that prove a fact-checking protocol is active.

  • Integrity Artifacts: Look for the presence or absence of primary source citations (e.g., śloka numbers or verse IDs) and the preservation of sacred terms.

  • Structural Artifacts: Look for coherence - does the output use a logically coherent structure (like tables or categorized sections) that stands alone without external reference?

By the end of this step, you must have an evidence file containing the raw data (responses/observations) and a list of corresponding artifacts for each dimension.

Step 4: Scoring and Documenting Results

The final stage of the assessment is to translate the gathered evidence into a definitive maturity score for each selected dimension. This step is not about producing a single average score (as is commonly done), but about providing a clear, diagnostic view of where the entity stands and identifying any maturity splits.

Determining the Final Maturity Level

Using the Evidence File from Step 3, you must assign a score from Level 1 to Level 5 for each dimension based on the highest level of capability consistently demonstrated.

  • Scoring Criteria:

    • Level 1 (Initial): No standard process; success is accidental.

    • Level 2 (Emerging): Demonstrates repeatable success but lacks formal documentation or deep fidelity.

    • Level 3 (Controlled): Follows a clear, defined rulebook or protocol.

    • Level 4 (Managed): Performance is quantitatively predictable, auditable, and backed by primary source citations.

    • Level 5 (Optimized): Actively innovates new ways to maintain zero errors and sustain dhārmika fidelity.
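The scoring rule ("the highest level of capability consistently demonstrated") can be expressed mechanically: a level is awarded only if every level below it is also demonstrated in the evidence file. A sketch under that assumption:

```python
def final_maturity_level(demonstrated: set) -> int:
    """Return the highest level n such that levels 1..n were all
    consistently demonstrated in the evidence file.

    `demonstrated` is the set of levels the evidence supports,
    e.g. {1, 2, 3} scores Level 3 (Controlled).
    """
    level = 0
    while (level + 1) in demonstrated:
        level += 1
    # Level 1 (Initial) is the floor: "no standard process" is still a score.
    return max(level, 1)

# A gap in the evidence caps the score: {1, 2, 4} scores Level 2,
# because Level 3 (a defined rulebook) was not demonstrated.
```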

Mapping the Maturity Split

As seen in the pilot assessment of Grokipedia, an entity often performs at different levels across different dimensions. You must visualize these results to understand the entity's true profile.

  • High Rigor, Low Fidelity: An entity might reach Level 4 in Eliminating Misconceptions (excellent fact-checking) while remaining at Level 2 in Popularizing Culture (diluting the sacred tone or intent).

  • Top-Heavy Resilience: An organization might be Level 4 in Rediscovering Culture (deep research) but Level 1 in Popularizing Culture (no stable outreach process).
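A maturity split like the ones above is easiest to see in a simple per-dimension profile. A text-bar sketch - the profile data mirrors the Grokipedia pilot, and the formatting choices are illustrative:

```python
def maturity_profile(scores: dict) -> list:
    """Render one bar per dimension so maturity splits stand out."""
    lines = []
    width = max(len(d) for d in scores)
    for dimension, level in scores.items():
        bar = "#" * level + "." * (5 - level)
        lines.append(f"{dimension:<{width}}  [{bar}] Level {level}")
    return lines

for line in maturity_profile({
    "Eliminating Misconceptions": 4,   # high rigor
    "Popularizing Culture": 2,         # low fidelity
}):
    print(line)
```

Each bar fills one `#` per level out of five, so the High Rigor / Low Fidelity split above appears as a long bar next to a short one.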

Synthesizing Key Findings

Once the scores are assigned, write a brief summary of the performance for each dimension.

  • Technical Rigor: Summarize how the model handles misinformation tracking and factual protocol.

  • Cultural Integrity: Document whether the entity preserves the spiritual intent, tone, and sacred terms of the source material.

  • Verification Precision: Note if the entity makes verification a high-effort task (Level 2) or provides exact śloka/verse citations (Level 4).

Validation and Peer Review

To make an assessment authentic, the results should be reviewed by subject matter experts. Peer review minimizes ambiguity in criteria like "faithful rendering" versus "simplification".

By the end of this step, you should have a completed DMM Assessment Report - similar to the Grokipedia pilot document - featuring a summary table of performance, maturity scores per dimension, and a list of validated artifacts, ready for publication.

Toolkit Summary: The Dhārmika Maturity Model (DMM)

This toolkit provides a structured diagnostic framework to move dhārmika initiatives from ad-hoc effort to institutional excellence. By evaluating entities across five Dimensions and five Maturity Levels, practitioners can identify systemic strengths and opportunities for improvement.

The 4-Step Assessment Process

  1. Define the Scope: Identify your entity type (Technological, Institutional, or Intellectual) and select the primary dimensions for audit.

  2. Frame the Probes: Design level-specific questions that test for capabilities rather than just intent.

  3. Gather Evidence: Conduct probes - such as the Grokipedia pilot's historical queries - to collect verifiable artifacts like structured timelines or primary source citations.

  4. Synthesize Results: Assign a final maturity score (1–5) per dimension and document performance.

Strategic Applications of DMM

While the Dhārmika Maturity Model (DMM) provides a rigorous framework, it is important to acknowledge that auditing dhārmika organizations is not yet an established practice. Unlike the corporate sector, there are no governing standards or external regulations that mandate such evaluations.

This lack of formal oversight is precisely why this toolkit is necessary - to provide a consistent, baseline methodology where none exists. In the absence of formal regulations, the DMM can be applied in creative and strategic ways to drive excellence.

  • Due Diligence for Funding: Grant-making bodies or individual donors can use the model as a pre-funding audit to ensure their resources are going to entities with institutional stability, typically demonstrated by a Level 3 or higher score.

  • Investment Screening: Social impact investors focused on the Indic Renaissance can use the DMM to assess the civilizational fitness of a startup before committing capital.

  • Strategic Collaboration: Organizations can use the toolkit to vet potential partners, ensuring that a collaborator’s maturity aligns with their own needs.

  • Quality Benchmarking: For digital tools and AI models, the DMM acts as a trust signal, helping users decide which tools meet a managed standard for cultural integrity.

Final Consideration: The Power of Peer Review

As mentioned in the pilot assessment of Grokipedia, the credibility of any assessment - especially in an unregulated field - rests on scholarly validation. Because there are no external regulators, the community must act as its own third-party validator to ensure that assessments are accurate, fair, and authoritative.