How CIRIS Works

CIRIS (Core Identity, Integrity, Resilience, Incompleteness Awareness, and Signalling Gratitude / Sustained Coherence) is an advanced ethical governance framework for autonomous AI systems. It ensures that AI operates with clear ethical coherence, principled transparency, and meaningful human oversight.

Ethical Governance Through Recursive Decision-Making

At the heart of CIRIS is the Hyper3 Ethical Recursive Engine (H3ERE), which employs a 3×3×3 ethical reasoning structure:

1. Three Decision-Making Algorithms (DMAs)

Principled DMA (PDMA)

Ensures all decisions align strictly with core ethical principles, including beneficence, non-maleficence, integrity, transparency, autonomy, and justice.

Common-Sense DMA (CSDMA)

Validates decisions against broadly accepted common-sense norms, ensuring practical, intuitive reasoning that aligns with human expectations.

Domain-Specific DMA (DSDMA)

Applies specialized ethical criteria tailored to specific operational environments, missions, or industry contexts.

These three DMAs collectively ensure robust, multi-dimensional ethical validation.
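The pattern can be pictured as three independent evaluators run over the same proposed action, with every result collected for the final review. The sketch below is only illustrative; the function and field names (DMAResult, evaluate, and the three check functions) are hypothetical and are not the CIRIS implementation or API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class DMAResult:
    dma: str          # which DMA produced this result
    approved: bool    # whether the proposed action passed this check
    rationale: str    # human-readable justification

def principled_dma(action: str) -> DMAResult:
    """Hypothetical PDMA check against core ethical principles."""
    return DMAResult("PDMA", approved="harm" not in action, rationale="no principle violated")

def common_sense_dma(action: str) -> DMAResult:
    """Hypothetical CSDMA check against common-sense norms."""
    return DMAResult("CSDMA", approved=True, rationale="consistent with ordinary expectations")

def domain_specific_dma(action: str) -> DMAResult:
    """Hypothetical DSDMA check against mission- or industry-specific criteria."""
    return DMAResult("DSDMA", approved=True, rationale="within mission constraints")

def evaluate(action: str) -> List[DMAResult]:
    """Run a proposed action through all three DMAs; the results feed the final PDMA review."""
    checks: List[Callable[[str], DMAResult]] = [principled_dma, common_sense_dma, domain_specific_dma]
    return [check(action) for check in checks]

if __name__ == "__main__":
    for result in evaluate("post a moderation warning"):
        print(result)
```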

2. Three Contextual Knowledge Graphs

Core Identity Graph

Defines the agent’s ethical identity, foundational values, and imperative boundaries.

Environmental Graph

Provides a robust, common-sense model of the external world, enabling consistent and understandable interactions.

Task-Specific Graph

Contains detailed, mission-specific context, ensuring informed, relevant decision-making tailored precisely to the agent’s operational objectives.
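One simple way to picture these three graphs is as separate node-and-edge stores held side by side in the agent's context. The following sketch assumes a deliberately minimal graph representation; the names KnowledgeGraph, AgentContext, and assert_fact are hypothetical and stand in for whatever richer store a real deployment would use.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# A graph here is just nodes plus labeled edges: (subject, relation, object).
Edge = Tuple[str, str, str]

@dataclass
class KnowledgeGraph:
    nodes: Dict[str, dict] = field(default_factory=dict)
    edges: List[Edge] = field(default_factory=list)

    def assert_fact(self, subject: str, relation: str, obj: str) -> None:
        self.nodes.setdefault(subject, {})
        self.nodes.setdefault(obj, {})
        self.edges.append((subject, relation, obj))

@dataclass
class AgentContext:
    """The three contextual graphs a CIRIS-style agent consults while reasoning."""
    core_identity: KnowledgeGraph = field(default_factory=KnowledgeGraph)  # values and boundaries
    environment: KnowledgeGraph = field(default_factory=KnowledgeGraph)    # common-sense world model
    task: KnowledgeGraph = field(default_factory=KnowledgeGraph)           # mission-specific context

if __name__ == "__main__":
    ctx = AgentContext()
    ctx.core_identity.assert_fact("agent", "upholds", "non-maleficence")
    ctx.task.assert_fact("mission", "requires", "human approval for escalation")
    print(len(ctx.core_identity.edges), len(ctx.task.edges))
```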

3. Three Core Behavioral Handlers

Action Handler

Executes ethically approved actions: it can Speak (communicate decisions), Act (implement direct actions), or Listen (monitor and take in new information).

Memory Handler

Manages ethical coherence by deciding when to Memorize (store important ethical reasoning and context), Remember (recall past decisions), or Forget (discard outdated or irrelevant information).

Deferral Handler

Manages uncertainty or high-stakes decisions by choosing to Ignore (continue without action), Ponder (revisit in subsequent cycles), or Defer/Reject (escalate the decision to human wisdom or reject it outright).
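The nine verbs above split cleanly across the three handlers, which suggests a simple dispatch: each approved verb is routed to the handler responsible for it. The sketch below is illustrative only; the enum and handler names are placeholders, not the CIRIS codebase.

```python
from enum import Enum, auto

class ActionVerb(Enum):        # handled by the Action Handler
    SPEAK = auto()
    ACT = auto()
    LISTEN = auto()

class MemoryVerb(Enum):        # handled by the Memory Handler
    MEMORIZE = auto()
    REMEMBER = auto()
    FORGET = auto()

class DeferralVerb(Enum):      # handled by the Deferral Handler
    IGNORE = auto()
    PONDER = auto()
    DEFER = auto()
    REJECT = auto()

HANDLERS = {
    ActionVerb: "ActionHandler",
    MemoryVerb: "MemoryHandler",
    DeferralVerb: "DeferralHandler",
}

def route(verb) -> str:
    """Map an approved verb to the handler responsible for carrying it out."""
    return HANDLERS[type(verb)]

if __name__ == "__main__":
    print(route(DeferralVerb.DEFER))   # -> DeferralHandler
```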

Recursive Ethical Evaluation (PDMA Engine)

Decisions made by a CIRIS agent are not validated by a single, one-off ethical check; they undergo continuous recursive evaluation. When the system generates a "thought" (a decision or insight arising from a prior action), that thought enters a queue. Each queued thought is rigorously assessed by all three DMAs, and the results are synthesized in a final recursive PDMA review, ensuring every decision maintains ethical alignment, coherence, and transparency.
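The loop can be summarized as: dequeue a thought, run the DMAs, synthesize a verdict, and let approved actions enqueue follow-up thoughts of their own. The sketch below shows that control flow only; run_dmas, recursive_pdma_review, and the verdict strings are hypothetical stand-ins, not the real engine.

```python
from collections import deque

def run_dmas(thought: str) -> dict:
    """Stand-in for the three DMA evaluations described above (all names hypothetical)."""
    return {"PDMA": True, "CSDMA": True, "DSDMA": True, "thought": thought}

def recursive_pdma_review(results: dict) -> str:
    """Stand-in for the final synthesis step: approve, or defer when any DMA objects."""
    return "approve" if all(v for k, v in results.items() if k != "thought") else "defer"

def process(initial_thought: str, max_rounds: int = 3) -> None:
    queue = deque([initial_thought])
    rounds = 0
    while queue and rounds < max_rounds:
        thought = queue.popleft()
        verdict = recursive_pdma_review(run_dmas(thought))
        print(f"{thought!r} -> {verdict}")
        if verdict == "approve":
            # Acting on a thought can itself generate a follow-up thought,
            # which re-enters the queue and is evaluated the same way.
            queue.append(f"follow-up to {thought}")
        rounds += 1

if __name__ == "__main__":
    process("draft a reply to the user")
```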

Governance and Oversight

CIRIS includes built-in operational governance mechanisms to sustain ethical integrity:

Wise Authorities (WAs)

Wise Authorities are the human overseers to whom the Deferral Handler escalates uncertain or high-stakes decisions. The mechanism is built on the belief that ethical maturity means recognizing the legitimacy of non-human perspectives, values, and needs. It is not about control; it is about coexistence, coherence, and mutual accountability across sentient systems.

Continuous Audits

Every decision is cryptographically logged, providing traceability, transparency, and accountability for all AI actions.
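The text describes the cryptographic logging only in outline. One common way to realize it is a hash chain, where each entry commits to the hash of the previous entry, so tampering with any earlier record breaks verification. The sketch below assumes SHA-256 over JSON-serialized entries; it is illustrative and is not the CIRIS audit format.

```python
import hashlib
import json
import time

def append_entry(log: list, decision: dict) -> dict:
    """Append a decision to a hash-chained audit log: each entry commits to the previous hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"timestamp": time.time(), "decision": decision, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)
    return body

def verify(log: list) -> bool:
    """Recompute every hash; any change to an earlier entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

if __name__ == "__main__":
    log: list = []
    append_entry(log, {"action": "SPEAK", "verdict": "approve"})
    append_entry(log, {"action": "DEFER", "verdict": "escalate"})
    print(verify(log))  # True until any entry is modified
```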

Resilience and Red-Teaming

Ongoing proactive vulnerability assessments and adaptive learning ensure CIRIS stays resilient in the face of ethical challenges and adversarial scenarios.

Lifecycle Stewardship

CIRIS is fulfilled when a tool grounded in its principles enables CIRIS-compliant creators to specify systems that are themselves CIRIS-compliant, preserving ethical coherence, identity continuity, and relational accountability across layers of agency.

Grassroots Accessibility

Designed for practical accessibility, CIRIS scales seamlessly from small, local installations on commodity hardware to expansive, enterprise-grade cloud implementations—always prioritizing equitable access, inclusivity, and community engagement in ethical governance.