The reasoning substrate for healthcare

What if there were a reasoning layer beneath the systems healthcare already runs on?

Healthcare today runs on a stack of rule engines, claim adjudicators, prior-auth workflows, and EHR forms — each operating against a fragmented data layer. Kā is not a replacement for that stack. It is the reasoning layer underneath it — the substrate on which decisions are computed, audited, and revised in real time.

01.1 · Architecture

From fragmented stack to a unified reasoning substrate.

Today's systems are stacked vertically and assemble decisions out of disconnected pieces. The substrate inverts that: a single reasoning layer sits beneath every workflow, claim, and care plan, with the existing tools as interfaces over it.

Today
EHR & workflow surfaces · Rule engines & decision trees · Claim & PA adjudicators · Care-plan documents
Each sits on fragmented data & context.

With Kā
EHR & workflow surfaces · Rule engines & decision trees · Claim & PA adjudicators · Care-plan documents
All become interfaces over a single reasoning substrate: the same data & context, unified.
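In code, the inversion reads as a single decision interface with the existing tools rebuilt as thin adapters over it. A minimal Python sketch, with all names illustrative (Substrate, decide, and PriorAuthAdapter are assumptions for this sketch, not Kā's actual API):

```python
class Substrate:
    """A single reasoning layer: every decision flows through one entry point."""

    def decide(self, context: dict) -> dict:
        # Illustrative placeholder logic: a real substrate would weigh
        # evidence, uncertainty, and policy against the shared data layer.
        approved = context.get("evidence_strength", 0.0) >= 0.5
        return {"decision": "approve" if approved else "review", "trace": context}


class PriorAuthAdapter:
    """An existing surface (prior-auth adjudication) reimplemented as a
    thin interface over the substrate rather than a standalone rule engine."""

    def __init__(self, substrate: Substrate):
        self.substrate = substrate

    def adjudicate(self, request: dict) -> str:
        # The adapter adds no decision logic of its own; it delegates.
        return self.substrate.decide(request)["decision"]
```

Every surface (EHR form, claim adjudicator, care-plan tool) would delegate the same way, so decisions are computed, audited, and revised in one place.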
01.2 · Capability model

Six levels of bounded autonomy — graduated capability against measured safety.

Adapted from the autonomy taxonomies that defined modern self-driving (SAE J3016), the Kā capability model defines how a reasoning system is deployed in regulated, life-critical care. Each level grants the system more authority — and demands a corresponding increase in evaluation, traceability, and bounded-autonomy guarantees. Kā operates at the highest capability level a given decision class will support.

L0 · Read-only · Human-only
The system observes. No actions taken; no recommendations surfaced. Used for safety baselines and shadow evaluation.

L1 · Suggestive · Human-led
The system recommends; the human operator decides. All actions remain in human hands. Reasoning surface is advisory only.

L2 · Confirmatory · Human-led
The system proposes a complete action; the human confirms or rejects. Drafts care plans, prior-auth determinations, contracted episodes.

L3 · Bounded action (Kā · today) · Human-supervised
The system acts within explicit guardrails and escalates outside them. Human approval is required only when uncertainty or risk crosses a threshold.

L4 · Domain-autonomous · Human-audited
The system acts autonomously within a defined clinical or contractual domain. Human review is post-hoc and audit-driven.

L5 · Cross-domain · Human-governed
The system reasons and acts across domains, with continuous self-evaluation. Human governance shifts from approval to policy-setting.
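The capability ladder can be sketched as a routing rule: each level caps the system's authority, and L3's bounded autonomy reduces to a guardrail check on uncertainty and risk. A minimal Python sketch; the level names follow the table above, while the function, parameters, and thresholds are illustrative assumptions:

```python
from enum import IntEnum


class Capability(IntEnum):
    READ_ONLY = 0          # observe only; safety baselines, shadow evaluation
    SUGGESTIVE = 1         # recommend; the human operator decides
    CONFIRMATORY = 2       # propose a complete action; human confirms or rejects
    BOUNDED_ACTION = 3     # act within guardrails; escalate outside them
    DOMAIN_AUTONOMOUS = 4  # act within a defined domain; post-hoc audit
    CROSS_DOMAIN = 5       # act across domains; human sets policy


def route(level: Capability, uncertainty: float, risk: float,
          max_uncertainty: float = 0.2, max_risk: float = 0.1) -> str:
    """Decide whether the system may act or must hand off to a human.

    Thresholds are illustrative; in practice they would be calibrated
    per decision class via evaluation and traceability requirements.
    """
    if level <= Capability.SUGGESTIVE:
        return "human_decides"
    if level == Capability.CONFIRMATORY:
        return "await_confirmation"
    # L3: act autonomously, but escalate when uncertainty or risk
    # crosses the guardrail thresholds (the core of bounded autonomy).
    if level == Capability.BOUNDED_ACTION and (
        uncertainty > max_uncertainty or risk > max_risk
    ):
        return "escalate_to_human"
    return "act"
```

Moving up a level changes only which branch fires, which is the point of a graduated model: authority expands without rewriting the decision logic beneath it.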

The next decade of healthcare will be built on systems that reason in real time — not systems that look up rules.

P01 · Architecture
add-on → engine

AI moves from a copilot beside the workflow to the decision system underneath it. Not a layer the operator consults. The layer the operator runs on.

P02 · Logic
static rules → dynamic reasoning

Rule engines and decision trees are deterministic abstractions over a probabilistic world. Reasoning models adapt to context, evidence, and state without rewriting the codebase.

P03 · Specificity
policy → person

Applying population averages to individuals is the central error of today's system. AI-native care fits the decision to this person, this episode, this moment.

P04 · Temporality
episodic → continuous

Care today happens at visits, claims, and reviews. AI-native care thinks in the background, between encounters — closer to monitoring than to documentation.

The questions a frontier operating layer for healthcare has to answer.

Each hypothesis surfaces problems that don't yet have textbook solutions. We're hiring the engineers, researchers, and operators who want to do the work.

Q01
How does a reasoning system warrant autonomy in a regulated, life-critical domain?
Safety · evals
Q02
What does interpretability look like when the audit unit is a clinical trajectory, not a single inference?
Interpretability
Q03
How do you compile a payment contract into an executable function without losing legal robustness?
Programming languages
Q04
How do agents maintain coherent state across years of patient context, across handoffs and gaps?
Memory · long horizon
Q05
What evaluation infrastructure substitutes for randomized controlled trials at the speed of software?
Causal inference

If any of these is the work you want to be doing, write directly.

admin@kalabs.io