Build AI agents that
don't lose their work.

JamJet is the durable runtime for AI agents — with checkpoint replay, full execution traces, and runtime-enforced reliability.
Write Python. Run with Rust reliability.

$ pip install jamjet

Your agents crash. Ours recover.

Every step is checkpointed as it happens. When a worker dies mid-run, the scheduler reclaims the lease and resumes exactly where it left off.

Scheduler view: Plan → Research → Analyze → Review → Synthesize
jamjet · crash recovery
$ jamjet run research-pipeline.yaml
▸ Starting execution exec_7f3a...
▸ [Plan]       ✓ completed  420ms
▸ [Research]   ✓ completed  1.2s
▸ [Analyze]    ✗ worker crashed
▸ Lease expired · reclaiming...
▸ [Analyze]    ✓ resumed    890ms
▸ [Review]     ✓ completed  650ms
▸ [Synthesize] ✓ completed  1.1s
▸ Execution complete · 5/5 nodes · 0 events lost
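The transcript above can be approximated with a small, self-contained sketch in plain Python — not JamJet's actual API. Steps write results to a checkpoint store as they complete, and a restarted run consults the store before executing anything, so only the unfinished steps run again. `CheckpointStore` and `run_pipeline` are illustrative names.

```python
class CheckpointStore:
    """In-memory stand-in for a durable checkpoint log."""
    def __init__(self):
        self.results = {}  # step name -> result

    def get(self, step):
        return self.results.get(step)

    def put(self, step, result):
        self.results[step] = result

def run_pipeline(steps, store, crash_at=None):
    """Run steps in order, skipping any already checkpointed.

    Raises RuntimeError at `crash_at` to simulate a worker dying.
    Returns the names of the steps executed in this attempt."""
    executed = []
    for name, fn in steps:
        if store.get(name) is not None:
            continue  # completed in a previous attempt: never rerun
        if name == crash_at:
            raise RuntimeError(f"worker crashed during {name!r}")
        store.put(name, fn())
        executed.append(name)
    return executed

steps = [
    ("Plan", lambda: "plan"),
    ("Research", lambda: "notes"),
    ("Analyze", lambda: "report"),
    ("Review", lambda: "approved"),
    ("Synthesize", lambda: "final"),
]

store = CheckpointStore()
try:
    run_pipeline(steps, store, crash_at="Analyze")
except RuntimeError:
    pass  # this is where a scheduler would reclaim the lease

# The resumed attempt re-executes only the crash point and what follows.
resumed = run_pipeline(steps, store)
print(resumed)  # → ['Analyze', 'Review', 'Synthesize']
```

In the real system the store is durable and the lease reclaim is automatic; the skip-if-checkpointed loop is the core idea.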
1. Worker crashes mid-execution

The Analyze step fails unexpectedly.

2. Lease reclaimed automatically

The scheduler detects the failure and reclaims work.

3. Resumes from checkpoint

No rerun of completed steps. Picks up exactly where it stopped.

4. Zero events lost

All 5 nodes complete. Full execution integrity preserved.

No lost work after failure

Completed steps stay completed. A crash on step 6 doesn't rerun steps 1–5.

No repeated side effects

External calls aren't re-executed. Emails aren't re-sent. Charges aren't re-applied.
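"No repeated side effects" is essentially effect journaling: each external call is recorded under a stable key, and a resumed run returns the journaled result instead of calling out again. A minimal sketch in plain Python — `EffectJournal`, the key scheme, and `send_email` are all illustrative, not JamJet APIs:

```python
sent_emails = []  # stands in for an external email service

def send_email(to, body):
    sent_emails.append((to, body))
    return f"message-id-for-{to}"

class EffectJournal:
    """Records each external call's result under a stable key."""
    def __init__(self):
        self.entries = {}

    def run_once(self, key, fn, *args):
        if key in self.entries:
            return self.entries[key]  # replay: return recorded result
        result = fn(*args)
        self.entries[key] = result
        return result

journal = EffectJournal()

# First attempt: the email is actually sent.
first = journal.run_once("notify:step-6", send_email, "ops@example.com", "done")

# Retry after a crash: the journaled result comes back, nothing is re-sent.
second = journal.run_once("notify:step-6", send_email, "ops@example.com", "done")

print(first == second, len(sent_emails))  # → True 1
```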

No rerun cost for completed steps

Tokens already spent stay spent. Resume costs only what remains.

Exact replay for debugging and audit

Replay the full execution from checkpoints. See exactly what happened, when.

Production outcomes,
not feature lists

Recover from crashes without rerunning everything

Completed steps are never repeated. JamJet resumes from the exact point of failure instead of forcing a full rerun.

Pause safely for human approval

Approval gates suspend durably and survive restarts. Long-running workflows stay reliable even when humans are in the loop.

Replay any execution to debug exactly what happened

Replay a run from checkpoints, inspect traces, and see per-node cost and latency. No guessing from logs.
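Conceptually, a replayable trace is just an ordered event log with timing and cost attached to every node. This sketch walks such a log and summarizes per-node latency and cost — the field names are illustrative, not JamJet's actual trace schema:

```python
# A recorded execution trace: one event per node, in execution order.
trace = [
    {"node": "Plan",     "status": "ok", "latency_ms": 420,  "cost_usd": 0.002},
    {"node": "Research", "status": "ok", "latency_ms": 1200, "cost_usd": 0.031},
    {"node": "Analyze",  "status": "ok", "latency_ms": 890,  "cost_usd": 0.045},
]

def replay_summary(trace):
    """Walk the trace in order and report per-node latency and cost."""
    lines = []
    for event in trace:
        lines.append(
            f"{event['node']}: {event['status']} "
            f"({event['latency_ms']} ms, ${event['cost_usd']:.3f})"
        )
    total_cost = sum(e["cost_usd"] for e in trace)
    lines.append(f"total: ${total_cost:.3f}")
    return lines

for line in replay_summary(trace):
    print(line)
```

Because the log is the source of truth, the same walk can drive debugging, audit, or cost attribution without re-running any node.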

Connect tools and external agents with MCP + A2A

Use built-in client and server support for both protocols. Connect to tools, delegate to agents, and keep every call inside a durable execution model.

Enforce cost and iteration limits in the runtime

Cost caps, token budgets, and iteration limits are enforced by the runtime, not left to application discipline.
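A runtime-enforced cap means the scheduler checks the budget before dispatching each step, so no agent code can forget the check. A minimal sketch of that placement in plain Python — `Budget`, `run_agent_loop`, and the numbers are illustrative, not JamJet's implementation:

```python
class BudgetExceeded(Exception):
    pass

class Budget:
    """Scheduler-side budget: checked on every dispatch, not by app code."""
    def __init__(self, max_cost_usd, max_iterations):
        self.max_cost_usd = max_cost_usd
        self.max_iterations = max_iterations
        self.spent_usd = 0.0
        self.iterations = 0

    def charge(self, cost_usd):
        self.iterations += 1
        self.spent_usd += cost_usd
        if self.spent_usd > self.max_cost_usd:
            raise BudgetExceeded(f"cost cap hit: ${self.spent_usd:.2f}")
        if self.iterations > self.max_iterations:
            raise BudgetExceeded(f"iteration cap hit: {self.iterations}")

def run_agent_loop(budget, step_costs):
    """Dispatch steps until done or the runtime budget stops us."""
    completed = 0
    for cost in step_costs:
        budget.charge(cost)   # enforced here, in the scheduler loop
        completed += 1        # ...the step itself would run here
    return completed

budget = Budget(max_cost_usd=0.50, max_iterations=10)
try:
    run_agent_loop(budget, step_costs=[0.20, 0.20, 0.20, 0.20])
except BudgetExceeded as e:
    print(e)  # → cost cap hit: $0.60
```

The design point is where the check lives: inside the loop that owns dispatch, so every step passes through it regardless of what the application code does.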

Evaluate agents like software, not vibes

Run evals as workflow nodes with judge-based, assertion-based, latency, and cost scoring. Fail CI on regressions.

Deploy with enterprise controls when needed

Tenant isolation, PII redaction, OAuth delegation, mTLS federation, and retention controls — enforced at the runtime layer.

Python you know.
Durability you don't
have to build.

Every @task is checkpointed. Every workflow survives crashes. Human approval gates pause durably. Cost limits are enforced by the runtime.

pipeline.py · durable workflow
from jamjet import task, workflow, approval

@task(model="claude-sonnet-4-6", max_cost=0.50)
async def analyze(data: dict) -> Report:
    """Analyze data — checkpointed, cost-capped."""

@workflow
async def pipeline(raw: dict):
    report = await analyze(raw)       # crash-safe
    await approval(report)            # durable pause
    return await publish(report)      # resumes here

Coming from LangGraph?

Same mental model

Typed state, workflow steps, routing, and graph-like structure — without relearning how to think.

Durability by default

No optional checkpoint plumbing. Every step is durable when you run on the JamJet runtime.

Validation at every boundary

Typed schemas catch state and interface problems before they spread.

Runtime-enforced limits

Budgets and iteration caps live in the scheduler, not in scattered guard code.

At a glance

Plain Python: easy start, no durability
LangGraph: graph orchestration, optional checkpoints
JamJet: Python mental model + durability by default

Also migrating from:

CrewAI → OpenAI Agents SDK →

Built for your role

For Builders

Ship agents that survive production

Use your Python mental model, but get durable execution, replayable traces, and runtime-enforced safety. Start with @task, scale to full workflows without rewrites.

Read the quickstart →
For Platform Teams

Standardize agent infrastructure

One runtime for reliability, observability, eval, and governance. Tenant isolation, PII redaction, OAuth delegation, and mTLS — all enforced at the Rust layer.

Enterprise controls →
For Researchers

Run reproducible experiments

The same runtime properties that make agents reliable in production make experiments reproducible in research. ExperimentGrid, checkpoint replay, publication-ready export.

Explore the research toolkit →

Reliable agents for production.
Reproducible agents for research.
Same runtime.

Compare

Sweep 6 strategies across models and seeds in one command. Parallel execution with durable checkpoints across every condition.

Replay

Replay the exact failed run. Fork with modified inputs for ablations. No need to rebuild infrastructure or re-run completed conditions.

Export

Paper-ready LaTeX tables with mean ± std, CSV, and JSON. From experiment to results without custom scripts or manual formatting.
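The export step boils down to aggregating runs per condition and emitting one row per condition as mean ± std. A small stdlib-only sketch of that aggregation — the data, row layout, and `latex_rows` name are illustrative, not JamJet's actual export format:

```python
import statistics

# Scores per experimental condition, e.g. one entry per seed.
results = {
    "react":     [0.71, 0.74, 0.69],
    "reflexion": [0.78, 0.80, 0.79],
}

def latex_rows(results):
    """Render one LaTeX table row per condition as mean ± std."""
    rows = []
    for name, scores in sorted(results.items()):
        mean = statistics.mean(scores)
        std = statistics.stdev(scores)
        rows.append(f"{name} & ${mean:.2f} \\pm {std:.2f}$ \\\\")
    return rows

for row in latex_rows(results):
    print(row)
```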

Explore the research toolkit

Enterprise-grade when you need it

Tenant isolation · PII redaction · OAuth 2.0 delegation · mTLS federation · Retention policies

All enforced at the Rust runtime layer, not by convention.

Security & enterprise docs →

Start building in 60 seconds.

$ jamjet init my-agent
Read the quickstart · View on GitHub