The Trust Crisis
A movement toward transparent, deterministic, and auditable AI workflows.
You're a developer. Your agent fails in production.
What do you get? A 100-line stack trace. No state. No inputs. No visibility.
This is the Black Box Tax.
The silent penalty paid when systems hide the truth.
The Cost of Obscurity
| Feature | "Black Box" Frameworks | Lár (The Glass Box) |
|---|---|---|
| Debugging | Guesswork. 100-line stack traces from inside a "magic" executor. | Precise. See the exact node, state, and error that caused the failure. |
| Auditability | Paid add-on. Requires external tools to trace execution. | Built-in. The "Flight Log" is the core output of the engine. |
| Control | Chaotic. Agents "chat" to pass data. Order is unpredictable. | Deterministic. You define the assembly line. Data flow is explicit. |
| Security | Cloud-Locked. Your data leaves the perimeter to hit their servers. | Zero Telemetry. The only egress is to your LLM provider. |
The Flight Log
Lár produces a Flight Log for every run. It's not just a debug tool; it's an audit trail of every decision your agent made.
{
"steps": [
{
"step": 0,
"node": "LLMNode",
"state_before": {
"task": "What is the Lár Framework used for?"
},
"state_diff": {
"added": {
"category": "GENERAL"
},
"removed": {},
"modified": {}
},
"run_metadata": {
"prompt_tokens": 45,
"output_tokens": 449,
"total_tokens": 494,
"model": "gemini/gemini-2.5-pro"
},
"outcome": "success"
},
{
"step": 1,
"node": "LLMNode",
"state_before": {
"task": "What is the Lár Framework used for?",
"category": "GENERAL",
"__last_run_metadata": null
},
"state_diff": {
"added": {
"search_query": "Lár Framework use cases"
},
"removed": {
"__last_run_metadata": null
},
"modified": {}
},
"run_metadata": {
"prompt_tokens": 83,
"output_tokens": 1717,
"total_tokens": 1800,
"model": "gemini/gemini-2.5-pro"
},
"outcome": "success"
},
{
"step": 2,
"node": "ToolNode",
"state_before": {
"task": "What is the Lár Framework used for?",
"category": "GENERAL",
"__last_run_metadata": null,
"search_query": "Lár Framework use cases"
},
"state_diff": {
"added": {
"retrieved_context": "=== Lár Engine Knowledge Base ===\n\nProduct: lar-engine (The Open-Source Framework)\n\nTopic: General Questions\n\nQuestion: What is Lár?\n\nAnswer: Lár (Irish for \"core\") is our open-source, \"glass box\" agentic framework. It is the \"PyTorch for Agents.\" It's a simple, \"dumb\" engine that lets you build, run, and audit complex AI agents one step at a time.\n\nQuestion: What is the \"Glass Box\" philosophy?\n\nAnswer: \"Glass Box\" means 100% auditability. Our lar engine's core output is a step-by-step log of every state change. Unlike \"black box\" frameworks that hide their logic, lar lets you see exactly why your agent failed, which node was responsible, and what data it was processing.\n\nQuestion: How do I get support?\n\nAnswer: Please check our GitHub repositories.\n\nTopic: Licensing\n\nQuestion: How much does lar cost?\n\nAnswer: The lar-engine is, and always will be, 100% free and open-source under an Apache 2.0 license."
},
"removed": {
"__last_run_metadata": null
},
"modified": {}
},
"run_metadata": null,
"outcome": "success"
},
{
"step": 3,
"node": "RouterNode",
"state_before": {
"task": "What is the Lár Framework used for?",
"category": "GENERAL",
"__last_run_metadata": null,
"search_query": "Lár Framework use cases",
"retrieved_context": "..."
},
"state_diff": {
"added": {},
"removed": {
"__last_run_metadata": null
},
"modified": {}
},
"run_metadata": null,
"outcome": "success"
},
{
"step": 4,
"node": "LLMNode",
"state_before": {
"task": "What is the Lár Framework used for?",
"category": "GENERAL",
"__last_run_metadata": null,
"search_query": "Lár Framework use cases",
"retrieved_context": "..."
},
"state_diff": {
"added": {
"agent_answer": "Lár (Irish for \"core\") is our open-source, \"glass box\" agentic framework. It is the \"PyTorch for Agents.\" It's a simple, \"dumb\" engine that lets you build, run, and audit complex AI agents one step at a time."
},
"removed": {
"__last_run_metadata": null
},
"modified": {}
},
"run_metadata": {
"prompt_tokens": 689,
"output_tokens": 761,
"total_tokens": 1450,
"model": "gemini/gemini-2.5-pro"
},
"outcome": "success"
},
{
"step": 5,
"node": "AddValueNode",
"state_before": {
"task": "What is the Lár Framework used for?",
"category": "GENERAL",
"__last_run_metadata": null,
"search_query": "Lár Framework use cases",
"retrieved_context": "...",
"agent_answer": "Lár (Irish for \"core\") is our open-source, \"glass box\" agentic framework. It is the \"PyTorch for Agents.\" It's a simple, \"dumb\" engine that lets you build, run, and audit complex AI agents one step at a time."
},
"state_diff": {
"added": {
"final_response": "Lár (Irish for \"core\") is our open-source, \"glass box\" agentic framework. It is the \"PyTorch for Agents.\" It's a simple, \"dumb\" engine that lets you build, run, and audit complex AI agents one step at a time."
},
"removed": {
"__last_run_metadata": null
},
"modified": {}
},
"run_metadata": null,
"outcome": "success"
}
],
"summary": {
"total_steps": 6,
"total_prompt_tokens": 817,
"total_completion_tokens": 2927,
"total_tokens": 3744
}
}
Lár eliminates the Black Box Tax entirely.
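Because the Flight Log is plain JSON, you can interrogate it with nothing but the standard library. A minimal sketch, assuming the log above has been saved as `flight_log.json` (the filename is illustrative; the field names follow the example output, not a fixed API):

```python
import json

# Load the Flight Log produced by a run (filename is illustrative).
with open("flight_log.json") as f:
    log = json.load(f)

# Walk every step: which node ran, what it changed, and what it cost.
for step in log["steps"]:
    added = list(step["state_diff"]["added"])
    tokens = (step["run_metadata"] or {}).get("total_tokens", 0)
    print(f"step {step['step']:>2} {step['node']:<14} "
          f"added={added} tokens={tokens} outcome={step['outcome']}")

# Find the first failing step, if any -- no stack-trace archaeology required.
failed = next((s for s in log["steps"] if s["outcome"] != "success"), None)
if failed:
    print("First failure at step", failed["step"], "in", failed["node"])
    print("State going in:", json.dumps(failed["state_before"], indent=2))
```

The same loop works for cost reporting, diffing two runs against each other, or feeding the log into whatever audit pipeline you already have.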
Built in the Trenches
A Technical Note: "Is this just a wrapper?" No. Most platforms wrap an API and call the result an "agent." Lár is a deterministic engine built from scratch. It uses LiteLLM for universal model support (100+ providers), but the execution graph is pure, debuggable Python designed for total state observability.
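As a rough illustration of what "pure Python plus LiteLLM" means in practice, here is a hedged sketch of a state-in, update-out step. The `classify_task` function and its prompt are invented for this example, and it is not Lár's actual node API; only `litellm.completion` is a real library call, and it assumes your provider credentials are already configured:

```python
import litellm

def classify_task(state: dict) -> dict:
    """One 'glass box' step: read state, call the model, return only what changed."""
    response = litellm.completion(
        model="gemini/gemini-2.5-pro",
        messages=[{
            "role": "user",
            "content": f"Classify this task as GENERAL or TECHNICAL: {state['task']}",
        }],
    )
    # The step's entire effect is an explicit, auditable state update.
    return {"category": response.choices[0].message.content.strip()}

state = {"task": "What is the Lár Framework used for?"}
state.update(classify_task(state))
print(state)  # {'task': ..., 'category': 'GENERAL'} -- model output may vary
```

The point of the sketch is the shape, not the code: a step reads state, does one thing, and hands back an explicit update that an engine can diff, log, and audit.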
Verify the Code on GitHub →
The Pledge
We believe auditing should be free. No developer should ever have to answer "I don't know" when asked, "Did the AI really do that?"
Lár is not just a framework; it's a commitment to radical transparency. Every run produces a complete, immutable record of every state change.
The 8 Lár Primitives
We rejected complexity. No magic. Just Python.
Lár is built on just 8 "Lego bricks" that you can combine to build any agent.
Time Travel Debugging
To scale, we don't log the entire state every time. We log State Diffs.
The GraphExecutor yields a lightweight step_log that records exactly what changed at each step. Log size stays proportional to the changes rather than to the full state, and any run can be replayed step by step from its diffs.
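The diff format shown in the Flight Log above (`added` / `removed` / `modified`) is enough to reconstruct the state at any point in a run. A minimal sketch of that idea, not Lár's actual implementation:

```python
def compute_diff(before: dict, after: dict) -> dict:
    """Describe a step as what it added, removed, or modified -- not the full state."""
    return {
        "added":    {k: after[k] for k in after if k not in before},
        "removed":  {k: before[k] for k in before if k not in after},
        "modified": {k: after[k] for k in after
                     if k in before and before[k] != after[k]},
    }

def apply_diff(state: dict, diff: dict) -> dict:
    """Replay one step: the inverse of compute_diff."""
    state = dict(state)
    for k in diff["removed"]:
        state.pop(k, None)
    state.update(diff["added"])
    state.update(diff["modified"])
    return state

def state_at(initial: dict, steps: list, n: int) -> dict:
    """'Time travel': fold the logged diffs to recover the state after step n."""
    state = dict(initial)
    for step in steps[: n + 1]:
        state = apply_diff(state, step["state_diff"])
    return state
```

Because each diff is small relative to the full state (note how the large `retrieved_context` blob is logged once, at step 2, and never repeated), replaying or bisecting a long run costs almost nothing.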
Show Your Agents are Auditable
If you build an agent using the Lár Engine, you are building a dependable, verifiable system.
Help us spread the philosophy of the "Glass Box" by displaying the badge below in your project's README.
By adopting this badge, you signal to users and collaborators that your agent is built for production reliability and auditability.
Contribute to the Core
Lár is open source. We believe in building in public and refining the "Glass Box" philosophy together. If you want to help shape the future of agentic auditing, join us.
Read Contributing Guidelines