msgFlux
Dynamic AI Systems
Install with `uv add msgflux` or `pip install msgflux`.

msgFlux is an open-source framework for building dynamic AI systems with composable modules. It treats prompts, signatures, tools, and message flow as explicit program structure instead of ad-hoc glue. Architecture, data flow, and prompting remain separate layers, so systems can evolve by changing contracts, modules, or routes without forcing everything to change together.
AI Systems not ML Systems
ML systems are systems for AI - they train, evaluate, and deploy models. AI systems are systems with AI - software where pretrained models operate as components inside a broader application. In that setting, the model is not the whole product; it is one building block among many. You are not optimizing weights - you are designing behavior, interfaces, and flow around a model. This is the space msgFlux occupies.
Declarative and Imperative
One of the core ideas in msgFlux is that interaction style is a module-level decision. A module should be able to behave like a regular callable or like a message-bound operator, depending on the role it plays in the system. msgFlux therefore supports two complementary modes, and both have native access to vars: runtime variables rendered into Jinja2 templates and optionally injected into tools.
- Imperative: the module receives inputs and vars explicitly and returns outputs directly.
- Declarative: the module declares which fields it reads from, and writes to, on a shared message object.
The agent receives input and vars directly — like calling any Python function:
- `vars` flow into Jinja2 templates at runtime — `{{ user_name }}` renders into the instructions and `{% if is_vip %}` conditionally adds a priority note.
- Output is returned explicitly — the caller receives the result immediately.
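The imperative pattern can be sketched in plain Python. This is not msgFlux's actual API — the class, template, and return value are illustrative stand-ins (using `str.format` in place of Jinja2 to stay dependency-free), showing only the shape of the interaction: inputs and vars in, output out.

```python
# Plain-Python sketch of the imperative pattern (not msgFlux's real API):
# the module receives input and vars explicitly and returns output directly.

class SupportAgent:
    """Renders runtime vars into its instructions, then produces a reply."""

    TEMPLATE = "You are helping {user_name}.{vip_note}"

    def __call__(self, issue: str, vars: dict) -> str:
        # vars flow into the instructions template at call time
        vip_note = " Treat this as a priority request." if vars.get("is_vip") else ""
        instructions = self.TEMPLATE.format(user_name=vars["user_name"], vip_note=vip_note)
        # A real module would send instructions + issue to a model;
        # here we return a deterministic string for illustration.
        return f"[{instructions}] Resolving: {issue}"

agent = SupportAgent()
result = agent("login fails", vars={"user_name": "Ada", "is_vip": True})
print(result)
```

The caller owns the control flow: it supplies everything the module needs and receives the result immediately.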
The agent reads input from `msg.issue`, pulls vars from `msg.variables`, and writes to `msg.solution`:

- Reads input from `msg.issue` and reads vars from `msg.variables` — the agent knows where to find its data.
- Writes to `msg.solution` — the result is placed back on the shared message.
- Vars are extracted from the message and rendered into Jinja2 templates — `{{ user_name }}` and `{% if is_vip %}` resolve automatically.
- After execution, the result is available on the message — no return value needed.
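The declarative pattern can be sketched the same way. Again, this is a toy illustration, not msgFlux code: the module is configured with the names of the fields it reads and writes, and the caller only passes the shared message.

```python
from types import SimpleNamespace

# Plain-Python sketch of the declarative pattern (not msgFlux's real API):
# the module is configured with the message fields it reads and writes.

class DeclarativeAgent:
    def __init__(self, reads: str, writes: str):
        self.reads = reads    # field to pull input from
        self.writes = writes  # field to place the result on

    def __call__(self, msg) -> None:
        issue = getattr(msg, self.reads)           # read input from the message
        user = msg.variables.get("user_name", "")  # pull vars from the message
        # A real module would call a model here; we fake the result.
        setattr(msg, self.writes, f"Solution for {user}: restart after '{issue}'")

msg = SimpleNamespace(issue="login fails", variables={"user_name": "Ada"}, solution=None)
DeclarativeAgent(reads="issue", writes="solution")(msg)  # no return value needed
print(msg.solution)
```

Composition now means declaring field contracts: any module that writes `solution` can be swapped in, and the caller never hand-wires arguments between modules.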
In the imperative model, a module behaves like a regular Python callable. Inputs and vars are passed directly, execution is explicit, and outputs are immediately returned. This is ideal when the caller owns control flow and the composition should stay obvious at the call site - for example in scripts, local pipelines, or tightly scoped orchestration code.
In the declarative model, a module is configured with knowledge about the structure of the message it operates on. Instead of receiving arguments, it knows which fields to read and which fields to populate. This is especially useful once multiple agents, tools, or modalities are operating over shared state, because composition becomes a matter of declaring contracts instead of hand-wiring every edge in the flow.
Prompting and Programming
On top of this interaction model, msgFlux deliberately distinguishes between programming and prompting, treating them as complementary but separate responsibilities. This view comes primarily from a PyTorch-style way of building systems: modules, composition, explicit interfaces, and controlled data flow. msgFlux also adopts signatures as a useful abstraction for LM programming, because typed contracts are a strong way to describe what a component should consume and produce.
- Prompting is where you define behavior expressively. Instead of embedding all behavior into code, you describe intent, instructions, roles, and constraints directly in natural language. These prompts are written explicitly and intentionally, but remain scoped by the signatures and modules that contain them.
- Programming is where you define the system structurally. This includes defining modules, agents, routing, and especially signatures: typed, explicit contracts that describe inputs and outputs. In msgFlux, signatures are one tool within a larger module system. They formalize the behavior of a component and make it possible to reason about, validate, and optimize it at the code level.
Define behavior through a signature — a typed contract that specifies inputs and outputs. msgFlux generates the prompt and parses the structured result:
- The docstring of a `Signature` becomes the agent's instructions — it tells the agent what to do.
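The signature idea can be illustrated with a plain dataclass. The class name, fields, and prompt layout below are assumptions for illustration — msgFlux's real `Signature` machinery is not shown here — but the core mechanic matches the description above: the docstring carries the instructions, and the annotated fields form the typed I/O contract.

```python
from dataclasses import dataclass

# Sketch of the signature idea (field names and prompt format are illustrative,
# not msgFlux internals): a typed contract whose docstring carries the
# instructions and whose annotations describe inputs and outputs.

@dataclass
class TriageTicket:
    """Classify a support ticket and suggest a next action."""
    ticket: str            # input field
    category: str = ""     # output field, filled by the model
    next_action: str = ""  # output field, filled by the model

def build_prompt(sig_cls, **inputs) -> str:
    # The docstring becomes the instructions; the fields become the I/O spec.
    fields = ", ".join(sig_cls.__dataclass_fields__)
    lines = [sig_cls.__doc__, f"Fields: {fields}"]
    lines += [f"{k}: {v}" for k, v in inputs.items()]
    return "\n".join(lines)

prompt = build_prompt(TriageTicket, ticket="App crashes on startup")
print(prompt)
```

The contract, not the phrasing, is the stable interface: the prompt can be regenerated or rephrased as long as the fields stay the same.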
Define behavior through natural language — system message, instructions, and expected output. You control exactly what the model sees:
In this model, prompts are not loose strings passed around arbitrarily. They are written artifacts that live inside well-defined modules, constrained by signatures, and executed within a programmed architecture.
Declarative signatures can also make systems more resilient to model updates, because the contract stays stable even when the underlying model changes. The center of gravity shifts from micromanaging how a prompt is phrased to specifying what the component must produce.
By combining imperative and declarative modules with a clear separation between programming (signatures, structure, and flow) and prompting (written intent), msgFlux brings software architecture discipline to LM-based development. The result is a system that scales from simple experiments to complex AI applications while remaining explicit, composable, and maintainable.
Modules
Set up your chat completion model (check the required dependencies):

- OpenAI: authenticate by setting the `OPENAI_API_KEY` env variable.
- Groq: authenticate by setting the `GROQ_API_KEY` env variable.
- Ollama: install Ollama and pull your model first.
- OpenRouter: authenticate by setting the `OPENROUTER_API_KEY` env variable.
- SambaNova: authenticate by setting the `SAMBANOVA_API_KEY` env variable.
- Self-hosted: any server exposing an OpenAI-compatible API.
Agent
Agents in msgFlux are flexible — prompt them directly, use signatures for typed I/O, bind to a shared message, inject tools and vars, or nest one agent inside another as a tool. Mix and match as needed.
Build Agents
Pass additional task context alongside the task — the agent grounds its answer on the provided information:
Pass PDFs directly to the agent — from a URL or a local file. The agent reads and reasons over the document content:
Use a signature to define inputs and outputs — msgFlux generates the prompt and parses the structured output:
Agents that reason step-by-step and use tools to find answers. WebFetch is a built-in tool that fetches web pages as Markdown:
The agent iterates: think → act (call tools) → observe → repeat until final_answer.
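The loop above can be mimicked with a scripted stand-in. Everything here is a toy — the `web_fetch` helper is a fake substitute for the built-in WebFetch tool, and the "thinking" is hard-coded — but the control flow matches the description: act, observe, and repeat until a final answer is produced.

```python
# Toy version of the reasoning loop described above (think -> act -> observe),
# with a scripted "model" and a fake fetch tool instead of real calls.

def web_fetch(url: str) -> str:
    return f"markdown-of({url})"   # stand-in for the real WebFetch tool

def react_loop(question: str, max_steps: int = 3) -> str:
    observation = None
    for _ in range(max_steps):
        # think: a real agent would ask the model what to do next
        if observation is None:
            observation = web_fetch("https://example.com")  # act + observe
        else:
            return f"final_answer: {question} -> {observation}"
    return "final_answer: gave up"

print(react_loop("What is on example.com?"))
```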
`vars` inject runtime context into the agent's Jinja2 templates and into tools via `inject_vars`. The model never sees injected vars directly; they flow through the system behind the scenes.
`customer_name` renders into the instructions template. `customer_id` is injected into `get_balance` via `kwargs["vars"]` — invisible to the model, but available to the tool.
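The injection mechanics can be sketched in isolation. The plumbing below is illustrative, not msgFlux internals; it only shows the contract the text describes: the framework merges runtime vars into the tool call, so the tool sees them via `kwargs["vars"]` while the model never does.

```python
# Sketch of vars injection into a tool (illustrative plumbing, not msgFlux
# internals): the framework passes vars alongside the model-provided args.

def get_balance(**kwargs) -> str:
    vars = kwargs["vars"]                     # injected by the framework,
    return f"balance({vars['customer_id']})"  # never shown to the model

def call_tool_with_vars(tool, vars, **model_args):
    # model_args come from the model's tool call; vars come from the runtime.
    return tool(**model_args, vars=vars)

out = call_tool_with_vars(get_balance, vars={"customer_id": "c-42"})
print(out)
```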
An agent can serve as a tool for another agent. Pass the class to tools. Use @tool_config when you need extra behavior, like routing the tool result directly to the caller:
- When an agent is used as a tool, the docstring becomes its description — this is what the parent agent sees when deciding which tool to call.
- `return_direct=True` means the Orchestrator returns the list of tool calls and their results directly, instead of passing them back to the model for a final response.
Bind inputs and outputs to fields on a shared Message — the preferred approach inside pipelines:
The agent reads from `msg.review`, extracts structured data into a `Sentiment` schema, and writes to `msg.sentiment`. This makes modules easy to compose and reorder.
Pass an image and let the agent reason step-by-step about what it sees:
Other Modules
Beyond `nn.Agent`, msgFlux provides specialized modules for different modalities:
Built-in modules
All modules support `message_fields` and `response_mode` — configure once, then just pass the message through:
Speech-to-text transcription:
Text-to-speech synthesis:
Text embeddings for semantic search and similarity:
Image and video generation:
Compose Modules into Programs
A composition of modules is a program — each module handles one responsibility, and they work together naturally.
Combine Transcriber, Agent, and Speaker in a single pipeline — audio in, audio out:
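The data flow of such a pipeline can be sketched with stand-ins. The real Transcriber, Agent, and Speaker call models; these fakes only show how each module reads from and writes to the shared message as it passes through.

```python
from types import SimpleNamespace

# Sketch of module composition over a shared message (the real Transcriber,
# Agent, and Speaker call models; these stand-ins just show the data flow).

def transcriber(msg):  msg.text = f"transcript-of({msg.audio_in})"
def agent(msg):        msg.reply = f"answer-to({msg.text})"
def speaker(msg):      msg.audio_out = f"speech-of({msg.reply})"

def pipeline(msg, modules):
    for module in modules:   # each module reads from and writes to msg
        module(msg)
    return msg

msg = SimpleNamespace(audio_in="hello.wav")
pipeline(msg, [transcriber, agent, speaker])
print(msg.audio_out)
```

Because each module only touches its own fields, reordering or swapping a stage is a one-line change to the module list.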
Why a PyTorch-like API?
Millions of developers already know PyTorch's patterns: `nn.Module`, `forward()`, submodule registration, `state_dict()`. By adopting the same conventions, msgFlux lets you transfer your existing mental model to AI system design.
If you've built neural networks with PyTorch, you already know how to build AI programs with msgFlux.
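To make the analogy concrete, here is a toy re-implementation of the PyTorch-style conventions the text names — a `Module` base with `forward()`, automatic submodule registration on attribute assignment, and a `state_dict()`-like view. This is not msgFlux code, just the pattern it borrows.

```python
# Toy re-implementation of the PyTorch-style module pattern (not msgFlux code).

class Module:
    def __init__(self):
        self._submodules = {}

    def __setattr__(self, name, value):
        if isinstance(value, Module):
            self._submodules[name] = value   # register submodules on assignment
        super().__setattr__(name, value)

    def __call__(self, x):
        return self.forward(x)               # calling the module runs forward()

    def state_dict(self):
        # A simplified stand-in: map submodule names to their types.
        return {name: type(m).__name__ for name, m in self._submodules.items()}

class Upper(Module):
    def forward(self, x):
        return x.upper()

class Program(Module):
    def __init__(self):
        super().__init__()
        self.step = Upper()                  # registered like an nn.Module child

    def forward(self, x):
        return self.step(x)

prog = Program()
print(prog("hi"), prog.state_dict())
```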
Inline
Inline is a lightweight DSL for declaring entire pipelines as a single expression. Sequential steps (`->`), parallel branches (`[a, b]`), conditionals (`{cond ? a, b}`), and loops (`@{cond}: a;`) — all in one readable string. Every module reads from and writes to a shared `dotdict` message. This is the flux — the dynamic flow that gives the library its name.
Orchestrate agents with a single expression
The Router agent classifies the intent at runtime, and Inline conditionally routes to the right expert — the pipeline adapts to the input. No if/else in Python, just a declarative expression.
Acknowledgements
msgFlux is built around a select set of exceptional libraries that make the whole thing possible:
- msgspec — ultra-fast serialization and validation that underpins all data contracts in msgFlux
- Jinja2 — the templating engine powering prompt composition, vars injection, and pipeline expressions
- Tenacity — reliable retry logic with exponential backoff for resilient model calls
- OpenTelemetry — the observability standard behind msgFlux's built-in tracing and telemetry
We are grateful to the authors and maintainers of these projects.