Thirteen crates. One workspace. Everything you need to go from "I should build an agent" to actually having one.
Most agent frameworks give you the loop and wish you luck. We shipped the loop, the adapters, the memory, the eval harness, the policy engine, MCP integration, artifact storage, OAuth2 auth, multi-agent orchestration patterns, a web plugin, proc macros, on-device inference, and a terminal UI. All in one Cargo workspace. All #![forbid(unsafe_code)].
We may have gotten carried away. We're not sorry.
The core. Agent loop, tool dispatch, streaming interface, lifecycle events, retry strategies, model catalog, agent registry, inter-agent messaging, and the policy slot system. This is the engine room.
Nine ready-made StreamFn implementations. Anthropic, OpenAI, Gemini, Ollama, AWS Bedrock, Azure, Mistral, xAI, and a generic proxy adapter. Each one feature-gated — only compile what you use.
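To give a feel for what an adapter boils down to, here is a deliberately simplified, synchronous sketch. The names StreamFn, Chunk, and toy_adapter are illustrative stand-ins, not the crate's real API, and the real interface is presumably async and token-streaming:

```rust
// Hypothetical sketch: the shape of a provider adapter.
// Names (StreamFn, Chunk) are illustrative, not the real API.
type Chunk = String;
type StreamFn = Box<dyn Fn(&str) -> Box<dyn Iterator<Item = Chunk>>>;

/// A toy "adapter" that fakes a provider by echoing the prompt
/// back in fixed-size chunks, the way a streaming API yields tokens.
fn toy_adapter() -> StreamFn {
    Box::new(|prompt: &str| {
        let chunks: Vec<Chunk> = prompt
            .as_bytes()
            .chunks(4)
            .map(|c| String::from_utf8_lossy(c).into_owned())
            .collect();
        Box::new(chunks.into_iter())
    })
}

fn main() {
    let adapter = toy_adapter();
    let reply: String = adapter("hello, agent").collect();
    assert_eq!(reply, "hello, agent");
}
```

The point of the shape: because every provider is reduced to the same function type, the agent loop never knows which vendor it's talking to.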
Ten policy implementations built against the core trait API. Budget caps, sandboxing, deny lists, loop detection, prompt injection guards, PII redaction, content filtering, audit logging, and more. Each feature-gated independently.
Session persistence in JSONL format. Summarization-aware context compaction so your agent doesn't blow through context windows on long conversations. Pluggable storage via the SessionStore trait. Async and blocking APIs.
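A minimal sketch of the pluggable-storage idea, assuming a trait in the spirit of SessionStore. The method names and the in-memory backend below are hypothetical; the real trait is presumably async and carries richer event types than raw JSON strings:

```rust
use std::collections::HashMap;

/// Hypothetical sketch of a pluggable session store: one stream of
/// JSON lines (JSONL) per session id. Not the crate's actual trait.
trait SessionStore {
    fn append(&mut self, session_id: &str, event_json: &str);
    fn load(&self, session_id: &str) -> Vec<String>;
}

#[derive(Default)]
struct MemoryStore {
    // One Vec of JSON lines per session id — JSONL, held in memory.
    sessions: HashMap<String, Vec<String>>,
}

impl SessionStore for MemoryStore {
    fn append(&mut self, session_id: &str, event_json: &str) {
        self.sessions
            .entry(session_id.to_string())
            .or_default()
            .push(event_json.to_string());
    }

    fn load(&self, session_id: &str) -> Vec<String> {
        self.sessions.get(session_id).cloned().unwrap_or_default()
    }
}

fn main() {
    let mut store = MemoryStore::default();
    store.append("s1", r#"{"role":"user","content":"hi"}"#);
    store.append("s1", r#"{"role":"assistant","content":"hello"}"#);
    assert_eq!(store.load("s1").len(), 2);
}
```

Swapping the backend for a filesystem or database implementation means implementing the same two methods; the agent loop only ever sees the trait.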
On-device inference. SmolLM3-3B (GGUF Q4_K_M, ~1.92 GB) for text and tool generation — Gemma 4 available as an opt-in gemma4 feature. EmbeddingGemma-300M (~200 MB) for embeddings. Models download lazily from HuggingFace on first use. No API key. No internet after first run.
Trajectory tracing and golden path verification. Built-in evaluators for trajectory matching, budget compliance, response quality, and efficiency scoring. SHA-256 hash chain for audit trails. YAML eval specs. CI/CD gating ready.
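The hash-chain idea is simple enough to sketch in a few lines: each link digests the previous digest together with the next event, so editing any earlier event invalidates every later digest. This illustration uses std's DefaultHasher in place of SHA-256 purely to stay dependency-free; the function name is hypothetical:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative hash chain over trace events. Each link hashes the
/// previous digest plus the event, so tampering with any earlier
/// event changes every digest after it. (DefaultHasher stands in
/// for SHA-256 here only to keep the sketch self-contained.)
fn chain(events: &[&str]) -> u64 {
    events.iter().fold(0u64, |prev, event| {
        let mut h = DefaultHasher::new();
        prev.hash(&mut h);
        event.hash(&mut h);
        h.finish()
    })
}

fn main() {
    let original = chain(&["tool_call", "tool_result", "final_answer"]);
    let tampered = chain(&["tool_call", "edited_result", "final_answer"]);
    assert_ne!(original, tampered); // tampering breaks the chain
}
```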
A full interactive terminal interface built with ratatui and crossterm. Streaming conversation rendering with markdown and syntax highlighting, a live tool panel, session persistence, cross-platform keychain credential storage, and a first-run setup wizard. It's an agent you can talk to in your terminal. Right now.
Versioned artifact storage. Filesystem and in-memory backends ship out of the box — pluggable via trait for anything else. Store, retrieve, and version the outputs your agents produce.
OAuth2 credential management and token refresh. Pluggable provider support so your agents can authenticate against external services without rolling your own token lifecycle.
Model Context Protocol integration over stdio and SSE transports. Connect any MCP server to any agent in the harness — tools, resources, and prompts all flow through the same event system.
Pre-built multi-agent orchestration topologies. Pipeline (chain agents in sequence), parallel fan-out (run agents concurrently and merge results), and loop (iterate until a condition is met). Compose them freely.
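The three topologies above can be sketched with agents modeled as plain String-to-String functions. This is a hedged illustration of the composition pattern, not the crate's API: the real topologies presumably compose full async agents, and the fan-out here runs sequentially for simplicity:

```rust
/// Toy model: an "agent" is just a boxed String -> String function.
type Agent = Box<dyn Fn(String) -> String>;

/// Pipeline: each agent consumes the previous agent's output.
fn pipeline(agents: Vec<Agent>, input: String) -> String {
    agents.iter().fold(input, |acc, agent| agent(acc))
}

/// Fan-out: run every agent on the same input, collect for merging.
/// (Sequential here; the real thing would run concurrently.)
fn fan_out(agents: &[Agent], input: &str) -> Vec<String> {
    agents.iter().map(|a| a(input.to_string())).collect()
}

/// Loop: iterate one agent until the condition is met.
fn loop_until(agent: &Agent, mut state: String, done: impl Fn(&str) -> bool) -> String {
    while !done(&state) {
        state = agent(state);
    }
    state
}

fn main() {
    let upper: Agent = Box::new(|s| s.to_uppercase());
    let excite: Agent = Box::new(|s| format!("{s}!"));
    assert_eq!(pipeline(vec![upper, excite], "hi".into()), "HI!");

    let a: Agent = Box::new(|s| format!("A:{s}"));
    let b: Agent = Box::new(|s| format!("B:{s}"));
    assert_eq!(fan_out(&[a, b], "x"), vec!["A:x", "B:x"]);

    let grow: Agent = Box::new(|s| format!("{s}x"));
    assert_eq!(loop_until(&grow, "x".into(), |s| s.len() >= 4), "xxxx");
}
```

"Compose them freely" falls out of the types: a pipeline stage can itself be a fan-out, and a loop body can be a pipeline.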
Web browsing and search plugin. Give your agent eyes on the internet without writing a scraper yourself. HTTP fetching, search integration, and structured content extraction ready to wire up as tools.
Proc macros that eliminate tool boilerplate. #[derive(ToolSchema)] generates JSON Schema from your structs at compile time. #[tool] annotates handler functions. Zero runtime overhead. Zero handwritten schema.
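To make "zero handwritten schema" concrete, here is roughly what such a derive conceptually emits, written out by hand. The trait shape and schema layout below are illustrative guesses, not the crate's actual ToolSchema contract:

```rust
/// Illustrative guess at what #[derive(ToolSchema)] generates: a
/// compile-time impl describing the struct as JSON Schema. The trait
/// shape and output layout are assumptions, not the crate's API.
trait ToolSchema {
    fn json_schema() -> String;
}

struct SearchArgs {
    query: String,
    max_results: u32,
}

// Handwritten version of the impl the derive macro would emit —
// exactly the boilerplate the macro exists to eliminate.
impl ToolSchema for SearchArgs {
    fn json_schema() -> String {
        r#"{"type":"object","properties":{"query":{"type":"string"},"max_results":{"type":"integer"}},"required":["query","max_results"]}"#
            .to_string()
    }
}

fn main() {
    let _args = SearchArgs { query: "rust agents".into(), max_results: 5 };
    let schema = SearchArgs::json_schema();
    assert!(schema.contains(r#""query":{"type":"string"}"#));
}
```

Because the impl is generated at compile time from the struct definition, the schema can never drift out of sync with the fields — and nothing runs at runtime beyond returning the prebuilt schema.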
Feature gates everywhere. Each adapter, each policy, each optional subsystem is behind a Cargo feature flag. Your binary only includes what you actually use. Need just Anthropic and the budget policy? That's ~4 crates compiled. Need everything? Also fine. We don't judge your dependency graph.
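In Cargo terms, the minimal setup might look like the excerpt below. The crate and feature names are illustrative assumptions, not the workspace's actual flags:

```toml
# Hypothetical Cargo.toml excerpt — crate and feature names are
# guesses for illustration, not the workspace's real identifiers.
[dependencies]
swink-agent = { version = "0.1", default-features = false, features = [
    "adapter-anthropic",  # only the provider you actually call
    "policy-budget",      # only the policy you actually enforce
] }
```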
Okay, we judge a little. But lovingly.
One cargo add away from having opinions about agent architecture.
github.com/SuperSwinkAI/Swink-Agent