Your LLM asked for three things at once. Why would you do them one at a time?
When the model returns multiple tool calls in a single turn, Swink Agent spawns them all concurrently via tokio::spawn. Each tool gets its own cancellation token. Results come back as they finish.
No configuration needed. No "enable parallel mode" flag. Concurrency is the default because waiting is not a feature.
Sequential execution is available if you really, truly want it. We won't judge. Much.
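The fan-out described above can be sketched in a few lines of plain Rust. The real harness uses `tokio::spawn` tasks; to keep this sketch dependency-free, std threads stand in for tasks, and `ToolCall`/`run_tool` are illustrative names, not Swink Agent's API.

```rust
use std::thread;

// Illustrative stand-in for a model-issued tool call; not the library's type.
struct ToolCall {
    name: String,
    arg: i64,
}

fn run_tool(call: &ToolCall) -> String {
    // A real tool would do I/O here; this one just computes.
    format!("{} -> {}", call.name, call.arg * 2)
}

// Spawn every call at once, then collect results as each finishes.
fn run_concurrently(calls: Vec<ToolCall>) -> Vec<String> {
    let handles: Vec<_> = calls
        .into_iter()
        .map(|call| thread::spawn(move || run_tool(&call)))
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let calls = vec![
        ToolCall { name: "search".into(), arg: 1 },
        ToolCall { name: "fetch".into(), arg: 2 },
        ToolCall { name: "summarize".into(), arg: 3 },
    ];
    for line in run_concurrently(calls) {
        println!("{}", line);
    }
}
```

Note that nothing decides *whether* to parallelize; spawning everything is simply the only code path, which is the point of the default.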
**Steering.** After each tool completes, the harness polls for steering input. If a human (or another system) sends a redirect, every remaining in-flight tool gets cancelled via its CancellationToken. Cancelled tools get error results injected automatically so the context stays consistent.
Think of it as a pilot overriding autopilot. The plane doesn't crash — it just changes course.
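The cancel-on-steer behavior can be sketched with a hand-rolled token. In the real harness, `tokio_util`'s `CancellationToken` plays this role; the `CancelToken` type and the injected error text below are illustrative, not the library's.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// A minimal stand-in for tokio_util's CancellationToken.
#[derive(Clone)]
struct CancelToken(Arc<AtomicBool>);

impl CancelToken {
    fn new() -> Self {
        CancelToken(Arc::new(AtomicBool::new(false)))
    }
    fn cancel(&self) {
        self.0.store(true, Ordering::SeqCst);
    }
    fn is_cancelled(&self) -> bool {
        self.0.load(Ordering::SeqCst)
    }
}

// A cancelled tool reports an error result instead of silently vanishing,
// so the conversation context stays consistent.
fn finish_tool(name: &str, token: &CancelToken) -> String {
    if token.is_cancelled() {
        format!("{}: error: cancelled by steering input", name)
    } else {
        format!("{}: ok", name)
    }
}

fn main() {
    let tokens: Vec<CancelToken> = (0..3).map(|_| CancelToken::new()).collect();
    // Steering arrives: cancel everything still in flight
    // (here, everything after the first tool, which already completed).
    for t in &tokens[1..] {
        t.cancel();
    }
    for (i, t) in tokens.iter().enumerate() {
        println!("{}", finish_tool(&format!("tool{}", i), t));
    }
}
```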
Every tool call passes through four stages before it actually runs, and any stage can say no. Once every stage approves, tokio::spawn takes over: the tool runs, results flow back, the world keeps spinning.
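The gating shape described above can be sketched generically. The source doesn't enumerate the four stages, so the stages below are placeholders; only the any-stage-can-veto structure is the point.

```rust
// A stage inspects the call and either approves or vetoes it.
// Stage names below are placeholders, not Swink Agent's actual pipeline.
type Stage = fn(&str) -> Result<(), String>;

fn approve(stages: &[Stage], call: &str) -> Result<(), String> {
    for stage in stages {
        stage(call)?; // the first "no" stops the call from ever running
    }
    Ok(())
}

fn main() {
    let stages: [Stage; 4] = [
        |_| Ok(()), // placeholder stage 1
        |_| Ok(()), // placeholder stage 2
        |call| {
            // placeholder stage 3: a veto based on the call's content
            if call.contains("forbidden") {
                Err("policy veto".to_string())
            } else {
                Ok(())
            }
        },
        |_| Ok(()), // placeholder stage 4
    ];
    println!("{:?}", approve(&stages, "safe call"));
    println!("{:?}", approve(&stages, "forbidden call"));
}
```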
Concurrent is the default, but you have options.
| Policy | Behavior |
|---|---|
| Concurrent | All tools spawn at once. The fast path. The default. |
| Sequential | One at a time, in model order. For when order matters. |
| Priority | Sort by priority, then concurrent within each group. The fancy path. |
| Custom | Implement ToolExecutionStrategy. You partition, we execute. Full control. |
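One way to picture the table: each policy partitions the calls into waves, where a wave runs concurrently and waves run back to back. The enum and `plan` function below are a sketch of that idea under assumed types, not Swink Agent's actual `ToolExecutionStrategy` signatures.

```rust
// Illustrative policy enum mirroring the table above; not the library's types.
enum ExecutionPolicy {
    Concurrent,
    Sequential,
    Priority,
}

#[derive(Clone)]
struct Call {
    name: String,
    priority: u8, // lower number = earlier wave (assumed convention)
}

// Partition calls into waves: each inner Vec runs concurrently,
// and the waves execute one after another.
fn plan(policy: &ExecutionPolicy, mut calls: Vec<Call>) -> Vec<Vec<Call>> {
    match policy {
        // One wave containing everything: the fast path.
        ExecutionPolicy::Concurrent => vec![calls],
        // One call per wave, in model order.
        ExecutionPolicy::Sequential => calls.into_iter().map(|c| vec![c]).collect(),
        // Sort by priority, then concurrent within each group.
        ExecutionPolicy::Priority => {
            calls.sort_by_key(|c| c.priority);
            let mut waves: Vec<Vec<Call>> = Vec::new();
            for c in calls {
                match waves.last_mut() {
                    Some(w) if w[0].priority == c.priority => w.push(c),
                    _ => waves.push(vec![c]),
                }
            }
            waves
        }
    }
}

fn main() {
    let calls = vec![
        Call { name: "a".into(), priority: 1 },
        Call { name: "b".into(), priority: 0 },
        Call { name: "c".into(), priority: 0 },
    ];
    for (i, wave) in plan(&ExecutionPolicy::Priority, calls).iter().enumerate() {
        let names: Vec<&str> = wave.iter().map(|c| c.name.as_str()).collect();
        println!("wave {}: {:?}", i, names);
    }
}
```

A Custom strategy would be a fourth arm that hands partitioning to user code; the execute side stays the same.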
FnTool lets you build a tool from a closure in about five lines. Schema derived from a struct. No boilerplate. No ceremony.
Need more control? Implement AgentTool on a struct. Connection pools, caches, setup/teardown — all yours. And ToolMiddleware lets you wrap any tool with timeouts, logging, or whatever else you dream up — without modifying the original.
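The closure-tool and middleware pattern can be shown in miniature. `FnTool`, `AgentTool`, and `ToolMiddleware` are Swink Agent's names; the `Tool` trait, `ClosureTool`, and `Logged` wrapper below are a dependency-free sketch of the pattern, not the library's real signatures (and the schema-from-struct derivation is omitted here).

```rust
// Illustrative trait standing in for the library's tool interface.
trait Tool {
    fn call(&self, input: &str) -> Result<String, String>;
}

// Closure-backed tool: the FnTool idea in miniature.
struct ClosureTool<F: Fn(&str) -> Result<String, String>>(F);

impl<F: Fn(&str) -> Result<String, String>> Tool for ClosureTool<F> {
    fn call(&self, input: &str) -> Result<String, String> {
        (self.0)(input)
    }
}

// Middleware: wraps any Tool without modifying it. Here, logging;
// a timeout wrapper would have the same shape.
struct Logged<T: Tool>(T);

impl<T: Tool> Tool for Logged<T> {
    fn call(&self, input: &str) -> Result<String, String> {
        println!("calling tool with {:?}", input);
        let out = self.0.call(input);
        println!("tool returned {:?}", out);
        out
    }
}

fn main() {
    let echo = ClosureTool(|input: &str| -> Result<String, String> {
        Ok(format!("echo: {}", input))
    });
    let tool = Logged(echo); // the original tool is untouched
    println!("{}", tool.call("hi").unwrap());
}
```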
Fast by default. Interruptible when it matters.
github.com/SuperSwinkAI/Swink-Agent