You have opinions about which LLM to use. We have opinions about not caring which one you pick.
Everything flows through StreamFn. It takes a model name, a conversation context, and some options. It returns a stream of events. That's it. That's the whole interface between your agent and whatever LLM is on the other end.
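The page doesn't show StreamFn's actual signature, so here is a minimal sketch of the shape it describes: model name plus context plus options in, a stream of events out. Every type below (AssistantMessageEvent, Context, StreamOptions, EchoProvider) is a hypothetical stand-in, and the real interface is async and returns a stream; this sketch reduces it to a synchronous iterator so it stays self-contained.

```rust
// Hypothetical stand-ins for the harness types. The real StreamFn is an
// async method returning a Stream; this sketch uses a plain Iterator.
#[derive(Debug, PartialEq)]
enum AssistantMessageEvent {
    TextDelta(String),
    Done,
}

struct Context {
    // Simplified: a real conversation context carries roles, tools, etc.
    messages: Vec<String>,
}

#[derive(Default)]
struct StreamOptions {
    temperature: Option<f32>,
}

// The whole provider interface: one method in, a stream of events out.
trait StreamFn {
    fn stream(
        &self,
        model: &str,
        ctx: &Context,
        opts: &StreamOptions,
    ) -> Box<dyn Iterator<Item = AssistantMessageEvent>>;
}

// A toy provider that echoes the last message back as word-sized deltas.
struct EchoProvider;

impl StreamFn for EchoProvider {
    fn stream(
        &self,
        _model: &str,
        ctx: &Context,
        _opts: &StreamOptions,
    ) -> Box<dyn Iterator<Item = AssistantMessageEvent>> {
        let text = ctx.messages.last().cloned().unwrap_or_default();
        let deltas: Vec<_> = text
            .split_whitespace()
            .map(|w| AssistantMessageEvent::TextDelta(w.to_string()))
            .collect();
        Box::new(deltas.into_iter().chain(std::iter::once(AssistantMessageEvent::Done)))
    }
}

fn main() {
    let provider = EchoProvider;
    let ctx = Context { messages: vec!["hello world".to_string()] };
    let events: Vec<_> = provider
        .stream("any-model", &ctx, &StreamOptions::default())
        .collect();
    // Two text deltas plus the terminal Done event.
    assert_eq!(events.len(), 3);
    assert_eq!(events[2], AssistantMessageEvent::Done);
    println!("{:?}", events);
}
```

The point of the shape: the caller never learns which provider produced the events, which is exactly the ignorance the harness relies on.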
The harness never touches provider credentials. It never imports a provider SDK. It doesn't know if you're talking to Claude, GPT, Gemini, or a model running on your laptop. It just consumes the stream.
Blissful ignorance, but make it architecture.
Each built-in implementation normalizes a different provider's wire format into the same Stream<AssistantMessageEvent> protocol. SSE, NDJSON, AWS SigV4: we handled the paperwork so you don't have to.
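To make "normalizes the wire format" concrete, here is a sketch of the kind of work an SSE adapter does: turning raw `data:` lines into events. The parsing below is illustrative, not the crate's actual decoder; the event type is a stand-in, and real adapters would parse JSON payloads rather than pass text through.

```rust
// Stand-in event type; not the crate's actual definition.
#[derive(Debug, PartialEq)]
enum AssistantMessageEvent {
    TextDelta(String),
    Done,
}

// Illustrative SSE normalization. Each provider adapter does some version
// of this, so the harness only ever sees the shared event enum.
fn parse_sse_line(line: &str) -> Option<AssistantMessageEvent> {
    // SSE data lines start with "data: "; comments and blank lines are skipped.
    let payload = line.strip_prefix("data: ")?;
    if payload == "[DONE]" {
        // OpenAI-style streams end with a literal [DONE] sentinel.
        Some(AssistantMessageEvent::Done)
    } else {
        // A real adapter parses JSON here; we just forward the payload.
        Some(AssistantMessageEvent::TextDelta(payload.to_string()))
    }
}

fn main() {
    let raw = ["data: hello", ": keep-alive comment", "data: [DONE]"];
    let events: Vec<_> = raw.iter().filter_map(|l| parse_sse_line(l)).collect();
    assert_eq!(
        events,
        vec![
            AssistantMessageEvent::TextDelta("hello".to_string()),
            AssistantMessageEvent::Done,
        ]
    );
    println!("{:?}", events);
}
```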
Plus LocalStreamFn in the local-llm crate for fully on-device inference. No API key required. No internet required. Just vibes and a GPU.
If your provider isn't on the list, implement StreamFn yourself. It's one async method. You get full control over auth, transport, and error handling.
The MessageConverter trait handles serializing harness messages into your provider's expected format, so adding a new provider is just two traits and zero forking.
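The MessageConverter trait's actual signature isn't shown above, so this sketch invents a minimal version of the idea: harness messages in, a provider-specific wire representation out. Both the Message struct and the trait shape here are assumptions for illustration, and the "pipe-delimited" provider is imaginary.

```rust
// Simplified harness message; the real type is richer (tools, images, ...).
struct Message {
    role: String,
    content: String,
}

// Hypothetical minimal shape of MessageConverter; the real trait's
// signature may differ.
trait MessageConverter {
    fn convert(&self, messages: &[Message]) -> String;
}

// A toy converter for an imaginary provider that wants "role|content" lines.
struct PipeDelimitedConverter;

impl MessageConverter for PipeDelimitedConverter {
    fn convert(&self, messages: &[Message]) -> String {
        messages
            .iter()
            .map(|m| format!("{}|{}", m.role, m.content))
            .collect::<Vec<_>>()
            .join("\n")
    }
}

fn main() {
    let msgs = vec![
        Message { role: "user".into(), content: "hi".into() },
        Message { role: "assistant".into(), content: "hello".into() },
    ];
    let wire = PipeDelimitedConverter.convert(&msgs);
    assert_eq!(wire, "user|hi\nassistant|hello");
    println!("{}", wire);
}
```

Pairing a converter like this with a StreamFn implementation is the "two traits" the page refers to: one shapes the request, the other normalizes the response.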
Logging runs through the tracing crate. Debug, warn, error: consistent observability across providers.
Pick your model. We'll handle the plumbing.
github.com/SuperSwinkAI/Swink-Agent
← Back to Swink Agent