Ollama
Run large language models locally.
Open Source OpenAI alternative, self-host AI models.
LangChainGo is a framework for developing applications powered by language models.
Embeddable vector database for Go with Chroma-like interface and zero third-party dependencies. In-memory with optional persistence.
A Go library for building stateful, multi-actor applications with LLMs, built on the concepts of LangGraph, with many built-in agent architectures.
Manage, load-balance, and fail over packs of Ollama instances.
Go SDK for building AI applications. One SDK, 20+ providers. Inspired by Vercel AI SDK.
A Go toolkit for building AI agents and applications across multiple providers with unified LLM, embeddings, tool calling, and MCP integration.
YAML-driven multi-agent AI runtime for Go with Erlang-style supervision, MCP tool server support, and a CLI.
AI gateway for routing, securing, and monitoring LLM traffic across 10+ providers. OpenAI-compatible API, WASM policy plugins, canary rollouts, real-time dashboard.
OpenTelemetry-native LLM observability and budget guardrails for cost-constrained production environments.
AI agent execution runtime with event sourcing, checkpoint recovery, and an at-most-once execution guarantee. Written in Go.
Go SDK for building durable AI agents on Temporal with support for tools, MCP, human approvals, and sub-agent delegation.
Local compatibility proxy for the Gemini and OpenAI APIs. Run one container locally and test both SDK protocol shapes on the same port without API keys or network access.
AI agent runtime engine with long-lived sessions for Claude Code, OpenCode, pi-mono, and other CLI AI tools. Provides full-duplex streaming, multi-platform integrations, and a secure sandbox.
A simple yet powerful way to use large language models (LLMs) in Go.