
Langfuse


Traces, evals, prompt management and metrics to debug and improve your LLM application.

Langfuse is an open-source platform offering observability and analytics for large language model (LLM) applications. It provides detailed production traces along with quality, cost, and latency metrics. Features include debugging for complex LLM apps, automatic cost calculation, tracking of non-LLM actions, and native integrations with popular models and frameworks. Langfuse offers prebuilt dashboards for cost, quality, and latency analytics, plus a public API for building custom features. It ships Python and JS/TS SDKs, integrates with LangChain, and works with any LLM app.


Hosting: Cloud + Self-hosted
Pricing: Freemium, from $29/mo
HQ: 🇩🇪 Germany
Founded: 2023
License: MIT
GitHub: 23,000 stars
Compliance: SOC 2 · HIPAA · GDPR · SSO

What is Langfuse?

Langfuse is an open-source LLM engineering platform for tracing, evaluation, and prompt management. It gives teams structured observability into their LLM applications, capturing every call, retrieval step, and tool invocation as nested traces. The core platform is MIT-licensed and can be self-hosted with no usage limits.

How It Works

Langfuse captures application-level traces asynchronously in the background, so there is no latency impact on your application. Each trace contains observations (individual LLM calls, retrieval steps, tool executions) nested to show causal relationships. Traces can be grouped into sessions for multi-turn conversations. The platform is built on OpenTelemetry for compatibility across stacks.
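The trace model described above can be sketched as a small data structure. This is an illustrative sketch only: the class and field names are assumptions chosen for clarity, not the actual Langfuse SDK API.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative model of Langfuse's trace hierarchy; names are
# assumptions for this sketch, not real SDK identifiers.

@dataclass
class Observation:
    name: str   # e.g. an LLM call, retrieval step, or tool execution
    kind: str   # the observation type, e.g. "generation" or "span"
    children: list["Observation"] = field(default_factory=list)

@dataclass
class Trace:
    name: str
    session_id: Optional[str] = None  # groups traces into a multi-turn session
    observations: list[Observation] = field(default_factory=list)

    def count_observations(self) -> int:
        """Count all observations, including nested children."""
        def walk(obs: Observation) -> int:
            return 1 + sum(walk(c) for c in obs.children)
        return sum(walk(o) for o in self.observations)

# One user turn in a RAG app: a retrieval span wrapping a generation,
# showing the causal nesting the text describes.
trace = Trace(name="chat-turn", session_id="session-123")
retrieval = Observation(name="vector-search", kind="span")
retrieval.children.append(Observation(name="llm-answer", kind="generation"))
trace.observations.append(retrieval)

print(trace.count_observations())  # -> 2
```

Grouping several such traces under one `session_id` is what lets the UI replay a full multi-turn conversation.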

Key Features

Tracing captures all LLM and non-LLM operations with full input/output visibility, token counts, latency, and cost tracking. The evaluation system supports LLM-as-a-judge, human annotation queues, and custom evaluators that run against both test datasets and production traces. Prompt management lets teams version and deploy prompts across environments with an interactive playground for testing. Langfuse integrates with 50+ frameworks including LangChain, LlamaIndex, OpenAI SDK, Vercel AI SDK, Pydantic AI, and CrewAI.
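The cost-tracking feature boils down to multiplying token counts by per-model prices. A minimal sketch of that arithmetic follows; the price table and function name here are hypothetical placeholders, since real prices come from Langfuse's built-in model definitions.

```python
# Hypothetical per-1M-token prices for illustration only; Langfuse
# resolves actual prices from its model registry.
PRICES = {"example-model": {"input": 2.50, "output": 10.00}}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost = (tokens / 1M) * price-per-million, summed per direction."""
    p = PRICES[model]
    return (input_tokens / 1_000_000) * p["input"] \
         + (output_tokens / 1_000_000) * p["output"]

# A call with 1,200 prompt tokens and 300 completion tokens:
cost = call_cost("example-model", 1200, 300)
print(f"${cost:.6f}")  # -> $0.006000
```

Because every generation observation already carries its token counts, summing this per-call figure across a trace, session, or user yields the cost dashboards described above.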

Pricing

The free Hobby plan includes 50,000 units per month with 30-day data retention and 2 seats. The Core plan starts at $29/month with 100,000 units and 90-day retention. Pro ($199/month) adds SOC 2 Type II, ISO 27001, and HIPAA compliance along with 3-year data retention. Self-hosted deployments of the MIT-licensed core are free with no usage limits. Enterprise self-hosted features (RBAC, SCIM, protected prompts) require a $500/month license.

Who Should Use It

Langfuse is a good fit for teams that want open-source observability for their LLM applications, especially those who prefer self-hosting for data sovereignty or cost control. It works with any LLM framework and any model provider. Teams already using LangChain who want a non-proprietary alternative to LangSmith often end up here.
