Langfuse (YC W23)
Location
San Francisco, CA 94103, USA
San Francisco
United States of America
Employees
Scale: 2-10
Estimated: 8
Engaged corporates: 1
Added in Motherbase: 1 year, 9 months ago

Open Source LLM Engineering Platform
Langfuse is the most popular open source LLMOps platform. It helps teams collaboratively develop, monitor, evaluate, and debug AI applications.
Langfuse can be self-hosted in minutes and is battle-tested: it runs in production for thousands of users, from YC startups to large companies such as Khan Academy and Twilio.
Developers can trace any large language model or framework using our SDKs for Python and JS/TS, our open API, or our native integrations (OpenAI, LangChain, LlamaIndex, Vercel AI SDK). Beyond tracing, developers use Langfuse Prompt Management, the open APIs, and testing and evaluation pipelines to improve the quality of their applications.
Product managers can analyze, evaluate, and debug AI products using detailed metrics on costs, latencies, and user feedback in the Langfuse Dashboard. They can bring humans into the loop by setting up annotation workflows in which human labelers score their application. Langfuse can also be used to monitor security risks through security frameworks and evaluation pipelines.
Langfuse enables non-technical team members to iterate on prompts and model configurations directly within the Langfuse UI or use the Langfuse Playground for fast prompt testing.
Langfuse is open source, and we are proud to have a fantastic community on GitHub and Discord that provides help and feedback. Do get in touch with us!
Langfuse, Large Language Models, Observability, Prompt Management, Evaluations, Testing, Open Source, LLM, AI, Analytics, and Artificial Intelligence
Traces, evals, prompt management and metrics to debug and improve your LLM application. Integrates with Langchain, OpenAI, LlamaIndex, LiteLLM, and more.
| Corporate | Type | Tweets | Articles |
|---|---|---|---|
| Amazon Web Services (AWS) | IT services, IT Services and IT Consulting | Other, 2 Jul 2024 | |