
User Feedback in LLM apps

User feedback is a valuable source for evaluating the quality of an LLM app's output. In Langfuse, feedback is collected as a score and attached to an execution trace or an individual LLM generation.

Types of Feedback

Depending on the type of application, different kinds of feedback can be collected; they vary in quality, detail, and quantity.

  • Explicit Feedback: Directly prompt the user to give feedback. This can be a rating, a like/dislike, a scale, or a comment. While it is simple to implement, the quality and quantity of the feedback are often low.
  • Implicit Feedback: Measure the user's behavior, e.g., time spent on a page, click-through rate, or accepting/rejecting a model-generated output. This type of feedback is more difficult to implement but is often more frequent and reliable; see the sketch after this list for how such a signal could be ingested.
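
For illustration, an implicit signal such as the user copying a model response can be ingested as a score via the Langfuse Web SDK. This is a minimal sketch; the onResponseCopied handler and the copied_response score name are assumptions for this example, not part of the Langfuse API:

import { LangfuseWeb } from "langfuse";

const langfuseWeb = new LangfuseWeb({
  publicKey: process.env.NEXT_PUBLIC_LANGFUSE_PUBLIC_KEY,
});

// Hypothetical handler: call this when the user copies a model response.
// The copy event is treated as a positive implicit signal and stored as a score.
export async function onResponseCopied(traceId: string) {
  await langfuseWeb.score({
    traceId,
    name: "copied_response", // assumed score name; any string can be used
    value: 1,
  });
}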

Demo

We implemented user feedback collection in the Q&A chatbot for the Langfuse docs.

User feedback collection in Langfuse

In this example you can see the following steps:

  1. Collection of feedback using the Langfuse Web SDK (in the demo: negative feedback because Langchain was not included in the response)
  2. Browsing of feedback
  3. Identification of the root cause of the low-quality response (docs on the Langchain integration are not included in the embedding similarity search)

Try the demo yourself and browse the collected feedback in Langfuse

Example using LangfuseWeb

The easiest way to collect user feedback is via the Langfuse Web SDK, which lets you ingest scores directly from the browser. See the Web SDK documentation for more details.

Screenshots: user feedback on individual responses in a chat application


UserFeedbackComponent.tsx
import { LangfuseWeb } from "langfuse";

export function UserFeedbackComponent(props: { traceId: string }) {
  // Initialize the Web SDK with the public key only; it is safe to expose in the browser
  const langfuseWeb = new LangfuseWeb({
    publicKey: process.env.NEXT_PUBLIC_LANGFUSE_PUBLIC_KEY,
  });

  // Attach the user's rating as a score to the trace that produced the response
  const handleUserFeedback = async (value: number) =>
    await langfuseWeb.score({
      traceId: props.traceId,
      name: "user_feedback",
      value,
    });

  return (
    <div>
      <button onClick={() => handleUserFeedback(1)}>👍</button>
      <button onClick={() => handleUserFeedback(0)}>👎</button>
    </div>
  );
}
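
To render the component, the client needs the id of the trace that produced the response. A common pattern is to return the trace id from your chat backend together with the model output; the following sketch assumes a hypothetical message object with a traceId field:

// Hypothetical usage: message.traceId is assumed to be returned by your chat backend
// together with the model response.
export function ChatMessage(props: {
  message: { content: string; traceId: string };
}) {
  return (
    <div>
      <p>{props.message.content}</p>
      <UserFeedbackComponent traceId={props.message.traceId} />
    </div>
  );
}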


Alternatively, you can ingest feedback as scores server-side via the SDKs for Python and JS/TS, or via the HTTP API.
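
As a minimal sketch of the server-side variant with the JS/TS SDK (the Python SDK and the HTTP API work analogously), assuming the trace id and the user's rating are sent to your backend, e.g. from a feedback endpoint:

import { Langfuse } from "langfuse";

// The server-side client uses the secret key; never expose it in the browser
const langfuse = new Langfuse({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  secretKey: process.env.LANGFUSE_SECRET_KEY,
});

// Hypothetical handler for a feedback endpoint; traceId, value, and comment come from the client
export async function ingestFeedback(traceId: string, value: number, comment?: string) {
  langfuse.score({
    traceId,
    name: "user_feedback",
    value,
    comment, // optional free-text comment from the user
  });
  await langfuse.flushAsync(); // make sure the score is sent before the request ends
}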
