Configure Evaluators
This guide shows you how to configure evaluators for your LLM application.
What are evaluators?
Evaluators are functions that assess the output of an LLM application.
Evaluators typically take as input:
The output of the LLM application
(Optional) The reference answer (i.e., expected output or ground truth)
(Optional) The inputs to the LLM application
Any other relevant data, such as context
Evaluators return either a float or a boolean value.
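To make that shape concrete, here is a minimal sketch of what an evaluator function might look like. The parameter names and the exact signature are illustrative assumptions, not necessarily what Lexica passes to custom evaluators.

```python
# Illustrative sketch only: parameter names and signature are assumptions,
# not Lexica's actual custom-evaluator interface.
def exact_match(output: str, reference: str = "", inputs: dict = None) -> float:
    """Return 1.0 if the output matches the reference answer exactly, else 0.0."""
    if not reference:
        return 0.0
    return 1.0 if output.strip() == reference.strip() else 0.0
```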

Configuring evaluators
To create a new evaluator, click on the Configure Evaluators button in the Evaluations view.

Selecting evaluators
Lexica offers a growing list of pre-built evaluators suitable for most use cases. We also provide options for creating custom evaluators (by writing your own Python function) or using webhooks for evaluation.
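For the webhook option, the general idea is that Lexica calls an HTTP endpoint you host and reads a score back from the response. The sketch below assumes a JSON payload with "output" and "correct_answer" fields and a "score" field in the response; these field names are placeholders, so check the webhook evaluator documentation for the actual payload format.

```python
# Hypothetical webhook evaluator endpoint. The payload and response field
# names ("output", "correct_answer", "score") are assumptions for illustration.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.post("/evaluate")
def evaluate():
    payload = request.get_json()
    output = payload.get("output", "")
    reference = payload.get("correct_answer", "")
    # Simple containment check as a stand-in for real scoring logic
    score = 1.0 if reference and reference in output else 0.0
    return jsonify({"score": score})

if __name__ == "__main__":
    app.run(port=8000)
```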

Evaluators' settings
Each evaluator comes with its own settings. For instance, in the screenshot below, the JSON field match evaluator requires you to specify which field in the output JSON to consider for evaluation. You'll find detailed information about these parameters on each evaluator's documentation page.
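Conceptually, such an evaluator extracts the configured field from the JSON output and compares it to the reference answer. The sketch below is a rough approximation of that behavior; the setting name ("json_field") is an assumption, not the evaluator's actual configuration key.

```python
import json

# Rough sketch of a JSON field match check. The "json_field" setting name
# is illustrative, not Lexica's actual configuration key.
def json_field_match(output: str, reference: str, json_field: str = "answer") -> bool:
    try:
        value = json.loads(output).get(json_field)
    except (json.JSONDecodeError, AttributeError):
        return False
    return str(value).strip() == reference.strip()
```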

Mapping the evaluator's inputs to the LLM data
Evaluators need to know which parts of the data contain the output and the reference answer. Most evaluators allow you to configure this mapping, typically by specifying the name of the column in the test set that contains the reference answer.
For more sophisticated evaluators, such as RAG evaluators (available only in cloud and enterprise versions), you need to define more complex mappings (see figure below).

Configuring the evaluator is done by mapping the evaluator inputs to the generation data:
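In spirit, such a mapping points each evaluator input at the part of the generation data that supplies it. The sketch below is purely illustrative: the keys and paths are placeholders, not Lexica's actual configuration schema.

```python
# Illustrative mapping for a RAG-style evaluator. Keys and data paths are
# placeholders, not Lexica's real configuration fields.
rag_evaluator_mapping = {
    "question": "inputs.question",             # the user query sent to the app
    "answer": "output",                        # the LLM application's response
    "contexts": "trace.retriever.contexts",    # retrieved chunks used as context
    "ground_truth": "testset.correct_answer",  # reference column in the test set
}
```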
