Integrating with Lexica

Lexica integrates easily with your workflow, letting you use the latest deployed version of your prompts in your application. With Lexica, you can update prompts directly from the web interface without modifying your code each time.

Here are the two ways you can use the prompts from Lexica in your code:

1. As a prompt management system:

In this approach, prompts are managed and stored in the Lexica backend. You use the Lexica SDK to fetch the latest deployed version of your prompt and use it in your application.

Advantages:

  • Lexica operates outside your application's critical path.

  • Allows you to fetch and cache the latest prompt version for zero-latency usage.

Considerations:

  • You need to integrate the Lexica SDK into your codebase and handle fetching and caching the prompt yourself (see the sketch below).

  • Observability isn't automatic; you need to integrate it manually (see Using observability below).

A sequence diagram showing how to integrate with Lexica as a prompt management system
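
To make this concrete, here is a minimal sketch of this approach. It assumes, purely for illustration, a REST endpoint for fetching deployed prompts; the URL, response shape, and the "support-reply" slug are placeholders, not the documented Lexica API. The same fetch-and-cache pattern applies if you use the Lexica SDK instead.

```python
import time

import requests

# Hypothetical Lexica endpoint and credentials; adjust to your deployment.
LEXICA_API_URL = "https://cloud.lexica.example/api/prompts"
LEXICA_API_KEY = "your-lexica-api-key"

_cache: dict = {}        # prompt slug -> (fetched_at, config)
CACHE_TTL_SECONDS = 300  # re-fetch the deployed prompt every 5 minutes

def get_prompt(slug: str, environment: str = "production") -> dict:
    """Fetch the latest deployed prompt config, with a simple TTL cache.

    Caching keeps Lexica out of the critical path: most requests are
    served from memory, so prompt lookup adds effectively zero latency.
    """
    now = time.time()
    cached = _cache.get(slug)
    if cached and now - cached[0] < CACHE_TTL_SECONDS:
        return cached[1]

    resp = requests.get(
        f"{LEXICA_API_URL}/{slug}",
        params={"environment": environment},
        headers={"Authorization": f"Bearer {LEXICA_API_KEY}"},
        timeout=5,
    )
    resp.raise_for_status()
    config = resp.json()
    _cache[slug] = (now, config)
    return config

# Assumed response shape: {"template": "...", ...}.
config = get_prompt("support-reply")
prompt = config["template"].format(question="How do I reset my password?")
# ... pass `prompt` to the LLM client of your choice.
```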

2. As a middleware / model proxy:

In this setup, Lexica provides you with an endpoint that forwards requests to the LLM on your behalf.

Advantages:

  • Simplified deployment.

  • Automatic tracing without any changes to your code.

Considerations:

  • Adds a slight latency to the response (approximately 0.3 seconds).

  • Currently, we don't support streaming for these endpoints.

Overall, this approach is best suited for applications where latency isn't critical.
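
Here is a minimal sketch of the proxy setup. It assumes, purely for illustration, that Lexica exposes an OpenAI-compatible endpoint; the base URL and the "support-reply" deployment name are placeholders, not documented values.

```python
from openai import OpenAI

# Point a standard OpenAI client at the (hypothetical) Lexica proxy URL.
client = OpenAI(
    base_url="https://cloud.lexica.example/api/proxy/v1",
    api_key="your-lexica-api-key",
)

# Lexica forwards the request to the LLM on your behalf and traces it
# automatically. Remember: streaming is not currently supported, so
# don't pass stream=True.
response = client.chat.completions.create(
    model="support-reply",  # the deployed prompt to route through
    messages=[{"role": "user", "content": "How do I reset my password?"}],
)
print(response.choices[0].message.content)
```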

A sequence diagram showing how to integrate with Lexica as a proxy

USING OBSERVABILITY

In addition to prompt management, Lexica provides observability features.

If you're using Lexica as a proxy, all your calls are traced automatically without any additional setup. However, if you're using Lexica as a prompt management system (i.e., only fetching the prompts), you need to integrate observability manually into your codebase. You can learn how to do this here.
