Dynamic Context
Enhance Your AI Agent or Project Folder with Live Data
When creating an AI agent or setting up a project, Dynamic Context lets you pull real-time information from an API and inject it into the agent's or project's system prompt. This is useful for adding live data or for Retrieval-Augmented Generation (RAG) against your own databases (e.g., a vector store).
Use Cases:
Integrate with a vector store: Pull the most relevant content from your database and inject it into the AI's context, improving response accuracy and relevance (see the sketch after this list).
Dynamic content injection: Add up-to-date information like the latest newsletter, blog posts, or even social media updates (e.g., pulling in the last 10 tweets from your account).
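For example, a minimal Dynamic Context endpoint for the vector-store use case might look like the TypeScript (Node) sketch below. The request body shape, the searchVectorStore helper, and the port are assumptions for illustration; your actual retrieval logic and payload are whatever you configure.

```ts
import { createServer } from "node:http";

// Hypothetical retrieval helper: embed the query and return the
// top-k matching passages from your vector store (Pinecone, pgvector, etc.).
async function searchVectorStore(query: string): Promise<string[]> {
  return [`Most relevant passage for: ${query}`]; // placeholder result
}

createServer(async (req, res) => {
  // Assumes you configured the request body to send {lastUserMessage}
  // under a "query" key; adjust to match your own configuration.
  let raw = "";
  for await (const chunk of req) raw += chunk;
  const { query = "" } = JSON.parse(raw || "{}");

  const passages = await searchVectorStore(query);

  // Keep this concise: the response is injected into the system prompt.
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ context: passages }, null, 2));
}).listen(3000);
```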
How It Works:
For AI Agents:
Go to the AI Agents section and create a new agent or edit an existing one.
Set up Dynamic Context by connecting to your API for live data retrieval.
For Project Folders:
Create a new project folder.
Go to Project Settings and set up Dynamic Context to link to your API.
Process:
Once configured, the API is called whenever a user interacts with the AI agent or starts a conversation in the project.
The API's response is injected into the agent's or project's context and instructions, giving the AI more relevant and timely information to work with (a sketch of this injection follows).
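Conceptually, the injection behaves like the sketch below. This is an illustration of the behavior described above, not the platform's actual implementation; how the response is merged into the instructions is an assumption.

```ts
// Illustration only: Trueseek performs this step internally.
async function resolveSystemPrompt(instructions: string, endpoint: string): Promise<string> {
  // The platform calls your configured endpoint...
  const dynamicContext = await fetch(endpoint, { method: "POST" }).then((r) => r.text());
  // ...and merges the response into the agent's or project's instructions.
  return `${instructions}\n\n${dynamicContext}`;
}
```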
Dynamic Context vs. Knowledge Base:
Unlike a knowledge base, which requires the AI to perform a lookup to retrieve relevant information, Dynamic Context places live data directly in the prompt, so the AI has it immediately, with no retrieval step.
Pros: Instant access to contextual information at all times.
Cons: The context length will increase with the additional data.
Flexibility and Customization:
Customize the request headers and body to include variables such as the chat ID, AI agent ID, and the last user message (see the configuration sketch after this list).
Set cache policies so the API is not called on every message.
You can configure the Dynamic Context endpoint to point to a private server or any API you have access to.
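The snippet below sketches the kind of configuration these options describe, written as a TypeScript object for readability. The field names, the Authorization header, and the 300-second cache are illustrative; the real values are entered in the Dynamic Context form, and the {...} placeholders are the variables listed under Available Variables below.

```ts
// Hypothetical shape, for illustration only; configure the real values in the UI.
const dynamicContextConfig = {
  endpoint: "https://api.example.com/context", // your private server or any API
  method: "POST",
  headers: {
    Authorization: "Bearer YOUR_API_TOKEN",    // custom header example
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    chat: "{chatID}",           // replaced with the current chat's ID
    agent: "{characterID}",     // replaced with the agent's ID
    query: "{lastUserMessage}", // replaced with the user's latest message
  }),
  cacheSeconds: 300,            // cache policy: reuse the response for 5 minutes
};
```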
Limitations:
The API response is added directly to the system prompt, which can increase the context length. Ensure your API responses are concise.
The maximum allowed API response length is 15% of the model's token context limit; responses exceeding this limit will be truncated (a budgeting sketch follows this list).
Recommended response formats: JSON (formatted, not minified) or Markdown.
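To stay under the truncation threshold, you can budget on the server side before responding. The sketch below assumes a 128k-token model and the common ~4-characters-per-token heuristic; both numbers are assumptions, not platform values.

```ts
const MODEL_CONTEXT_TOKENS = 128_000; // assumption: your model's context limit
const MAX_RESPONSE_TOKENS = Math.floor(MODEL_CONTEXT_TOKENS * 0.15); // 19,200

// Rough estimate only; a real tokenizer will differ.
function approxTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Trim the payload yourself rather than letting it be truncated mid-sentence.
function trimToBudget(text: string): string {
  const maxChars = MAX_RESPONSE_TOKENS * 4;
  return text.length <= maxChars ? text : text.slice(0, maxChars);
}
```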
Troubleshooting:
macOS users: For security reasons, the API endpoint must be served over HTTPS. If you're developing against a local setup, put an SSL proxy or a local HTTPS server in front of it.
CORS issues: Ensure your API endpoint allows requests from Trueseek's official web and app origins. The origins you need to allow depend on your platform (web app, macOS app, self-hosted version); the sketch after this list shows an HTTPS endpoint that also sets CORS headers.
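For local development, the sketch below serves the endpoint over HTTPS and answers CORS preflights in one place. The certificate filenames assume certs generated with a tool like mkcert, and the allowed origin is a placeholder: substitute the origin(s) your Trueseek deployment actually uses.

```ts
import { createServer } from "node:https";
import { readFileSync } from "node:fs";

// Assumes locally trusted certs, e.g. created with `mkcert localhost`.
const tls = {
  key: readFileSync("localhost-key.pem"),
  cert: readFileSync("localhost.pem"),
};

const ALLOWED_ORIGIN = "https://app.example.com"; // placeholder origin

createServer(tls, (req, res) => {
  res.setHeader("Access-Control-Allow-Origin", ALLOWED_ORIGIN);
  res.setHeader("Access-Control-Allow-Methods", "POST, OPTIONS");
  res.setHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");

  // Answer the CORS preflight with no body.
  if (req.method === "OPTIONS") {
    res.writeHead(204);
    res.end();
    return;
  }

  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ context: "Served over HTTPS with CORS headers." }));
}).listen(8443);
```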
Available Variables:
These variables can be used in the request header or body and are replaced dynamically when the API is called (a before/after example follows the list):
{chatID}: Unique ID for the current chat.
{characterID}: Unique ID for the AI agent the user is interacting with.
{lastUserMessage}: The most recent message from the user.
{userID}: (For Trueseek Custom) The ID of the currently logged-in user.
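For example, a configured body template and the request your API would actually receive might look like this (the IDs and message are made up):

```ts
// What you configure (before substitution):
const template = `{"chat":"{chatID}","agent":"{characterID}","message":"{lastUserMessage}"}`;

// What your API receives (after substitution; values are illustrative):
const sent = `{"chat":"c_8f21","agent":"a_42","message":"What changed in the latest release?"}`;
```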