
OpenAI
Your Tars AI agent can spin up specialized OpenAI assistants, generate text embeddings for semantic search, manage conversation threads, and orchestrate multi-step AI workflows, all during live customer interactions. Build compound AI systems where your front-line chatbot delegates complex reasoning to purpose-built OpenAI models.




Your agent becomes an AI conductor, creating assistants, running inference, generating embeddings, and managing intelligent workflows through the OpenAI API during every customer interaction.
See how businesses layer OpenAI's model capabilities into their Tars AI agents, creating multi-model workflows that handle everything from simple FAQs to complex analytical requests.
A customer asks a nuanced question about your product that does not match any FAQ keyword. Your AI Agent generates an embedding of their question using OpenAI's text-embedding-3-small model, searches your vector database for the closest matching documentation, and synthesizes a precise answer. The customer gets an accurate response even when they phrase things in unexpected ways.
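The lookup described above can be sketched with the official `openai` Python SDK (v1.x). The function names, the in-memory document store, and the similarity helper are illustrative assumptions, not Tars internals; a production setup would query a real vector database instead of a dict.

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def answer_from_docs(question: str, doc_embeddings: dict[str, list[float]]) -> str:
    """Embed the question and return the closest pre-embedded doc snippet.

    Assumes OPENAI_API_KEY is set in the environment.
    """
    from openai import OpenAI  # network call happens only when this runs

    client = OpenAI()
    resp = client.embeddings.create(
        model="text-embedding-3-small", input=question
    )
    q_vec = resp.data[0].embedding
    # Pick the documentation snippet whose embedding is nearest the question.
    return max(doc_embeddings, key=lambda d: cosine_similarity(q_vec, doc_embeddings[d]))
```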
A developer asks a complex API troubleshooting question. Your front-line Tars agent creates a thread with an OpenAI assistant specialized in your API documentation, runs the query with code interpreter enabled, and returns a detailed answer with code examples. The developer gets expert-level support without waiting for a human engineer.
Your AI Agent triages every incoming question. Simple greetings and FAQs route to a lightweight model. Complex technical questions, billing disputes, or ambiguous requests escalate to GPT-4o through an OpenAI assistant thread. Each conversation uses the right model for the job. Your AI costs drop significantly while quality stays high across every interaction type.
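A minimal sketch of that triage step, assuming keyword-based routing; the patterns and model names below are examples, not Tars configuration:

```python
# Illustrative triage: simple small talk goes to a lightweight model,
# everything else escalates to GPT-4o via an assistant thread.
SIMPLE_WORDS = {"hi", "hello", "thanks", "hours"}


def pick_model(message: str) -> str:
    """Route a customer message to the cheapest model that can handle it."""
    words = set(message.lower().split())
    if words & SIMPLE_WORDS:
        return "gpt-4o-mini"
    return "gpt-4o"
```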

OpenAI FAQs
How does the agent use an OpenAI assistant to answer a complex question?
The agent creates an OpenAI assistant with specialized instructions and tools, then creates a thread, adds the customer's message, and runs the assistant. Once the run completes, the agent retrieves the response from the thread and delivers it to the customer. This lets domain-specific AI models handle complex questions inside the conversation flow.
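The create-assistant, thread, run, retrieve sequence above maps onto the official `openai` Python SDK's Assistants (beta) endpoints roughly as follows. The instructions, model choice, and polling interval are illustrative assumptions; the polling helper takes a callable so the loop can be exercised without a network call.

```python
import time


def wait_for_run(fetch_status, poll=time.sleep, interval=1.0, max_polls=60) -> str:
    """Call fetch_status() until the run reaches a terminal state."""
    for _ in range(max_polls):
        status = fetch_status()
        if status not in ("queued", "in_progress"):
            return status
        poll(interval)
    return "expired"


def ask_specialist(question: str) -> str:
    """Create an assistant, run the question through a thread, return the reply."""
    from openai import OpenAI  # requires OPENAI_API_KEY in the environment

    client = OpenAI()
    assistant = client.beta.assistants.create(
        name="API support specialist",          # illustrative name
        instructions="Answer using our API documentation.",
        model="gpt-4o",
        tools=[{"type": "code_interpreter"}],
    )
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=question
    )
    run = client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=assistant.id
    )
    wait_for_run(
        lambda: client.beta.threads.runs.retrieve(
            thread_id=thread.id, run_id=run.id
        ).status
    )
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    return messages.data[0].content[0].text.value  # newest message first
```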
How is the Tars agent different from an OpenAI assistant?
The Tars agent is your front-line conversational interface that handles customer interactions across channels. It uses OpenAI assistants as specialized backend workers for tasks that need custom instructions, file search, or code execution. Think of it as a manager delegating specific analysis to expert workers.
What kind of OpenAI API key does Tars need?
Tars requires a standard OpenAI API key from your account at platform.openai.com. The key should have access to the models, assistants, embeddings, and files endpoints you plan to use. You control spending limits through your OpenAI account settings. API costs are billed directly by OpenAI to your account.
What happens to the customer data Tars sends to OpenAI?
Tars sends customer messages to OpenAI's API as needed during conversations. OpenAI's data retention policies apply to those API calls. Tars itself does not maintain a separate cache of the data sent to or received from OpenAI. Review OpenAI's API data usage policy for details on how they handle API inputs.
Can the agent use my fine-tuned models?
Yes. The agent's Create Assistant and Create Run endpoints accept any model ID available in your OpenAI account, including fine-tuned models. If you have trained a custom model on your domain data, the agent can use it for specialized conversations. The List Models endpoint lets the agent discover all available models.
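As a sketch of that discovery step: OpenAI fine-tuned model IDs carry an `ft:` prefix, so a simple filter over the List Models response surfaces them. The helper names are illustrative, and the filter is kept pure so it can run against sample data.

```python
def fine_tuned_ids(model_ids: list[str]) -> list[str]:
    """Return only fine-tuned model IDs (OpenAI prefixes them with 'ft:')."""
    return [m for m in model_ids if m.startswith("ft:")]


def discover_fine_tuned() -> list[str]:
    """List fine-tuned models available to this API key."""
    from openai import OpenAI  # requires OPENAI_API_KEY in the environment

    client = OpenAI()
    return fine_tuned_ids([m.id for m in client.models.list()])
```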
How is this different from just using ChatGPT?
ChatGPT is a standalone chat interface. Tars integrates OpenAI models into a business workflow that connects to your CRM, helpdesk, e-commerce platform, and other tools. The agent can look up a customer's order in Shopify, then use an OpenAI assistant to analyze their issue, then create a ticket in Zendesk, all in one conversation.
Can the agent generate embeddings for my knowledge base?
The agent can generate embeddings for text during conversations using OpenAI's embeddings endpoint. For bulk embedding of your entire knowledge base, you would typically run a batch process outside of conversations. The agent then uses these pre-computed embeddings for real-time semantic search when customers ask questions.
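Such a bulk pre-computation pass might look like the sketch below. The embeddings endpoint accepts a list of inputs, so documents can be sent in batches; the batch size is an arbitrary example and `embed_knowledge_base` is an assumed helper name, not a Tars feature.

```python
def chunked(items: list[str], size: int) -> list[list[str]]:
    """Split a list into consecutive batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]


def embed_knowledge_base(docs: list[str], batch_size: int = 100) -> list[list[float]]:
    """Pre-compute embeddings for every document, batch by batch."""
    from openai import OpenAI  # requires OPENAI_API_KEY in the environment

    client = OpenAI()
    vectors: list[list[float]] = []
    for batch in chunked(docs, batch_size):
        resp = client.embeddings.create(
            model="text-embedding-3-small", input=batch
        )
        vectors.extend(d.embedding for d in resp.data)
    return vectors
```

The resulting vectors would then be written to your vector database for the agent's real-time lookups.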
What happens if an OpenAI run fails?
The agent monitors run status through the Retrieve Run endpoint, checking for completed, failed, cancelled, or expired states. If a run fails, the agent can retry with adjusted parameters, fall back to a simpler model, or escalate to a human agent. The customer receives a clear response regardless of the outcome.
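The branching described above can be summarized as a small dispatch over the run's terminal status. The fallback actions returned here are examples of the options listed, not Tars's actual escalation logic.

```python
def handle_run_outcome(status: str) -> str:
    """Map a run status to a follow-up action (illustrative actions only)."""
    if status == "completed":
        return "deliver_response"
    if status == "failed":
        return "retry_with_fallback_model"
    if status in ("cancelled", "expired"):
        return "escalate_to_human"
    return "keep_polling"  # queued / in_progress: not terminal yet
```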
Don't limit your AI Agent to basic conversations. Watch how to configure and add powerful tools that make your agent smarter and more functional.

Privacy & Security
At Tars, we take privacy and security very seriously. We are compliant with GDPR, ISO, SOC 2, and HIPAA.