
Langbase
Your Tars AI agent taps into Langbase's serverless AI infrastructure to create pipes, search memory stores, and manage conversation threads. Build AI-on-AI workflows where your customer-facing agent orchestrates Langbase's composable backend in real time.

Pipes, memory, threads, and document retrieval: your AI agent orchestrates Langbase's entire serverless platform through natural conversation.
Use Cases
See how development teams use Tars AI agents to orchestrate Langbase pipes, memory, and threads without switching to a separate dashboard.
A developer asks the agent to spin up a new GPT-4 pipe for processing customer feedback. Your AI Agent calls Langbase's Pipe Create API with the specified model, temperature, and content moderation settings, then returns the pipe's API key and endpoint URL. The developer starts integrating immediately. No dashboard navigation required.
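Under the hood, that exchange maps to a single SDK call. Here is a minimal sketch using Langbase's TypeScript SDK; the `pipes.create` method follows Langbase's published SDK, but the pipe name, the setting values, and the `moderate` flag are illustrative assumptions, so verify parameter names against the current Langbase docs.

```ts
import { Langbase } from 'langbase';

// Assumes LANGBASE_API_KEY is set in the environment (see the FAQ below).
const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

async function createFeedbackPipe() {
  // Model in 'provider:model_id' format; temperature kept low for
  // consistent feedback classification. All values here are examples.
  const pipe = await langbase.pipes.create({
    name: 'customer-feedback-pipe',
    model: 'openai:gpt-4',
    temperature: 0.3,
    moderate: true, // content moderation setting (assumed flag name)
  });

  // The create response carries the new pipe's details, including the
  // API key and endpoint URL the developer needs to start integrating.
  console.log(pipe);
}

createFeedbackPipe();
```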
A customer asks a technical question about your product. Your AI Agent searches Langbase's memory stores, finds the relevant document chunks with semantic similarity, and delivers a precise answer drawn from your internal knowledge base. The customer gets expert-level support instantly. Your documentation investment pays off in every conversation.
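The retrieval step behind that answer looks roughly like this; `memories.retrieve` is part of Langbase's SDK, while the memory name, the `topK` value, and the logged fields are assumptions made to illustrate the flow.

```ts
import { Langbase } from 'langbase';

const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

async function answerFromDocs(question: string) {
  // Semantic search across a memory store; 'product-docs' is a
  // hypothetical store name.
  const chunks = await langbase.memories.retrieve({
    query: question,
    memory: [{ name: 'product-docs' }],
    topK: 4, // fetch the four most similar chunks
  });

  // Each chunk pairs matched text with a similarity score; the agent
  // grounds its reply in these snippets.
  for (const chunk of chunks) {
    console.log(chunk.similarity, chunk.text);
  }
}

answerFromDocs('How do I rotate my API keys?');
```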
A customer follows up on a complex issue discussed last week. Your AI Agent retrieves the original Langbase thread by ID, pulls the full message history, and summarizes the previous context. The support conversation picks up seamlessly without asking the customer to repeat themselves.
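A sketch of that resume-the-conversation step, assuming the SDK exposes thread history via `threads.messages.list`; the thread ID and message fields are illustrative.

```ts
import { Langbase } from 'langbase';

const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

async function recapThread(threadId: string) {
  // Pull the full message history for the earlier conversation.
  const messages = await langbase.threads.messages.list({ threadId });

  // The agent summarizes this history so the customer never has to
  // repeat themselves.
  for (const message of messages) {
    console.log(`${message.role}: ${message.content}`);
  }
}

recapThread('thread_abc123'); // hypothetical thread ID
```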

Langbase FAQs
How does the agent create a new Langbase pipe?
The agent calls Langbase's Pipe Create API with parameters you specify, including model provider, temperature, max tokens, and moderation settings. It returns the new pipe's name, API key, and endpoint URL. You can configure advanced options like frequency penalty, tool choice, and streaming directly through the conversation.
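As a rough illustration of those advanced options in one call (the parameter names mirror common LLM config fields and Langbase's pipe settings, but treat each one as an assumption to check against the docs):

```ts
import { Langbase } from 'langbase';

const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

async function createSupportPipe() {
  // Every knob below is an example value; names like frequency_penalty
  // and tool_choice should be confirmed in Langbase's pipe reference.
  const pipe = await langbase.pipes.create({
    name: 'support-pipe',
    model: 'anthropic:claude-3',
    temperature: 0.7,
    max_tokens: 1000,
    frequency_penalty: 0.5,
    tool_choice: 'auto',
    stream: true,
  });

  console.log(pipe);
}

createSupportPipe();
```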
Can the agent search my Langbase memory stores and documents?
Yes. The agent lists all memory objects and their documents using Langbase's Memory and Document List APIs. It can retrieve document metadata and optionally include vector embeddings for semantic search. This enables RAG-powered answers drawn directly from your knowledge base during conversations.
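A sketch of that inventory pass, assuming `memories.list` and a nested `memories.documents.list` as in Langbase's SDK; the field names on the returned objects are assumptions.

```ts
import { Langbase } from 'langbase';

const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

async function listKnowledgeBase() {
  // Enumerate every memory store on the account...
  const memories = await langbase.memories.list();

  for (const memory of memories) {
    console.log(`Memory: ${memory.name}`);

    // ...then the documents inside each store.
    const docs = await langbase.memories.documents.list({
      memoryName: memory.name,
    });
    for (const doc of docs) {
      console.log(`  Document: ${doc.name}`);
    }
  }
}

listKnowledgeBase();
```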
What credentials does Tars need to connect to Langbase?
Tars requires your Langbase API key, available in your Langbase account settings. This key grants access to pipes, memory, threads, and documents. You can regenerate or restrict the key at any time without affecting other Langbase integrations.
Does Tars store or cache my Langbase data?
No. Tars queries Langbase APIs in real time. Document content, memory objects, and thread messages are fetched live during each conversation and are not cached or stored outside that interaction context.
Can the agent manage conversation threads?
Yes. The agent creates new threads and retrieves existing ones by thread ID using Langbase's Thread APIs. It can list all messages within a thread with pagination support, making it easy to maintain context across multi-session customer interactions.
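And the thread-creation side, sketched under the same assumptions (`threads.create` accepting optional metadata and seed messages; the metadata tag is hypothetical):

```ts
import { Langbase } from 'langbase';

const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

async function startSupportThread() {
  // Open a thread for a multi-session support conversation.
  const thread = await langbase.threads.create({
    metadata: { customer: 'acme-co' }, // hypothetical tag for later lookup
    messages: [
      { role: 'user', content: 'My webhook deliveries are failing.' },
    ],
  });

  // Store the thread ID so a later session can resume with full context.
  console.log('Resume later with thread ID:', thread.id);
}

startSupportThread();
```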
What does Langbase's chunker do?
Langbase's chunker splits large text into smaller segments for embedding and retrieval. The agent can chunk content semantically (preserving sentence boundaries) or by fixed character size, with configurable overlap. This is useful for preparing documents before adding them to memory stores.
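For example, a document-preparation step might call the chunker like this; `chunkMaxLength` and `chunkOverlap` are our guesses at the parameter names, so confirm them in the chunker reference.

```ts
import { Langbase } from 'langbase';

const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

async function prepareForMemory(content: string) {
  // Split long text into overlapping segments before embedding.
  const chunks = await langbase.chunker({
    content,
    chunkMaxLength: 1024, // max characters per chunk (assumed name)
    chunkOverlap: 256,    // characters shared between neighboring chunks
  });

  console.log(`Produced ${chunks.length} chunks ready for a memory store.`);
}

prepareForMemory('...long document text...');
```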
Why use Tars instead of the Langbase dashboard?
The Langbase dashboard requires manual navigation for every action. Through Tars, your team or customers can create pipes, query memories, and manage threads via natural language. This is especially powerful for non-technical users who need AI infrastructure access without learning the platform.
Which models can the agent use when creating a pipe?
Langbase supports over 250 LLMs including OpenAI, Anthropic, and Google models. When creating a pipe, the agent accepts a model parameter in 'provider:model_id' format, such as 'openai:gpt-4' or 'anthropic:claude-3'. You specify the model and the agent handles the rest.
Don't limit your AI Agent to basic conversations. Watch how to configure and add powerful tools that make your agent smarter and more capable.

Privacy & Security
At Tars, we take privacy and security very seriously. We are compliant with GDPR, ISO, SOC 2, and HIPAA.