
GroqCloud
Groq delivers LLM responses at 300+ tokens per second, up to ten times faster than GPU-based alternatives. Your Tars AI agent leverages that speed to answer complex customer questions in milliseconds, turning sluggish chatbot interactions into truly real-time conversations.

From chat completions to audio translation, your agent calls Groq's inference endpoints at speeds that make customer wait times disappear.
See how businesses leverage Groq's LPU speed to deliver instant AI-powered answers, translate multilingual voice messages, and explore model capabilities.
A developer asks your AI Agent a complex question about your API's rate limiting behavior. The agent sends the full conversation context to GroqCloud's chat completion endpoint running Llama on Groq's LPU hardware. The response arrives in under 500 milliseconds. The developer gets a detailed, accurate answer faster than they could type a follow-up question. Your support team handles fewer escalated technical tickets.
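As an illustration, here is a minimal sketch of such a call against Groq's OpenAI-compatible chat completions endpoint using Python's requests library. The model id and the messages are placeholders, not what Tars sends internally.

    import os
    import requests

    # Minimal sketch: send the conversation context to Groq's
    # OpenAI-compatible chat completions endpoint. The model id and
    # messages below are illustrative placeholders.
    resp = requests.post(
        "https://api.groq.com/openai/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
        json={
            "model": "llama-3.3-70b-versatile",  # check the live catalog
            "messages": [
                {"role": "system", "content": "You answer API support questions."},
                {"role": "user", "content": "Why do I hit rate limits at 50 req/s?"},
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])
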
A customer sends a WhatsApp voice note in Portuguese asking about their subscription renewal. Your AI Agent sends the audio to Groq's Whisper translation endpoint, receives the English transcript instantly, formulates a response, and replies in both English and Portuguese. The customer feels understood regardless of language, and your team does not need multilingual staff on every shift.
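A minimal sketch of that translation step, assuming the OpenAI-compatible audio translations endpoint with Whisper Large V3; the file name is hypothetical.

    import os
    import requests

    # Sketch: upload a voice note and receive an English transcript back.
    # The endpoint accepts multipart form data with "file" and "model".
    with open("voice_note.ogg", "rb") as audio:
        resp = requests.post(
            "https://api.groq.com/openai/v1/audio/translations",
            headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
            files={"file": audio},
            data={"model": "whisper-large-v3"},
            timeout=60,
        )
    resp.raise_for_status()
    print(resp.json()["text"])  # English transcript
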
Your AI Agent receives a question that requires deep reasoning rather than a quick factual answer. It queries GroqCloud's model list, identifies the best model for the task based on context window and capabilities, and routes the request accordingly. Simple questions use a smaller, faster model. Complex questions get the more capable one. Cost stays optimized while answer quality stays high.
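A sketch of that routing step, assuming the models list endpoint returns a "data" array of model objects; the "context_window" field and the routing heuristic are assumptions for illustration, not Tars's actual logic.

    import os
    import requests

    # Sketch: fetch the current model catalog, then route by a simple
    # heuristic. Fields beyond "id" (here "context_window") are assumptions;
    # inspect the real response before relying on them.
    resp = requests.get(
        "https://api.groq.com/openai/v1/models",
        headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
        timeout=10,
    )
    resp.raise_for_status()
    models = resp.json()["data"]

    def pick_model(needs_deep_reasoning: bool) -> str:
        # Hypothetical routing rule: largest context window for hard
        # questions, smallest for quick factual ones. Real routing would
        # also filter on capability (e.g. exclude audio-only models).
        ranked = sorted(models, key=lambda m: m.get("context_window", 0))
        return ranked[-1]["id"] if needs_deep_reasoning else ranked[0]["id"]
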

FAQs
Groq uses a custom Language Processing Unit (LPU) chip designed specifically for LLM inference. Unlike GPUs, which stream model weights in from external memory, the LPU holds weights in on-chip SRAM and executes deterministically, sustaining 300+ tokens per second. This translates to sub-second response times your customers actually notice.
GroqCloud hosts models including Llama 4 Scout, Llama 3 70B, Mixtral 8x7B, and Whisper for audio processing. Your agent can call the list models endpoint to see the current catalog at any time. Groq regularly adds new models as they become available.
The audio translation endpoint uses Whisper Large V3, which supports dozens of source languages. Your agent sends the audio file, and Groq returns an English transcript. For best results with less common languages, you can provide an optional prompt to guide the translation context.
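Building on the translation sketch above, the optional prompt goes into the same form fields; the wording here is purely illustrative.

    data = {
        "model": "whisper-large-v3",
        # Optional context hint to steer the translation; illustrative wording.
        "prompt": "Customer voicemail about a subscription renewal.",
    }
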
No. Tars sends conversation context to GroqCloud in real time and uses the generated response only within the current conversation. Neither the prompts nor the completions are persisted by Tars after the interaction ends. Groq's own data retention policies apply on their side.
When streaming is enabled, GroqCloud returns tokens incrementally via server-sent events. Your Tars agent relays these tokens to the customer in real time, so they see the response appear progressively. This eliminates the perceived delay on longer answers and keeps the conversation feeling responsive.
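For illustration, here is a sketch of consuming that stream with plain Python, parsing the server-sent events by hand; Groq's SDKs wrap this for you. The model id and prompt are placeholders.

    import json
    import os
    import requests

    # Sketch: request a streamed completion and relay tokens as they arrive.
    resp = requests.post(
        "https://api.groq.com/openai/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
        json={
            "model": "llama-3.3-70b-versatile",
            "messages": [{"role": "user", "content": "Explain rate limiting."}],
            "stream": True,
        },
        stream=True,
        timeout=30,
    )
    for line in resp.iter_lines():
        if not line.startswith(b"data: "):
            continue  # skip blank keep-alive lines
        payload = line[len(b"data: "):]
        if payload == b"[DONE]":
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        print(delta.get("content") or "", end="", flush=True)
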
GroqCloud's primary advantage is raw inference speed from custom LPU hardware. Where GPU-based providers return responses in 2-5 seconds, Groq often returns them in under 500 milliseconds. If your use case requires the lowest possible latency for customer-facing conversations, Groq is purpose-built for that.
Yes. The chat completion endpoint accepts temperature (0 to 2), top_p, max_completion_tokens, and stop sequences. Your agent can adjust these parameters per conversation context, using low temperature for factual answers and higher values for creative suggestions.
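For example, a request body combining those parameters might look like the following sketch; the values are illustrative, not recommendations.

    # Sketch of the tunable sampling parameters on a chat completion request.
    request_body = {
        "model": "llama-3.3-70b-versatile",  # illustrative model id
        "messages": [{"role": "user", "content": "Summarize our refund policy."}],
        "temperature": 0.2,            # low: favor factual, consistent answers
        "top_p": 0.9,                  # nucleus sampling cutoff
        "max_completion_tokens": 512,  # cap on generated length
        "stop": ["\n\nUser:"],         # illustrative stop sequence
    }
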
GroqCloud operates on a pay-as-you-go model based on tokens processed. Any account with a valid API key works with Tars. You are billed by Groq for token usage. Check GroqCloud's pricing page for current per-model rates to estimate costs based on your conversation volume.
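A back-of-envelope estimate follows directly from those numbers; every figure below is a placeholder, so substitute your own conversation volume and Groq's published per-model rate.

    # Back-of-envelope monthly estimate. All numbers are placeholders,
    # not real Groq rates; see GroqCloud's pricing page.
    conversations_per_month = 10_000
    tokens_per_conversation = 1_500   # prompt + completion, assumed average
    usd_per_million_tokens = 0.60     # placeholder rate
    monthly_cost = conversations_per_month * tokens_per_conversation / 1e6 * usd_per_million_tokens
    print(f"Estimated spend: ${monthly_cost:,.2f}/month")
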
Don't limit your AI Agent to basic conversations. Watch how to configure and add powerful tools that make your agent smarter and more functional.

Privacy & Security
At Tars, we take privacy and security very seriously. We are compliant with GDPR, ISO, SOC 2, and HIPAA.