Databricks

Your AI agent queries Databricks lakehouse data while customers wait

Customers ask complex questions about their accounts, usage, and analytics. Your AI agent runs SQL statements against Databricks warehouses, retrieves job results, and pulls catalog data in real time. Insights that used to require a data team request now arrive in the conversation within seconds.

Chosen by 800+ global brands across industries

Lakehouse intelligence inside every conversation

Your AI agent executes SQL queries, monitors job pipelines, and accesses Unity Catalog data through Databricks, turning your data lakehouse into a conversational interface.


Use Cases

Data answers delivered conversationally

Explore how AI agents make Databricks lakehouse data accessible to anyone who can type a question, eliminating the gap between business users and data teams.

Customer Account Metrics on Demand

An account manager asks "What's the monthly usage for Acme Corp?" Your AI agent executes a SQL query against the Databricks SQL warehouse, joining the customers and usage_events tables filtered by the account name and date range. The results come back in seconds with total usage, peak activity periods, and month-over-month changes. The account manager has the data for their call without filing a single analytics request.

Pipeline Failure Alerts Resolved in Chat

An engineer messages "Why did the data pipeline fail last night?" Your AI agent checks the latest Databricks job runs, identifies the failed run, retrieves the error logs and task-level details, and presents the root cause. The engineer gets diagnostic information immediately and can trigger a rerun from the same conversation. Mean time to resolution drops from hours to minutes.

Self-Service Data Discovery for Business Teams

A marketing analyst asks "What customer data tables are available for campaign analysis?" Your AI agent queries Databricks Unity Catalog, retrieves the relevant schemas and table descriptions, lists available columns with their data types, and suggests which tables would answer their campaign performance questions. Business teams explore data independently without waiting for a data engineer to respond.

Try Databricks

FAQs

Frequently Asked Questions

How does the AI agent execute SQL queries against my Databricks warehouse?

The agent uses Databricks' SQL Statement Execution API to run queries against your SQL warehouse endpoints. It submits the SQL statement, waits for execution, and retrieves the result set. Your warehouse's compute resources handle the processing, and results return through the API. The agent translates natural language questions into appropriate SQL based on your schema.
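As an illustration, a statement submission can be sketched as follows. This builds the request for the Statement Execution API without sending it; the workspace URL and warehouse ID are made-up placeholders you would replace with your own:

```python
import json

# Hypothetical placeholders -- substitute your workspace URL and warehouse ID.
DATABRICKS_HOST = "https://my-workspace.cloud.databricks.com"
WAREHOUSE_ID = "abc123def456"

def build_statement_request(host: str, warehouse_id: str, sql: str) -> tuple[str, dict]:
    """Build the URL and JSON body for Databricks' SQL Statement Execution API.

    The agent POSTs this payload with a Bearer token, then polls the
    returned statement_id until the result set is available.
    """
    url = f"{host}/api/2.0/sql/statements"
    payload = {
        "warehouse_id": warehouse_id,
        "statement": sql,
        "wait_timeout": "30s",  # server-side wait before the call returns
    }
    return url, payload

url, payload = build_statement_request(
    DATABRICKS_HOST, WAREHOUSE_ID,
    "SELECT account, SUM(usage) AS total FROM usage_events GROUP BY account",
)
print(url)
print(json.dumps(payload, indent=2))
```

The `wait_timeout` field lets short queries return synchronously; anything slower falls back to polling by statement ID.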

What Databricks credentials does Tars need to connect?

Tars requires your Databricks workspace URL and a Personal Access Token (PAT). You generate the PAT from User Settings > Developer > Access tokens in your Databricks workspace. The token inherits your user-level permissions, so the agent can only access resources you have permission to use. For tighter scoping, you can create a dedicated service principal with restricted access and use its token for the integration.
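Concretely, the connection boils down to those two values. A minimal sketch of the auth header every Databricks REST call carries (the token below is a fake placeholder; in practice it should come from a secret store, never source code):

```python
def databricks_auth_headers(token: str) -> dict:
    """Headers sent with every Databricks REST API call.

    The PAT goes in a standard Bearer Authorization header; no other
    credential material is needed once the workspace URL is known.
    """
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }

# Placeholder token -- never hard-code a real PAT.
headers = databricks_auth_headers("dapi_fake_example_token")
print(headers["Authorization"])
```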

Can I restrict which tables and schemas the AI agent can query?

Yes. Databricks Unity Catalog enforces table-level and column-level access control. Create a service principal with read-only access to specific schemas and tables, generate a token for it, and use that token for the Tars connection. The agent inherits those restrictions and cannot query tables outside the permitted scope.
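That scoping is plain Unity Catalog SQL. A sketch of the grants an admin might run for a read-only principal (the catalog, schema, and principal names here are hypothetical):

```python
def read_only_grants(catalog: str, schema: str, principal: str) -> list[str]:
    """Generate Unity Catalog GRANT statements for a read-only principal.

    USE CATALOG / USE SCHEMA make the objects visible; SELECT granted at
    schema level cascades to its tables. No write or create privileges.
    """
    return [
        f"GRANT USE CATALOG ON CATALOG {catalog} TO `{principal}`;",
        f"GRANT USE SCHEMA ON SCHEMA {catalog}.{schema} TO `{principal}`;",
        f"GRANT SELECT ON SCHEMA {catalog}.{schema} TO `{principal}`;",
    ]

for stmt in read_only_grants("main", "analytics", "tars-agent"):
    print(stmt)
```

Granting SELECT at the schema rather than per table means new tables added to the schema are queryable without further grants, which may or may not be what you want.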

Does Tars store query results or data from my Databricks lakehouse?

No. Tars executes queries against your Databricks warehouse in real time and uses the results only to formulate the conversation response. Query results, table data, and catalog metadata are not cached or stored by Tars. Your data governance policies remain fully intact.

Can the agent monitor and manage Databricks job pipelines?

Yes. The agent can list job runs, check run status, view task-level results, cancel running jobs, and trigger new runs through Databricks' Jobs API. This is particularly useful for data engineering teams who want quick pipeline status checks and the ability to rerun failed jobs without opening the Databricks workspace.
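Under the hood these are ordinary Jobs API 2.1 calls. A sketch of the request shapes involved (the job and run IDs are made up; this maps actions to requests without sending them):

```python
def jobs_api_request(host: str, action: str, **ids) -> tuple[str, str, dict]:
    """Map a pipeline action to its Databricks Jobs API 2.1 request.

    Returns (HTTP method, URL, JSON body). Listing runs takes job_id as
    a query parameter; run-now and cancel take IDs in the POST body.
    """
    base = f"{host}/api/2.1/jobs"
    if action == "list_runs":
        return "GET", f"{base}/runs/list?job_id={ids['job_id']}", {}
    if action == "run_now":
        return "POST", f"{base}/run-now", {"job_id": ids["job_id"]}
    if action == "cancel_run":
        return "POST", f"{base}/runs/cancel", {"run_id": ids["run_id"]}
    raise ValueError(f"unknown action: {action}")

# e.g. rerunning a failed pipeline from chat:
method, url, body = jobs_api_request("https://my-workspace.cloud.databricks.com",
                                     "run_now", job_id=42)
print(method, url, body)
```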

What happens if a SQL query takes too long to execute?

The agent handles long-running queries by checking execution status periodically. If a query exceeds a configurable timeout, the agent tells the user the query is still processing and offers to check back. You can also set guardrails on query complexity to prevent expensive full-table scans from being triggered through conversation.
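The polling behaviour can be sketched generically. The terminal state names below follow the Statement Execution API; the timeout and interval values are arbitrary examples:

```python
import time

def poll_statement(get_status, timeout_s: float = 30.0, interval_s: float = 1.0):
    """Poll a status callable until a terminal state or the timeout.

    get_status() returns a dict like {"state": "PENDING" | "RUNNING" |
    "SUCCEEDED" | "FAILED" | "CANCELED"}. Returns the final status dict,
    or None if the query is still running when the timeout elapses --
    the cue for the agent to say "still processing, I'll check back."
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status.get("state") in {"SUCCEEDED", "FAILED", "CANCELED"}:
            return status
        time.sleep(interval_s)
    return None
```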

How is this different from Databricks' built-in AI Assistant?

Databricks Assistant works within the Databricks workspace UI, helping users write SQL and navigate notebooks. Tars brings Databricks data access to external channels like your website, Slack, WhatsApp, or internal tools. People who never log into Databricks can still ask data questions and get answers powered by your lakehouse.

Can the agent access both SQL warehouses and all-purpose clusters?

Yes. The agent can interact with SQL warehouses for query execution and all-purpose clusters for broader compute tasks. It can also start and stop clusters, check cluster state, and manage cluster permissions. The specific compute resource used depends on the type of operation, with SQL warehouses optimized for analytical queries and clusters for processing workloads.
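For cluster operations the relevant state machine is small. A sketch of the "start it if needed" decision (the state strings are the documented Clusters API lifecycle values; the helper itself is our illustration):

```python
# Documented Databricks cluster lifecycle states (Clusters API 2.0).
READY = {"RUNNING"}
TRANSITIONAL = {"PENDING", "RESTARTING", "RESIZING", "TERMINATING"}
STOPPED = {"TERMINATED"}

def next_cluster_action(state: str) -> str:
    """Decide what an agent should do before routing work to a cluster."""
    if state in READY:
        return "submit"   # cluster is up -- run the workload
    if state in TRANSITIONAL:
        return "wait"     # poll /api/2.0/clusters/get until RUNNING
    if state in STOPPED:
        return "start"    # POST /api/2.0/clusters/start, then wait
    return "error"        # ERROR / UNKNOWN -- surface to the user

print(next_cluster_action("TERMINATED"))
```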

How to add Tools to your AI Agent

Supercharge your AI Agent with Tool Integrations

Don't limit your AI agent to basic conversations. Watch how to configure and add powerful tools that make your agent smarter and more functional.

Privacy & Security

We’ll never let you lose sleep over privacy and security concerns

At Tars, we take privacy and security very seriously. We are compliant with GDPR, ISO, SOC 2, and HIPAA.


Still scrolling? We both know you're interested.

Let's chat about AI Agents the old-fashioned way. Get a demo tailored to your requirements.

Schedule a Demo