Agent accuracy

Your team asks questions. They get the right answers.

A semantic layer tells agents exactly what to query.


Most AI queries return wrong answers because the agent doesn't understand your schema. The semantic layer gives it measures, dimensions, and descriptions, so every answer is grounded in your actual data.

Capabilities

What the semantic layer gives you.

Measures, dimensions, descriptions. In plain English.

Five MCP tools on every deploy: list_tables, describe_table, column_stats, search, query.

Self-correcting. A wrong table name triggers a helpful error the agent can recover from.

Agents query but never modify.

Raw data always preserved.
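The five tools above can be pictured as a small dispatch table. This is a hypothetical sketch, not loony's implementation: handler bodies are stubs for illustration, and only the table names come from the `loony describe` output shown on this page.

```python
# Hypothetical sketch: the five MCP tools as a minimal dispatch table.
# Handler bodies are stubs; the real tools query your warehouse.

def list_tables() -> list[str]:
    # Table names taken from the `loony describe` example on this page.
    return ["raw_stripe_charges", "raw_zendesk_tickets", "stg_revenue_support"]

def describe_table(table: str) -> dict:
    return {"table": table, "columns": []}          # stub

def column_stats(table: str, column: str) -> dict:
    return {"min": None, "max": None, "nulls": 0}   # stub

def search(term: str) -> list[str]:
    return [t for t in list_tables() if term in t]

def query(sql: str) -> list[tuple]:
    return []                                       # stub; read-only by design

# The surface an agent sees: five named tools, nothing else.
TOOLS = {f.__name__: f for f in (list_tables, describe_table, column_stats, search, query)}

print(sorted(TOOLS))
```

An agent never writes SQL against a mystery schema; it discovers tables with `list_tables`, inspects them with `describe_table` and `column_stats`, and only then calls `query`.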

loony describe

Tables:
  raw_stripe_charges       12,847 rows   append
  raw_zendesk_tickets       3,291 rows   append
  stg_revenue_support         892 rows   replace

Views:
  mv_account_health           892 rows

Measures:
  total_revenue    sum(amount)     "Total revenue in USD"
  ticket_count     count(id)       "Number of open tickets"

Dimensions:
  account          string          "Customer account name"
  month            date            "Calendar month"
  status           string          "healthy, watch, or alert"
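The measures and dimensions printed above can be modeled as plain data. This is a hedged sketch: the dict structure and the `resolve` helper are assumptions for illustration, but the names, SQL expressions, and descriptions all come from the `loony describe` output.

```python
# Hypothetical model of the semantic layer shown by `loony describe`.
# The structure is an assumption; the entries mirror the output above.

SEMANTIC_LAYER = {
    "measures": {
        "total_revenue": {"sql": "sum(amount)", "description": "Total revenue in USD"},
        "ticket_count": {"sql": "count(id)", "description": "Number of open tickets"},
    },
    "dimensions": {
        "account": {"type": "string", "description": "Customer account name"},
        "month": {"type": "date", "description": "Calendar month"},
        "status": {"type": "string", "description": "healthy, watch, or alert"},
    },
}

def resolve(measure: str) -> str:
    """Resolve a measure name to its SQL, or fail with the known names."""
    measures = SEMANTIC_LAYER["measures"]
    if measure not in measures:
        # The helpful error that lets an agent self-correct.
        raise KeyError(f"unknown measure {measure!r}; known: {sorted(measures)}")
    return measures[measure]["sql"]

print(resolve("total_revenue"))  # sum(amount)
```

Because "revenue" resolves to one definition, every team that asks for it gets the same `sum(amount)`, and a typo comes back with the list of valid names instead of a silent wrong answer.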

Teams

How teams use it.

A PM asks “what's our churn rate this quarter?” and the semantic layer ensures the agent queries the correct defined measure.

Two analysts in different teams ask about revenue and get the same answer because it's defined once in the semantic layer.

Agents discover available tables, understand the schema, and write correct queries through five MCP tools that ship with every deploy.

loony query
tool: query
input: "revenue by account, flag accounts where tickets > 10"

Resolved: total_revenue (sum), ticket_count (count)
Grouped: account
Filter: ticket_count > 10

account          revenue      tickets   status
Initech          $8,200       14        alert
Globex Corp      $4,100       12        alert
Umbrella Inc     $2,900       18        alert

3 rows · 8ms · read-only
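The `read-only` marker in the output above can be enforced with a simple guard. This is an illustrative sketch, not loony's actual check: the keyword list and function are assumptions showing the idea that agents may read but never modify.

```python
import re

# Hypothetical read-only guard: pass SELECTs through, reject anything
# that would modify the warehouse. Not loony's actual implementation.
WRITE_KEYWORDS = re.compile(
    r"^\s*(insert|update|delete|drop|alter|create|truncate)\b",
    re.IGNORECASE,
)

def guard(sql: str) -> str:
    """Return the SQL unchanged if it is read-only, else refuse it."""
    if WRITE_KEYWORDS.match(sql):
        raise PermissionError("write statements are rejected; agents are read-only")
    return sql

guard("SELECT account, sum(amount) FROM stg_revenue_support GROUP BY account")
```

Raw data stays preserved because the worst an agent can do is read it.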

Agents that answer correctly.