Pipelines

Data pipelines your agent can build.

Describe what you need. Your AI agent writes the scripts and SQL. Loony validates, deploys, and runs it on a schedule.


Anyone can ask an agent to build a pipeline. The question is whether your org can let them, safely, without the data team reviewing every line.

Capabilities

What Loony adds on top of what your agent builds.

Agent-guided scaffolding. loony init ships skills and rules that teach your agent how to write correct dlt scripts and SQL transforms.

Schema contracts. Types, keys, and sync modes validated on every deploy. If the agent's output doesn't match the contract, it doesn't ship.
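The kind of check a deploy-time contract performs can be sketched in plain Python. The contract shape and the `validate` function below are hypothetical illustrations of the idea (declared types, primary keys, sync mode; mismatches block the deploy), not Loony's actual API:

```python
# Hypothetical sketch of a deploy-time schema contract check.
# The contract shape and function names are illustrative, not Loony's API.

CONTRACT = {
    "raw_stripe_charges": {
        "columns": {"id": str, "amount": int, "created": str},
        "primary_key": "id",
        "sync_mode": "merge",
    },
}

def validate(table: str, rows: list[dict]) -> list[str]:
    """Return a list of contract violations; an empty list means the deploy ships."""
    spec = CONTRACT.get(table)
    if spec is None:
        return [f"unknown table: {table}"]
    errors = []
    for i, row in enumerate(rows):
        if spec["primary_key"] not in row:
            errors.append(f"row {i}: missing primary key {spec['primary_key']!r}")
        for col, typ in spec["columns"].items():
            if col in row and not isinstance(row[col], typ):
                errors.append(f"row {i}: {col} should be {typ.__name__}")
    return errors

ok = validate("raw_stripe_charges",
              [{"id": "ch_1", "amount": 1200, "created": "2024-01-01"}])
bad = validate("raw_stripe_charges", [{"amount": "12.00"}])  # wrong type, no key
```

If the list of violations is non-empty, the agent's output doesn't ship; otherwise the tables deploy as declared.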

SQL transforms. Clean, join, and aggregate raw data into query-ready views. Your agent writes them; Loony runs them in order after every sync.
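The ordering guarantee matters: later transforms can build on earlier ones. A minimal sketch with stdlib sqlite3, using the table and file names from the example deploy (the runner and the SQL bodies are illustrative, not Loony's engine):

```python
import sqlite3

# Raw tables as a sync might leave them (toy data).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE raw_stripe_charges (account TEXT, amount INTEGER)")
db.execute("CREATE TABLE raw_zendesk_tickets (account TEXT, status TEXT)")
db.executemany("INSERT INTO raw_stripe_charges VALUES (?, ?)",
               [("acme", 1200), ("acme", 800), ("globex", 500)])
db.executemany("INSERT INTO raw_zendesk_tickets VALUES (?, ?)",
               [("acme", "open"), ("globex", "solved")])

# Numbered transforms, applied in filename order after every sync.
transforms = {
    "001_staged.sql": """
        CREATE VIEW stg_revenue AS
        SELECT account, SUM(amount) AS revenue
        FROM raw_stripe_charges GROUP BY account
    """,
    "002_account_health.sql": """
        CREATE VIEW stg_account_health AS
        SELECT r.account, r.revenue, COUNT(t.status) AS open_tickets
        FROM stg_revenue r
        LEFT JOIN raw_zendesk_tickets t
          ON t.account = r.account AND t.status = 'open'
        GROUP BY r.account, r.revenue
    """,
}
for name in sorted(transforms):
    db.executescript(transforms[name])

rows = db.execute(
    "SELECT account, revenue, open_tickets FROM stg_account_health ORDER BY account"
).fetchall()
```

Because `002_account_health.sql` selects from the view created by `001_staged.sql`, running the files out of order would fail; the numbered sequence makes the dependency explicit.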

Scheduled runs. Cron on Loony infrastructure. Your pipeline runs whether you're watching or not.
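For a fixed interval like "every 4 hours", the next run is just the next N-hour boundary on the clock. A minimal sketch of that calculation (not Loony's scheduler):

```python
from datetime import datetime, timedelta, timezone

def next_run(now: datetime, every_hours: int) -> datetime:
    """Next wall-clock boundary for an 'every N hours' schedule (cron: 0 */N * * *)."""
    boundary = now.replace(minute=0, second=0, microsecond=0)
    # Round down to the previous N-hour boundary, then step forward one interval.
    boundary -= timedelta(hours=boundary.hour % every_hours)
    return boundary + timedelta(hours=every_hours)

now = datetime(2024, 5, 1, 13, 27, tzinfo=timezone.utc)
print(next_run(now, 4))  # 2024-05-01 16:00:00+00:00
```

A schedule anchored at deploy time rather than at clock boundaries would shift the phase but use the same arithmetic.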

REST + MCP endpoints. Every deploy is instantly queryable. Your team asks questions through Claude; your APIs serve the answers.
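What "instantly queryable" can look like over REST: a read-only rows endpoint, sketched with only the standard library. The `/tables/<name>` route shape and in-memory data are assumptions for illustration, not Loony's documented API:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy in-memory data standing in for deployed pipeline output.
TABLES = {"stg_account_health": [{"account": "acme", "revenue": 2000}]}

class TableHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hypothetical route shape: GET /tables/<table_name>
        name = self.path.rsplit("/", 1)[-1]
        rows = TABLES.get(name)
        body = json.dumps(rows if rows is not None
                          else {"error": "no such table"}).encode()
        self.send_response(200 if rows is not None else 404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep request logging quiet
        pass

server = HTTPServer(("127.0.0.1", 0), TableHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/tables/stg_account_health") as resp:
    payload = json.loads(resp.read())
server.shutdown()
```

An MCP endpoint exposes the same tables as tools an agent can call, rather than URLs a browser can fetch.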

loony deploy

✓ Validated · 2 scripts, 4 tables
✓ stripe_charges.py · 12,847 rows
✓ zendesk_tickets.py · 3,291 rows
✓ transforms/001_staged.sql
✓ transforms/002_account_health.sql
✓ Schedule registered · every 4 hours
✓ REST + MCP endpoints live

Deploy complete (24s)

Teams

How teams use it.

The sales team needs revenue data joined with support tickets, refreshed every four hours. Their agent builds the pipeline and Loony keeps it running.

Finance wants billing reconciliation across Stripe and the internal database. They describe what they need and it's live by end of day.

The CEO asks for a weekly executive dashboard. It runs every Monday morning without anyone thinking about it.

loony describe

Tables:
  raw_stripe_charges       12,847 rows
  raw_zendesk_tickets       3,291 rows
  stg_account_health          892 rows

Schedule: every 4 hours
Next run: 14:00 UTC

Your agent already writes code. Give it infrastructure it can trust.