Andjel Boskovic

Systems Builder — AI Workflows & Developer Tools

I build data pipelines, developer tools, and the automation that connects them. Most of my work now involves AI agents running against live systems.

I was employee #9 at a marketplace that grew to $20M+/year. I built the systems that brought customers in — ad automation, market expansion tooling, internal dashboards. We covered hundreds of markets with five people doing what was scoped for twenty.

Then I joined a blockchain infrastructure company and owned the developer CLI — got it to 1,000+ teams onboarding themselves. Now I'm co-founding a dental tourism marketplace, building the data layer and using AI agents for work I used to do by hand.

Most AI coding tools generate code in isolation. The useful ones loop back into the systems they modify.

Systems

  1. Autonomous SQL Pipeline

    BookingDentist

    I pointed Claude Code at our BigQuery warehouse and let it discover the schema, write the queries, test them, and deploy. 13 production views in about 30 minutes. The trick was giving it live access to the actual tables — no docs, no guessing.

    Impact

    • Website revenue 0→20% of total
    • CAC −56%, revenue +55% YoY
    • Conversion funnel rebuilt from behavioral data

    Claude Code · MCP · BigQuery · Python
  2. Self-Serve Developer CLI

    Streamflow Finance

    Built a CLI so teams could find, set up, and deploy token vesting contracts on their own — no sales call needed. I owned the whole thing: what commands exist, how onboarding works, what happens when things go wrong.

    Impact

    • 0→1,000+ teams self-serve
    • $0→$200k/month MRR
    • Ecosystem + $3M raised

    Product · CLI · Developer Tools
  3. Automated Growth Infrastructure

    FishingBooker

    Employee #9. I built the internal tools that took the marketplace from a handful of daily bookings to over 1,000. A Google-to-Bing ad mirroring tool, a system that spins up new markets automatically — stuff that meant a team of 2–5 could do what was originally scoped for 20.

    Impact

    • Bing revenue $10k→$300k/yr
    • New market setup months→1 week
    • Team of 20 planned, 2–5 ran it

    Growth · Automation · Internal Tools

Tools

AI & Agents

Claude Code

Where I do most of my work now. I write a PRD with full schema context, the agent writes production code. The thing that actually matters is giving it live access to your database via MCP — once it can see real tables and run real queries, it stops guessing and starts being useful.

MCP

This is what makes the whole setup work. I have a BigQuery MCP server so Claude can run describe_table and execute_query against production in real time. Every SQL view in my analytics layer was written this way — the agent sees the actual schema while it works.
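
A minimal sketch of what those two tools can do behind the MCP server, assuming `google-cloud-bigquery` on the Python side. The MCP wiring itself is omitted, and the dataset and table names in the usage note are placeholders, not the production ones.

```python
# Sketch of the two tools named above, minus the MCP server plumbing.
# `client` is assumed to be a google.cloud.bigquery.Client; nothing here
# imports the library, so the helpers stay testable without it installed.

def describe_table_sql(dataset: str, table: str) -> str:
    """SQL a describe_table tool can run: live schema from INFORMATION_SCHEMA."""
    return (
        f"SELECT column_name, data_type, is_nullable\n"
        f"FROM `{dataset}.INFORMATION_SCHEMA.COLUMNS`\n"
        f"WHERE table_name = '{table}'\n"
        f"ORDER BY ordinal_position"
    )

def execute_query(client, sql: str, max_rows: int = 100) -> list[dict]:
    """execute_query tool body: run the SQL, cap the rows, return plain dicts."""
    rows = client.query(sql).result(max_results=max_rows)
    return [dict(row) for row in rows]
```

Capping rows matters more than it looks: the agent only needs enough output to check its own work, and an uncapped `SELECT *` against a warehouse table is an easy way to blow the context window.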

Claude API

For when I need an agent to run on its own without me in the loop. Batch jobs, scheduled pipelines, anything that should just work without interactive prompting.
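
The shape of such a batch job, sketched with the `anthropic` SDK. The model id and the prompt text are illustrative assumptions, not what runs in production.

```python
# Hedged sketch of a non-interactive batch run against the Claude API.
import os

def build_messages(records: list[str]) -> list[list[dict]]:
    """One single-turn conversation per record; the prompt is a placeholder."""
    return [
        [{"role": "user", "content": f"Summarize this pipeline log:\n\n{r}"}]
        for r in records
    ]

def run_batch(records: list[str], model: str = "claude-sonnet-4-20250514") -> list[str]:
    """Sequential, unattended run: no human in the loop, no interactive prompting."""
    import anthropic  # deferred so the module loads without the SDK installed
    client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    results = []
    for messages in build_messages(records):
        resp = client.messages.create(model=model, max_tokens=512, messages=messages)
        results.append(resp.content[0].text)
    return results
```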

Cursor

When I want quick inline edits instead of handing the whole task to an agent.

Data & Analytics

BigQuery

Where all the data lives. 13 production views across two datasets — funnel analytics, attribution, spend, revenue. All written and maintained through Claude Code with live schema access.
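
Deploying a view through the Python client reduces to a DDL statement plus a query call. A sketch, again assuming `google-cloud-bigquery`; the project, dataset, and view names in the test are made up.

```python
# Sketch of an idempotent view deploy. `client` is assumed to be a
# google.cloud.bigquery.Client.

def view_ddl(project: str, dataset: str, name: str, select_sql: str) -> str:
    """CREATE OR REPLACE keeps deploys idempotent: safe to re-run after every edit."""
    return f"CREATE OR REPLACE VIEW `{project}.{dataset}.{name}` AS\n{select_sql}"

def deploy_view(client, project: str, dataset: str, name: str, select_sql: str) -> None:
    """Run the DDL and block until BigQuery confirms the view exists."""
    client.query(view_ddl(project, dataset, name, select_sql)).result()
```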

Looker Studio

Team dashboards, connected straight to BigQuery. I set up the semantic layer so non-technical people can explore the data without breaking anything or needing me.

n8n

Webhooks, scheduled syncs, Slack alerts when something looks off. The glue between systems.

Development

Next.js + TypeScript

For any web surface, including this site.

Python

Data scripts, agent orchestration, anything that talks to BigQuery outside of SQL.

Vercel

For deploys. Zero config for Next.js projects.

Working Notes

  1. I built a Claude Code plugin. Here's where the platform breaks.

    Built a plan-progress tracker — hooks that sync TodoWrite state to a JSON file, a statusline script, worktree support. The hook architecture, skill triggers, and file system all worked on first try. That's the good news. The bad news is everything users actually touch. Installation requires understanding marketplace internals. The dev loop has no hot-reload — edit, bump version, manually copy files to cache, restart. Statusline config is undocumented and Desktop doesn't render it at all. And the real gap: the one moment users need progress context — the permission prompt, where you're deciding whether to approve a step — is the one place plugins can't render anything. Strong bones, weak surface area. Most developer platforms ship it the other way around.

  2. Review replaces planning when implementation is free

    I spin up three agents in separate worktrees, each taking a different approach to the same problem. Then I look at all three and pick the one that actually handles our edge cases. I don't need to spend an hour arguing about architecture anymore — I can just look at three working implementations and decide. Planning doesn't go away. It just moves to after the build, where you're comparing real code instead of hypotheticals.

  3. LLMs as progressive coaches

    I set up Claude for a CEO who was making budget decisions on gut feel. System prompt: tell the truth, push back, challenge assumptions. Fed it real attribution data. He started by asking it to confirm what he already believed. It didn't. He argued with it. It held its ground because the data was right there. A month later he walked into a team meeting and agreed with the numbers without a fight. His questions to Claude had changed — he'd gotten better at asking them. The model didn't just mirror his thinking back at him. It changed how he thinks. The system prompt set the tone, the data kept it honest, and repetition did the rest.

  4. AI amplifies the organization — with the right structure

    Same model, two outcomes. My marketing manager queries the warehouse in plain English and makes real budget calls. No SQL, no waiting on me. My CEO stopped overriding the data and started trusting it. Both happened on the same Claude setup. What made it work wasn't the model — it was what's underneath. A semantic layer that makes it hard to write bad queries. A system prompt that pushes back instead of agreeing. Attribution data clean enough that the AI can't bullshit around it. Without that structure, AI just amplifies whoever talks loudest. With it, the whole team gets sharper. Someone has to build the scaffolding first. That's the actual job.