Notes from the 15th dbt Meetup in Amsterdam

By Darko Monzio Compagnoni on 1 Apr, 2026


I left my desk at 17:30, walked into Amsterdam's signature spring hailstorm, and showed up to the 15th dbt Meetup soaking wet.

Worth it.

Three speakers. Three Claude Code users. One idea that kept surfacing in different forms: the boring, repetitive work is already leaving. The question is whether you're ready for what comes next.

Event host Thomas in't Veld presenting the speakers.

Photo by Tasman Analytics on LinkedIn


The demo that stuck with me

Oliver Ramsay from Lightdash showed up with a live demo. His team built an internal tool that provisions demo environments for prospects.

The setup: an agent takes a client brief, generates a schema file, creates synthetic data using Python Faker, writes YML for Lightdash's semantic layer, builds a full dbt project, and deploys it to a Lightdash instance. What used to take a developer a day or more now runs on its own.
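The generation step can be sketched roughly as follows. The real pipeline uses the Python Faker library (and also writes YML and dbt files); this stdlib-only version just fakes a small customers CSV, and every name and column in it is illustrative, not Lightdash's actual schema:

```python
import csv
import io
import random

random.seed(42)  # deterministic output for the sketch

# Toy value pools; Faker would supply richer, locale-aware values here.
FIRST_NAMES = ["Ana", "Luis", "Marta", "Diego", "Sofia"]
COUNTRIES = ["Brazil", "Argentina", "Mexico", "Chile", "Colombia"]

def generate_customers_csv(n_rows: int) -> str:
    """Generate a synthetic customers table as CSV text.

    Stand-in for the agent's data-generation step: in the real tool,
    the schema comes from the client brief and values come from Faker.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["customer_id", "name", "country"])
    for i in range(n_rows):
        writer.writerow([i + 1, random.choice(FIRST_NAMES), random.choice(COUNTRIES)])
    return buf.getvalue()

print(generate_customers_csv(3))
```
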

The detail I couldn't stop thinking about: the agent checks its own CSV output before moving on. In one example, it noticed it had generated a football tournament dataset with a Latin American audience, flagged the demographic anomaly, and corrected it. The agent caught something a rushed human would have missed.
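That self-check step can be pictured as a validation pass over the generated rows before the pipeline continues. The rule below (does the country mix match the brief's target audience?) is purely illustrative of the idea, not Lightdash's actual check:

```python
import csv
import io
from collections import Counter

# Illustrative region set for a "Latin American audience" brief.
LATAM = {"Brazil", "Argentina", "Mexico", "Chile", "Colombia"}

def check_audience(csv_text: str, expected_region: set) -> list:
    """Return warnings if the generated audience doesn't match the brief.

    Stand-in for the agent inspecting its own CSV output and flagging a
    demographic anomaly before moving to the next step.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    counts = Counter(row["country"] for row in reader)
    total = sum(counts.values())
    in_region = sum(c for country, c in counts.items() if country in expected_region)
    warnings = []
    if total and in_region / total < 0.8:  # arbitrary threshold for the sketch
        warnings.append(
            f"only {in_region}/{total} rows match the target region; regenerate"
        )
    return warnings

sample = "customer_id,country\n1,Brazil\n2,Germany\n3,France\n"
print(check_audience(sample, LATAM))
```
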

That's not automation replacing judgment. That's automation creating space for better judgment.


Right now, AI agents are the worst they'll ever be

The best practices aren't set yet. The tooling is immature. That's uncomfortable if you want a playbook to follow. It's an opportunity if you're willing to figure things out as you go.

One concrete example: persona prompts. You've probably seen advice to open your system prompt with something like "You are a senior data engineer with 10 years of experience." This pattern doesn't reliably improve output and may increase hallucinations. I've been doing this in my own AI setups. Time to test alternatives.


Markdown is the new import statement

The most technically interesting thing from the evening came from Padraic's walkthrough of Claude Code's configuration files.

In Claude Code, a CLAUDE.md file acts as the agent's instruction set: context, constraints, preferred patterns, things it should never do. You can also write separate .md files for specific skills or commands and import them into other markdown files, the same way you'd import a Python function or a library.
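Concretely, CLAUDE.md imports other files with an @path syntax. A minimal sketch (the file names and contents here are illustrative, not from the talk):

```markdown
# CLAUDE.md

## Project context
This is a dbt project. Never modify files under models/legacy/.

## Shared behaviours
@docs/skills/sql-style.md
@docs/skills/testing-conventions.md
```

Each imported file is pulled into the agent's context, so a skill written once can be reused across projects the way a shared library would be.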

The analogy is exact. A skill file defines a reusable capability. A command file defines a specific, declarative task with arguments. A skill can call a command. You compose them.

What this means in practice: you can build a library of agent behaviours, version-controlled, composable, testable. Padraic's team uses evals to test whether their skills do what they're supposed to, the same way you'd write unit tests for code. dbt Labs has published their own set of open-source agent skills at github.com/dbt-labs/dbt-agent-skills.
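An eval for a skill can look a lot like a unit test: fixed input in, assertion on the output. This is a hypothetical harness, not dbt Labs' or Padraic's actual tooling; run_skill is a stub standing in for whatever invokes the agent:

```python
def run_skill(skill: str, brief: str) -> str:
    """Stand-in for invoking an agent skill and returning its output.

    A real eval harness would call the model here; this stub returns a
    fixed string so the sketch runs offline.
    """
    return "schema: customers(customer_id int, country text)"

def test_schema_skill_names_requested_table():
    # The eval: given a brief mentioning customers, the generated
    # schema should contain a customers table.
    out = run_skill("generate-schema", "e-commerce demo with customers")
    assert "customers" in out

test_schema_skill_names_requested_table()
print("eval passed")
```

The point is less the assertion itself than the habit: skills get regression tests, so a prompt change that silently breaks a behaviour fails loudly.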

He also mentioned a miscellaneous file for logging things the agent did that you didn't want it to do. His framing: every health and safety guideline is written in someone's blood. The same will be true for agent guardrails.


On the job security question

Someone in the room asked it directly: are analytics engineers going to have jobs?

The answer from the panel was consistent. AEs sit at the intersection of technical and business context. That combination is hard to automate because the business side requires judgment that changes with every company, every stakeholder, every quarter. What changes is the day-to-day. The repetitive, rule-based work moves to agents. The work that remains requires more context, more communication, more decisions.

That's a reasonable deal if you're paying attention. Less tedious work, more meaningful work. The risk is for people who stay in the tedious work and assume it'll always be there.


What I'm taking back

Three concrete things I'm changing after Tuesday:

First, I'm testing my persona prompts. If they're increasing hallucinations rather than improving output, I want to know.

Second, I'm spending time with Claude Code's skill and command structure. The composability they showed is exactly how I want to work: reusable behaviours, tested, not rebuilt from scratch every session.

Third, I'm paying attention to what the agents catch that I would have missed. Oliver's demo was a good reminder that the value isn't just speed. Sometimes it's the thing you would have shipped without noticing.

The meetup community is genuinely useful for this. You don't see how other people actually use tools until you're in a room with them. Reading documentation is not the same thing.

Group photo at the end. I am in between the S and W in "Ticketswap"

Photo by David Bienvenue