1-Day Hands-On Workshop · Remote or On-site
AI agents are easy to demo, but much harder to make reliable in real-world applications. In this hands-on workshop, Davy walks you through the core building blocks behind modern agent systems — tools, loops, context, human-in-the-loop — and how to design them to be more robust, predictable, and production-minded.
We connect theory to practice: how agents use tools, how loops behave in the wild, how to manage context and memory, when to add human approvals, and how to think about evaluation and system design as you move beyond simple LLM calls.
This workshop is for software developers, technical leads, and builders who want to understand how modern AI agents work and how to design them more reliably. It is especially relevant if you are already experimenting with LLMs and want to move toward more advanced, agentic workflows.

Founder & Software Mentor @ Hackages
Davy helps teams ship software with confidence — including AI-assisted development and agentic systems. This workshop distills practical patterns for reliability, evaluation, and design that go beyond polished demos.
How models are invoked in agent workflows, the constraints involved, and the failure modes you need to design for.
Patterns for calling external APIs, validating outputs, and keeping agents bounded and observable.
Practical approaches to measuring behavior, regressions, and when to add human oversight.
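To make these patterns concrete, here is a minimal sketch of a bounded tool-calling loop with output validation and an observable history. All names (`run_agent`, `MAX_STEPS`, the stubbed `fake_model`) are illustrative assumptions, not workshop materials; a real implementation would swap the stub for an LLM client.

```python
# Hypothetical sketch: a bounded agent loop that validates tool calls
# before executing them and records every step for observability.

MAX_STEPS = 5  # hard bound: the loop can never run away

TOOLS = {
    "add": lambda a, b: a + b,
}

def fake_model(history):
    """Stand-in for an LLM: requests one tool call, then finishes."""
    if not any(msg["role"] == "tool" for msg in history):
        return {"type": "tool_call", "name": "add", "args": {"a": 2, "b": 3}}
    return {"type": "final", "text": f"The result is {history[-1]['content']}"}

def run_agent(model, question):
    history = [{"role": "user", "content": question}]
    for _ in range(MAX_STEPS):
        action = model(history)
        if action["type"] == "final":
            return action["text"]
        tool = TOOLS.get(action["name"])
        if tool is None:  # validate: never execute an unknown tool
            history.append({"role": "tool",
                            "content": f"unknown tool {action['name']}"})
            continue
        result = tool(**action["args"])
        history.append({"role": "tool", "content": result})  # observable trace
    return "stopped: step budget exhausted"

print(run_agent(fake_model, "What is 2 + 3?"))  # → The result is 5
```

The two guardrails shown here, a fixed step budget and validation before execution, are the kind of small design choices that separate a demo from something predictable in production.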
The cost to train your team is a fraction of the cost to replace them.
* excl VAT
* Up to 15 participants · pricing may vary by format and location
Chances are you still have a couple of questions at this stage. Here are the ones we hear most often.
No. The workshop assumes solid general engineering skills and some API familiarity. We build intuition from first principles and focus on patterns that apply across stacks.