PMs, DPOs, data engineers, and analysts run the full AI-augmented data lifecycle together — on a real dataset, in your stack. No slides. Your team ships an ADO backlog, a medallion dbt model, a Streamlit data agent, and a live ticket-to-PR loop by 5pm.
AI changed what a competent data team can produce in a week, and what a leadership team can ask of its data without waiting two weeks for an answer. Most organizations are still operating at pre-AI speed. The workshop closes that gap in one day, with the AI governance scaffolding to keep it defensible as you scale.
Read the spec. Find the right sources. Write the model, the tests, the docs. Open the PR. Handle review. Merge. Deploy. In a real week, that loop takes four to five days. With AI agents wired into the workflow — conventions, tests, warehouse access — the same loop closes in half a day.
Most analytics teams spend their hours maintaining dashboards and answering ad-hoc data requests. With AI agents handling the plumbing, your team has the bandwidth to engage with business owners, hear that complaints from key customers are spiking, and ship a chatbot that surfaces those complaints the same morning.
We run Data AI in a Day inside a self-contained environment we ship as docker-compose — Claude Code, MCP servers, dbt, DuckDB, Azure DevOps, and Streamlit pre-wired and pre-loaded. Your team starts working at 09:00; the half-day of install work is already behind them.
The day mirrors the AI-augmented data lifecycle end-to-end: requirement, engineering, analytics, agentic loop. PMs, DPOs, data engineers, and analysts each lead their module — the rest review and learn how the work hands off.
Modern data stack — warehouse, dbt, BI, Python orchestration, Jira-equivalent, Git — pre-configured with realistic data, agents, and integrations. NordicMart Q2 RCA dataset loaded by default.
Your team carries one realistic case through every step: stakeholder transcript, backlog, medallion model, talking dashboard, agentic PR. Each step shows where AI agents accelerate the work and where human judgment stays load-bearing.
Markdown knowledge base, prompt library, ticket-writer skill, one-page runbook, Plan/Code/Ask reference card, and a measured before-and-after — the artefacts you need to bring the pattern back into your own stack.
Each module is one stage of the AI-augmented data lifecycle. Each module feeds the next. The full team is in the room all day; the lead chair rotates to match the work — and every module ships concrete artefacts.
Coffee 11:30–11:45 · Lunch 13:45–14:30 · Hour-by-hour schedule available in the workshop brief PDF.
For a project-scale knowledge base — 10–20 cross-linked markdown files capturing decisions, requirements, findings — we don’t use vector indexes. We use Claude Code reading the files it needs, when it needs them, with full fidelity. This isn’t dogma. It’s leverage.
RAG has its place — at company-wide corpus scale. For the workshop, and for most data team projects after, the markdown approach wins on speed, transparency, and audit trail.
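To make that concrete, here is a minimal sketch of the pattern in Python. The knowledge/ folder and file names are hypothetical, not the workshop's actual layout; the point is that the agent (or any script) reads whole files on demand, so nothing is lost to chunking.

```python
from pathlib import Path

# Hypothetical project knowledge base: 10-20 cross-linked markdown files.
KB = Path("knowledge")

def load_notes(*names: str) -> str:
    """Read the named markdown files in full: no embeddings, no chunking,
    so every decision and requirement arrives with full fidelity."""
    return "\n\n---\n\n".join(
        (KB / f"{name}.md").read_text(encoding="utf-8") for name in names
    )

# Working on the silver model? Pull exactly the context that matters.
context = load_notes("decisions", "requirements", "findings")
print(f"{len(context):,} characters of context, every source auditable")
```

The audit trail comes for free: the files the agent read are the files in the repo, diffable and reviewable like any other artefact.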
A modern, opinionated stack pre-loaded with realistic data. The team works with the same tools we use at our clients — no “toy” environments, no shortcuts.
CLI agent that operates on your filesystem, runs commands, and lives inside the repo. The driver for the day.
Filesystem, DuckDB, and Azure DevOps — three MCP servers wired to Claude Code, your portable interface to the stack.
Embedded analytical engine. The medallion runs locally on the participant’s laptop — no cloud bill, no security review.
The transformation layer. Bronze, silver, gold with tests and docs generated as part of the build, not after.
The backlog target. Epics, stories, and the live PR loop at the end of the day. Jira swap on request.
For the production-shape DAG patterns. Reviewed in context, not built from scratch in one day.
The “talking dashboard” front-end. PII guardrails, visible SQL, save-as-finding pattern baked in; a minimal sketch follows this stack list.
The whole stack as docker-compose. One command, make up, brings it live; one more takes it off the laptop. Reproducible and self-contained.
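As referenced in the dashboard card above, here is a minimal sketch of the talking-dashboard pattern running on the embedded warehouse. The table, column, and file names are illustrative assumptions, not the workshop's actual code.

```python
import duckdb
import streamlit as st

st.title("NordicMart Q2 RCA (sketch)")

# Embedded warehouse: runs on the laptop, no cloud bill, no security review.
con = duckdb.connect("warehouse.duckdb", read_only=True)

# Hypothetical PII guardrail: the dashboard never selects raw identifiers.
BLOCKED = ("email", "phone", "customer_name")

sql = """
    select complaint_topic, count(*) as n
    from gold_complaints          -- hypothetical gold-layer table
    group by 1
    order by n desc
"""
assert not any(col in sql for col in BLOCKED), "PII column in query"

st.code(sql, language="sql")     # the SQL stays visible on screen...
st.dataframe(con.sql(sql).df())  # ...right next to the numbers it produced

# Save-as-finding: one click appends the query to the markdown knowledge base
# (assumes the knowledge/ folder from the pattern sketched earlier).
if st.button("Save as finding"):
    with open("knowledge/findings.md", "a", encoding="utf-8") as f:
        f.write(f"\n## Complaint topics, Q2\n\n{sql}\n")
    st.success("Saved to findings.md")
```

Every number on screen traces back to a visible query, which is what makes the “talking” part defensible in front of a DPO.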
We don’t ask anyone to take “AI makes teams faster” on faith. Run the numbers. If they don’t come in above 4× ROI on your business, we tell you not to buy the workshop.
Industry data shows roughly 40% of a data team’s time goes to model authoring, tests, and PR drafting. We see ~60% reduction on that work after the pattern is wired in. At a loaded cost of €180,000 per engineer per year, an 8-person team recovers around €345,000 in annualized capacity. The workshop pays back on the second sprint.
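For transparency, the arithmetic behind that figure, ready to rerun with your own numbers:

```python
team_size       = 8
loaded_cost_eur = 180_000   # per engineer, per year
authoring_share = 0.40      # time spent on models, tests, PR drafting
reduction       = 0.60      # observed cut on that share of the work

recovered = team_size * loaded_cost_eur * authoring_share * reduction
print(f"Annualized capacity recovered: EUR {recovered:,.0f}")
# -> Annualized capacity recovered: EUR 345,600
```

Swap in your own team size, loaded cost, and time-allocation estimate; if the result clears the 4x bar, the workshop pencils out.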
There’s a moment every leader knows. You ask a simple question — “what are our margins this quarter?” — and two weeks later three departments send three different answers.
I’ve spent 15 years inside complex organizations across Europe and beyond, and watched the same loop repeat: data scattered, numbers inconsistent, AI pilots that never reach production. So I co-founded Astral Forest with one mission — make data work for the people running the business, not the other way around.
We don’t run open enrolments. Every workshop is private, purchased by one company, scoped to your stack and your data before any commitment.
Up to 12 participants from one team. Fixed fee, fixed scope, fixed date.
We walk your stack, talk through the work your team actually does, and agree on the case mirror to load into the workshop environment.
Fixed fee, fixed scope, fixed date. Standard MSA available; we also sign yours.
We tune the workshop environment to your case, calibrate the agents, and send your team a 15-minute pre-read.
Five modules. Pattern running by 5pm. Runbook in your team’s hands.
The workshop is a contained engagement. About 60% of teams continue with us into one of two delivery shapes; 40% run the pattern themselves. We’re explicit about all three paths because pretending otherwise insults your procurement team.
Your team takes the runbook and the prompt library and runs the pattern internally. We stay available over Teams for 30 days. About 40% of clients choose this.
We embed with your team for 90 days, migrate your top-priority data domain onto agent-augmented workflows, and hand you a measured before-and-after.
An ongoing capacity pack: a named Astral Forest engineer runs your agent-augmented pipeline work alongside your team. Scales up and down with your backlog.
All three paths start with the same scoping call.
Because the behavior change happens in one day and the rest is practice. We’ve run longer formats and seen the same outcome. If your team needs more structured support afterwards, we sell that as a 90-day implementation sprint.
No. Above 12, each participant stops getting hands-on time with the agent on a real build, and the mechanism breaks down. If you need to train more than 12, we run the workshop twice.
On-site only — at your office. Co-locating the leadership and data team in one room is part of how the day works. Travel and one prep day are billed at cost outside the EU and UK.
English, Polish, or French. Michał delivers in all three.
No. For the workshop day, Astral Forest provides every license you need — Claude Code, MCP servers, dbt, the DuckDB warehouse, ADO / Jira sandbox, and Streamlit. Your team focuses on the work, not on procurement. After the workshop, we hand you a sourcing one-pager for replicating the stack at home.
Yes. Default stack is Claude Code + Azure DevOps + DuckDB + dbt + Streamlit. Common adaptations: Jira instead of ADO, GitHub Copilot alongside Claude Code, Snowflake / BigQuery instead of DuckDB, Power BI / Looker for the final dashboard. We confirm tool swaps in the scoping call.
Yes — we encourage it. Bring an anonymized or synthetic mirror of your real data and we shape the modules around it. The transcript, the backlog, the dbt models, the talking dashboard — all built on your case. Sharper learning, faster transfer back home. We default to NordicMart Q2 RCA when you’d rather not share data; that works too.
No. The workshop runs inside an environment maintained by Astral Forest, loaded with synthetic or fully anonymized data. Your production code and data stay outside the engagement. The artifacts your team takes home — agent templates, prompts, runbook — apply back into your stack on your terms, with your security and platform teams in the loop.
You do. The configured agent templates, prompts, conventions documentation, and runbook are yours, transferred under the engagement.
No. Foundation models from Anthropic and OpenAI both offer enterprise modes where inputs and outputs aren’t used for training. We configure agents to use those modes by default.
Yes — sent on request, signed under NDA. Covers data handling, access model, agent scoping, and our subprocessor list.
That’s what the runbook and the 30-day Teams channel are for. If the pattern hasn’t stuck after 30 days, we’ll say so — and we’ll usually recommend the 90-day implementation sprint rather than another workshop.
About 60% of clients continue into a 90-day sprint or an analytics agent subscription. The other 40% run the pattern themselves. We’re upfront about both, and we walk away clean if neither is the right fit.
We write the success criteria into the SOW: by end of day, the team will have shipped at least two agent-drafted artifacts through the live review-and-merge loop. If we don’t hit that, we deliver a second session at no additional cost.
Yes. We set one up after the scoping call, matched to your industry and stack.
20-minute call. We walk your stack, agree on the case mirror to load into the workshop environment, and send a fixed-fee proposal within 48 hours.