Delivery Patterns, by DataMartIn
Most of what is written about AI in data work falls into two buckets. The first says AI will replace developers and analysts within two years. The second says AI is glorified autocomplete and not worth the subscription. Both are wrong, and the gap between them is where the actual answer lives.
We have been using AI coding assistants on our Microsoft Fabric work for some time now. The specific tool I have spent the most time with is Claude Code running inside VS Code, but the same patterns hold across the broader category of AI coding assistants. The conclusion I have come to after months of daily use is straightforward, and worth saying clearly because the industry has not said it well enough:
AI is not a people replacement. It is a power multiplier. And like any multiplier, it cuts both ways.
If you know what you are doing, AI gives you superpowers. You move faster, think more clearly, build cleaner solutions, and have technical conversations with the AI that genuinely sharpen your judgment. If you do not know what you are doing, AI lets you build a giant mess at the same accelerated pace. The same tool that lets a senior engineer ship a clean Fabric platform in three weeks instead of six lets an inexperienced one ship a fragile platform in three weeks that would have taken six to fail on its own.
This is the part the buyer conversation is missing. The right question for a leader evaluating AI tooling for a data team is not “will this make us more productive.” It is “will this make us more productive in the directions we want to go, or will it accelerate us toward problems we cannot see yet.”
What AI is genuinely good at in Fabric work
There are workflows where AI has changed how I work in ways I would not give up. Six of them, broadly applicable across the Fabric stack.
Drafting KQL queries against unfamiliar schemas. Workspace Monitoring’s Eventhouse schema, capacity metrics tables, the various system tables that ship with Fabric. AI can produce a working first draft from a natural language description faster than I can write it from scratch. The query is almost never the final version, but it gets me to “edit the structure” rather than “stare at the documentation,” which is a meaningful productivity shift.
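To make that loop concrete, here is roughly what running an AI-drafted query looks like from a notebook or script. This is a minimal sketch using the azure-kusto-data Python client; the cluster URI, database name, and the table and column names inside the draft query are placeholders, not the actual Workspace Monitoring schema.

```python
# A minimal sketch of running an AI-drafted KQL query from Python.
# The cluster URI, database, and the table/column names below are
# placeholders, not the real Workspace Monitoring schema.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder
from azure.kusto.data.helpers import dataframe_from_result_table

CLUSTER = "https://<your-eventhouse>.kusto.fabric.microsoft.com"  # placeholder
DATABASE = "Monitoring"                                           # placeholder

# The kind of first draft AI produces from "show me the slowest
# queries per workspace over the last day" -- structurally right,
# but the table and column names still need checking against the
# actual schema before the numbers are trusted.
DRAFT_KQL = """
QueryLogs
| where Timestamp > ago(1d)
| summarize p95_ms = percentile(DurationMs, 95), runs = count()
    by WorkspaceName
| order by p95_ms desc
"""

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(CLUSTER)
client = KustoClient(kcsb)
result = client.execute(DATABASE, DRAFT_KQL)
df = dataframe_from_result_table(result.primary_results[0])
print(df.head(10))
```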
Explaining and refactoring DAX measures. A complex measure inherited from a previous engagement, or one a junior team member wrote that needs cleanup. AI reads DAX well, can explain what a measure is actually doing in plain language, and can propose refactors that are usually closer to right than wrong. This used to be the slowest part of a Power BI handover audit. It is not anymore.
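A sketch of what that audit loop can look like inside a Fabric notebook, assuming the semantic-link (sempy) package available in Fabric; the dataset name is hypothetical, and the exact column names returned by list_measures are worth verifying against your sempy version.

```python
# A minimal sketch of the handover-audit loop in a Fabric notebook,
# assuming the semantic-link (sempy) package. "Sales Model" is a
# hypothetical dataset name, and the column names returned by
# list_measures should be verified against your sempy version.
import sempy.fabric as fabric

measures = fabric.list_measures("Sales Model")  # hypothetical model name

# Dump each measure's DAX so it can be reviewed, or handed to an AI
# assistant with a prompt like "explain what this measure does".
for _, row in measures.iterrows():
    print(f"-- {row['Table Name']}[{row['Measure Name']}]")
    print(row["Measure Expression"])
    print()
```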
Translating between languages and platforms. Spark SQL to T-SQL, T-SQL to KQL, PySpark to Pandas. The translations are not always production-ready, but they get you ninety percent of the way there. For a consultancy working across multiple Microsoft data platforms in a single engagement, this compresses what used to be hours of context-switching into minutes.
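The missing ten percent is almost always semantics rather than syntax. One illustrative example: a line-for-line PySpark-to-pandas translation of a grouped sum is subtly wrong on null keys, because pandas drops NaN groups by default while Spark keeps them.

```python
# Illustrative only: a line-for-line PySpark -> pandas translation
# that is ninety percent right and subtly wrong on null group keys.
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "region": ["North", "South", np.nan, "North"],
    "amount": [100, 200, 50, 25],
})

# Naive translation of:  df.groupBy("region").sum("amount")
# Spark keeps the null group; pandas silently drops it here.
wrong = df.groupby("region")["amount"].sum()  # 2 groups, the 50 is lost

# The manual ten percent: keep null keys explicitly.
right = df.groupby("region", dropna=False)["amount"].sum()  # 3 groups

print(wrong)
print(right)
```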
Enforcing naming conventions, deterministically. I will admit to having an emotional reaction to this one. Twenty years of inheriting projects with Sheet1, Customers_v2_FINAL, tbl_CustomerDataNew, dim_Customer_temp_DELETE_ME, and factOrdres (misspelled in production for nine years) has left a mark. AI is the first tool that makes consistent naming actually achievable at scale. Define the conventions once, let the AI apply them across tables, columns, measures, semantic models, pipelines, and notebooks. Misspellings disappear. Inconsistencies between developers disappear. Drift between projects disappears. It is one of the most underrated quality-of-life shifts in modern data work, and the platform-level benefits compound for years.
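Part of what makes this work is that the conventions live in a machine-checkable form rather than on a wiki page. A minimal sketch of what "define the conventions once" can mean, with the rules themselves as illustrative assumptions:

```python
# A minimal naming-convention checker. The rules are illustrative
# assumptions; the point is that conventions live in code, not in a
# wiki page, so both humans and AI assistants can be held to them.
import re

RULES = {
    "dimension table": re.compile(r"^dim_[a-z][a-z0-9_]*$"),
    "fact table":      re.compile(r"^fact_[a-z][a-z0-9_]*$"),
    "column":          re.compile(r"^[a-z][a-z0-9_]*$"),
}

def check(kind: str, name: str) -> bool:
    """Return True if the name matches the convention for its kind."""
    return bool(RULES[kind].fullmatch(name))

# The inherited-project hall of shame from above, run through the rules.
for kind, name in [
    ("dimension table", "dim_customer"),                  # ok
    ("dimension table", "dim_Customer_temp_DELETE_ME"),   # violation
    ("fact table",      "factOrdres"),                    # violation
    ("column",          "Customers_v2_FINAL"),            # violation
]:
    status = "ok" if check(kind, name) else "VIOLATION"
    print(f"{status:9}  {kind}: {name}")
```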
Writing documentation while you write code. This is the workflow most teams underrate. I describe what I just built, the AI drafts the documentation, I correct it. The documentation is written at the same time as the code instead of three weeks later when nobody can remember the reasoning. Senior practitioners have always known this is how documentation should work. AI is the first tool that makes it not painful.
Pair-thinking on architecture. This one is the most subtle and the most valuable. I do not treat AI’s first answer as truth. I treat it as an idea, and then I argue with it. When I think it is wrong, I tell it. When I think the approach it proposed is horrible, I say so, and ask it to defend the choice or propose something better. The conversations that come out of this are genuinely different from working alone. The AI surfaces tradeoffs I had not considered, and being willing to push back on it sharpens my own thinking in the process. It is not as good as a sharp colleague, but it is available at 2am and never tired. For solo practitioners and small teams, this changes the quality of the thinking, not just the speed of the typing.
The common thread across all six is that the value compounds with the quality of the human input. The better the question, the better the output. The clearer the design intent in the prompt, the closer the result is to production-ready. AI rewards practitioners who have already built the judgment to direct it.
Where it goes wrong, and why it goes wrong fast
The double edge cuts the other way in three patterns we keep seeing.
The first is the plausible-but-wrong failure. AI produces code, queries, or configurations that look reasonable, run cleanly, and are subtly incorrect in ways that only show up later. A KQL query that returns numbers that look right but quietly excludes a relevant time window. A semantic model relationship that produces correct totals but breaks at the line-item grain. A pipeline that succeeds but processes the wrong file. A senior practitioner spots these quickly because they have the pattern recognition. A junior practitioner ships them, and they enter production, and three weeks later somebody is trying to figure out why the executive dashboard disagrees with finance.
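The time-window case is worth making concrete, because it shows how output can run cleanly, return plausible numbers, and still be wrong. A small pandas illustration with invented data:

```python
# Invented data, real failure mode: a filter that runs cleanly,
# returns plausible numbers, and silently drops most of the last day.
import pandas as pd

events = pd.DataFrame({
    "ts": pd.to_datetime([
        "2024-06-29 09:00", "2024-06-30 00:00",
        "2024-06-30 10:00", "2024-06-30 23:30",
    ]),
    "amount": [100, 100, 100, 100],
})

# Looks like "everything through June 30". But the string coerces to
# midnight, so every event on June 30 after 00:00 is excluded.
june = events[events["ts"] <= "2024-06-30"]
print(june["amount"].sum())        # 200 -- plausible, wrong

# What was meant: an exclusive upper bound on the next day.
june_fixed = events[events["ts"] < "2024-07-01"]
print(june_fixed["amount"].sum())  # 400
```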
This is not an AI problem. It is a verification problem. AI is faster than the human verification loops most teams have in place, which means the rate at which plausible-but-wrong output enters codebases is now higher than the rate at which it gets caught. The fix is not better AI, it is stronger review, and the irony is that AI makes review more necessary, not less.
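Stronger review does not have to mean heavyweight process. A sketch of the kind of cheap reconciliation check that would catch the window bug above before it ships; the one-percent tolerance is an illustrative assumption:

```python
# A cheap verification loop: reconcile a derived total against an
# independently computed source total before anything ships. The
# one-percent tolerance is an illustrative assumption.
def reconcile(source_total: float, derived_total: float,
              tolerance: float = 0.01) -> None:
    """Raise if the derived figure drifts from the source of truth."""
    drift = abs(source_total - derived_total) / abs(source_total)
    if drift > tolerance:
        raise ValueError(
            f"derived total {derived_total} drifts {drift:.1%} "
            f"from source total {source_total}"
        )

# The window bug above fails loudly here, not three weeks later.
reconcile(source_total=400, derived_total=200)  # raises ValueError
```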
The second is the silent technical debt accumulation. AI is willing to give you any code you ask for. A junior practitioner who has been told to “make it work” will iterate with the AI until something works, regardless of whether the path was clean. Six months of this produces a Fabric workspace where every artifact runs but nothing fits together. Naming conventions drift. Patterns multiply. The same problem is solved three different ways in three different notebooks because each session of AI assistance was scoped to one task. The team thinks they are productive because things ship. The platform thinks otherwise, and shows it eighteen months later when a small change cascades into a week of fixes.
The third is the lost reasoning trail. When a developer writes code from first principles, they remember why each decision was made. When the AI writes most of the code, the developer remembers what was asked for and approximately what came back. The “why” lives in the conversation, not in the code, and the conversation is rarely saved. Six months later, when somebody needs to modify the logic, there is no institutional knowledge to reach for. This is solvable with discipline (committing the reasoning to comments, ADRs, or commit messages) but the discipline is rarely there.
The senior practitioner’s job gets more important, not less
The naive AI productivity argument runs: AI lets one person do the work of two, so we need half the headcount. The actual math is closer to: AI lets a senior practitioner do the work of three, lets a mid-level practitioner do the work of one and a half, and lets a junior practitioner accumulate technical debt approximately fifty percent faster than before.
The implication for a data leader is that the composition of the team matters more, not the size. A team of mostly seniors with AI is dramatically more capable than the same team was two years ago. A team of mostly juniors with AI is dramatically more dangerous than the same team was two years ago, because the failure mode is now hidden inside artifacts that look correct.
This is not a popular conclusion in either direction. It is uncomfortable for senior practitioners who feel like AI is encroaching on their craft. It is uncomfortable for leaders who hoped AI would solve their talent acquisition problems. It is uncomfortable for junior practitioners who hoped AI would compress their path to seniority. But it is what we are seeing in practice, across engagements, repeatedly.
The senior practitioner’s role is shifting from “writes the code” to “directs and verifies the code.” Both of those have always been part of senior work. The ratio is changing. The judgment is what AI cannot replicate, and the judgment is exactly what becomes more valuable as the rate of code production accelerates.
How we use AI on our engagements
A few principles we have settled on through experience.
We review for results and architecture, not line by line. I do not read every line of AI-generated code with a microscope. That would defeat the productivity gain. What I do verify is whether the result is correct and whether the architecture is sound. If the output produces the right answer through a sensible design, the implementation details are not what I am spending my attention on. This works precisely because I have enough experience to recognize when a result or an architecture is off, even at a glance. It would not work for someone without that pattern recognition.
We treat AI output as an idea, not the truth. The first response is rarely the final answer. We argue with it, push back on it, ask it to defend or reconsider, and iterate until the design is genuinely good. The willingness to disagree with the AI is what separates productive use from rubber-stamp use.
We commit the reasoning, not just the result. When AI helps shape a non-obvious design decision, we write down why, in comments, in commit messages, or in architecture decision records. The “why” is the part that is hardest to recover later.
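This can be as light as a comment block at the top of the artifact. A sketch, with the decision itself invented for illustration:

```python
# DECISION: incremental load keyed on ModifiedDate, not a full reload.
# WHY: the source table is large; a full reload blew the capacity
#      window in testing. (Invented example for illustration.)
# REJECTED: full reload (too slow); CDC (source does not expose it).
# CONTEXT: design shaped with AI assistance; the tradeoffs above are
#      the part of that conversation worth keeping in the code.
def load_incremental(last_watermark):
    ...
```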
We use AI more for analysis than for generation in unfamiliar areas. If we are working in territory we know well, generation is fine because we can verify quickly. If we are working in territory we know less well, we lean on AI to explain, summarize, and ask questions of the documentation rather than to produce code we cannot fully evaluate.
We expect the team to be able to maintain the work without AI. If a piece of code is so AI-dependent that nobody on the team can debug or modify it without re-engaging the AI, that is a signal something has gone wrong. AI should accelerate work the team understands, not produce work the team cannot own.
In practice
AI in Fabric development is real, and it is changing the daily work meaningfully. We use it every day. We are more productive with it than without it. None of that is in dispute.
What is worth being honest about is that the technology is amplifying whatever is already there. If a team is disciplined, AI makes the discipline pay off faster. If a team is sloppy, AI makes the sloppiness compound faster. The productivity numbers in either case look similar in the short term. The platforms diverge over the eighteen months that follow.
For practitioners, the takeaway is that the investment in fundamentals is more valuable now, not less. Understanding KQL, DAX, dimensional modeling, semantic model design, capacity behavior: these are the judgment muscles AI does not replace, and the judgment is what makes AI safe to wield.
For leaders, the takeaway is that team composition and review discipline are now the binding constraints on how much value AI can deliver. A small team of strong practitioners with AI will outperform a large team of average practitioners with AI, by margins that surprise people.
We are entering a period where the gap between high-judgment teams and low-judgment teams widens, fast. AI is the accelerant. The direction is whatever was already there.
Want to talk about how to deploy AI in your Fabric work without accelerating in the wrong direction? Book a discovery call.
-Martin Rojze