Will AI Replace Us? The SQLBits Questions We Didn’t Get To

Companion piece to the “Will AI Replace Us?” SQLBits 2026 panel: my answers, in full, to the audience questions we ran out of time for.
When I chaired the “Will AI Replace Us?” panel at SQLBits Newport on Friday, the audience submitted twenty-four questions through the QR code on screen. We answered a handful on the day, but the clock beat us and the rest went unanswered.
That bothered me. The questions were good — sharper, in places, than the set questions I’d prepared for the panel — and the people who took the trouble to type them in deserve answers. So this post is exactly that: every audience question, grouped by theme, answered in my voice.
My panellists — Andy Cutler, Eugene Meidinger, Deepthi Goguri, Tori Tompkins and Johnny Winter — may well disagree with some of what follows, and they’re welcome to say so. These are my answers, not theirs.
1. The junior pipeline problem
If AI is absorbing the entry-level work, where do the medium and senior experts of the next decade come from — and whose responsibility is it to grow them?
This is the most important question on the list, and I think the industry is currently getting it badly wrong.
The dynamic I’m watching play out across professional services is straightforward and short-sighted. Firms are using AI to absorb the work juniors used to do. That’s a rational decision at the level of any individual quarter — the work gets done, the cost line shrinks, the partner’s margin holds. It is a catastrophic decision at the level of the next decade.
Senior expertise isn’t born. It’s grown. And it’s grown by doing the work that AI now does in thirty seconds. The first-cut analysis. The meeting notes that force you to actually understand what was said. The bid response that teaches you how to read a client’s real requirement underneath the stated one. The unglamorous middle of consulting work was never just throughput. It was the apprenticeship that turned a graduate into someone who could be sent into a client meeting alone five years later.
If you stop hiring juniors, or you hire them and put them on “oversee the AI” duty, you’ve hollowed out the apprenticeship. The senior people of 2032 don’t just appear because the market needs them. They get built one engagement at a time, and that build process is what AI is now eating.
Whose responsibility is it? This is where I’d push hard against the comforting answer. It is not a market problem the market will solve. The market will quite happily produce a hollowed-out talent pyramid for a decade, and only correct when senior salaries spiral so far that someone has to do something about it. By then the damage is generational.
This is an industry responsibility problem. Firms must invest in training pathways that look different from the ones that built today’s seniors. That probably means: juniors do more of the judgement work earlier, with AI doing the production work; structured exposure to client conversations from week one rather than year three; deliberate, named mentorship rather than learn-by-osmosis; and yes, slightly worse short-term margins so the long-term capability survives.
The firms that do this will own the senior talent of 2032. The firms that don’t will be paying enormous money to hire from the firms that did, or they’ll discover that the senior pipeline they assumed would refill itself simply hasn’t. I think this is the single biggest commercial risk facing professional services right now, and almost nobody is treating it with the urgency it deserves.
2. Hallucinations, non-determinism and trust
In business contexts where output needs to be repeatable — documentation, code, analysis — how do you handle the fact that AI output isn’t deterministic, and what do you actually do to prevent hallucinations?
The short answer: you don’t make the model deterministic. You make the process around it deterministic.
People who haven’t worked with these tools at scale tend to assume the engineering challenge is at the model layer — better models, lower temperature, cleverer prompts. It isn’t. The engineering challenge is in the workflow that wraps the model. The model is the messy bit; the system around it is what makes the output reliable.
In practical terms, here is what I push clients toward when the use case demands repeatable output:
First, a prompt library with versioning. Every prompt that goes into production is named, version-controlled, and changeable only through a defined process — not by whoever happens to be at the keyboard that morning. If you can’t tell me which version of which prompt produced last Tuesday’s output, you can’t debug a quality problem when one shows up.
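In code terms, the smallest version of that idea looks something like the sketch below. The structure and names are mine, purely illustrative; in practice this lives in source control next to everything else:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PromptVersion:
    name: str         # e.g. "meeting-notes-summary"
    version: str      # bumped only through the defined change process
    template: str     # the actual prompt text
    approved_by: str  # who signed off this version


# Single source of truth: production code looks prompts up here,
# never inlines them at the keyboard.
REGISTRY: dict[tuple[str, str], PromptVersion] = {}


def register(prompt: PromptVersion) -> None:
    key = (prompt.name, prompt.version)
    if key in REGISTRY:
        raise ValueError(f"{prompt.name} v{prompt.version} exists; bump the version")
    REGISTRY[key] = prompt


def get_prompt(name: str, version: str) -> PromptVersion:
    # Callers name the exact version they depend on, so last Tuesday's
    # output can always be traced to the prompt that produced it.
    return REGISTRY[(name, version)]
```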
Second, an evaluation rubric. “Good” output has to be defined in advance, against criteria that can be scored. I tend to use five dimensions — accuracy, relevance, format, tone, actionability — with a 1-to-5 score on each. A prompt is only signed off when it scores 4 or above on all five across ten consecutive runs. Not one lucky run. Ten.
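The sign-off rule is mechanical enough to write down. Here is a minimal sketch, assuming the scores come from whatever rating process you use (human raters, LLM-as-judge, or both):

```python
DIMENSIONS = ["accuracy", "relevance", "format", "tone", "actionability"]
PASS_MARK = 4
REQUIRED_RUNS = 10


def signed_off(runs: list[dict[str, int]]) -> bool:
    """Each run maps dimension -> score (1-5). A prompt passes only if
    the last ten consecutive runs score 4+ on every dimension."""
    if len(runs) < REQUIRED_RUNS:
        return False
    return all(
        run[dim] >= PASS_MARK
        for run in runs[-REQUIRED_RUNS:]
        for dim in DIMENSIONS
    )
```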
Third, retrieval grounding wherever it’s feasible. If the model is answering questions about your company’s policies, point it at your company’s policies through a retrieval layer rather than asking it to rely on its training. The hallucination rate on grounded retrieval is dramatically lower than on free generation, because the model is constrained to the source material rather than improvising from a blurry general memory.
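Reduced to its bones, a grounded workflow has this shape. Note that `embed()` and `complete()` are placeholders for whatever embedding model and LLM client you actually use, not real library calls:

```python
import numpy as np


def embed(text: str) -> np.ndarray:
    raise NotImplementedError("plug in your embedding model")


def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client")


def answer_from_policies(question: str, chunks: list[str],
                         chunk_vectors: np.ndarray, k: int = 3) -> str:
    # Retrieve the most relevant policy excerpts first...
    scores = chunk_vectors @ embed(question)
    top = [chunks[i] for i in np.argsort(scores)[-k:]]
    # ...then constrain the model to them, with an explicit escape hatch.
    prompt = ("Answer using ONLY the policy excerpts below. "
              "If the answer is not in them, say you don't know.\n\n"
              + "\n---\n".join(top)
              + f"\n\nQuestion: {question}")
    return complete(prompt)
```

The escape hatch in the prompt matters as much as the retrieval: a grounded model that is allowed to say it doesn’t know hallucinates far less than one forced to produce an answer.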
Fourth, mandatory citations on factual claims. If the output makes a claim about a specific number, date, or rule, it must point to where that claim came from. Claims without citations get treated as suspect by default.
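This control can be as crude as a regex and still earn its keep. A deliberately simple sketch, where the `[source: ...]` tag convention is my assumption rather than any standard:

```python
import re

HAS_DIGITS = re.compile(r"\d")                 # numbers, dates, percentages
HAS_CITATION = re.compile(r"\[source:[^\]]+\]")


def uncited_claims(output: str) -> list[str]:
    """Return sentences that state a figure but carry no citation tag."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", output):
        if HAS_DIGITS.search(sentence) and not HAS_CITATION.search(sentence):
            flagged.append(sentence)
    return flagged
```

It won’t catch every claim, and it will flag some innocents. That’s fine; the point is that nothing numeric leaves the pipeline unexamined.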
Fifth, a human review gate proportionate to the cost of error. AI-drafted internal meeting notes? Light review. AI-drafted client-facing analysis with regulatory implications? Two-person sign-off, every time, no exceptions. The depth of the review should scale with the consequences of being wrong.
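Proportionality is easiest to enforce when the policy is written down rather than remembered. An illustrative tiering, with categories that are examples rather than a taxonomy:

```python
# Review depth scales with the consequence of being wrong.
REVIEW_POLICY = {
    "internal-notes":      {"reviewers": 1, "depth": "light skim"},
    "client-deliverable":  {"reviewers": 1, "depth": "full substance review"},
    "regulatory-exposure": {"reviewers": 2, "depth": "line-by-line, no exceptions"},
}


def required_review(output_kind: str) -> dict:
    # Unknown output kinds default to the strictest tier, not the loosest.
    return REVIEW_POLICY.get(output_kind, REVIEW_POLICY["regulatory-exposure"])
```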
On hallucinations specifically: I treat them like off-by-one errors in code. You don’t prevent them by hoping. You prevent them by structure. The teams that get burned are the teams that asked an LLM what year a regulation came in and didn’t check. The teams that don’t get burned are the ones who designed the workflow on the assumption that the model will be wrong sometimes, and built verification into the process rather than the prayer.
3. AI-generated output mistaken for expertise
Where have you seen organisations confuse AI output with genuine expertise, and what controls should analytics and consulting teams put in place before that causes real damage?
Everywhere. This is the failure mode I see most consistently in real client engagements, and it’s rarely the dramatic kind that ends up in the press.
The pattern goes like this. Someone runs a question through Copilot or ChatGPT. The output looks competent. It uses the right vocabulary, has the right shape, hits the right length. It gets pasted into a board paper, an RFP response, a strategy document, a client deliverable. Nobody reads it carefully because it looks plausible. Six weeks later, someone notices the numbers don’t reconcile, or the recommendation contradicts the company’s actual position, or the cited regulation doesn’t exist. By then it’s in front of the board, the client, or the regulator.
The reason this is dangerous — more dangerous than equivalent human error, in my view — is that competence-shaped output without competence behind it short-circuits the review process that would have caught a bad human draft. A junior’s draft has the tells of inexperience: shaky framing, missed angles, stilted prose. A senior reads it and instinctively reaches for the red pen. AI output has none of those tells. It reads like it was produced by someone who knew what they were doing, which means it doesn’t trigger the scrutiny it actually deserves.
The controls I’d put in place, in priority order:
A review gate where AI output enters the workflow at the same point a junior’s draft would. As a starting point, not a finished artefact. The senior signs off the substance, not just the formatting. If your process treats AI output as “already reviewed because the model is smart,” you have a quality problem regardless of how good the model is.
A disclosure expectation inside the team. People should be able to say, without embarrassment, “I used AI to draft this section — here’s what I checked and what I haven’t.” The teams I see getting into trouble are the ones where AI use has gone underground because of a vague sense that admitting to it looks lazy. The opposite is true. Concealed AI use is the dangerous kind.
A factual verification step on any output that makes specific claims about numbers, regulations, named individuals or named organisations. This is the cheapest control to implement and the one most consistently skipped.
And a cultural rule that I think is underrated: nobody presents AI output to a client or a board if they couldn’t defend it themselves under questioning. The test for whether something should leave the building isn’t whether the model produced it. It’s whether the human signing the email can stand behind every claim in it.
4. The billing model shift
As AI compresses the time to resolve issues, are you seeing a real move from billing time to billing value — and what does that mean for how consultancies price work?
Yes — in the firms that are paying attention, and not at all in the firms that aren’t. The split is becoming visible, and the gap between the two camps is widening fast.
Here’s the maths that’s forcing the conversation. If your offer is “we’ll spend 40 days doing a thing,” and AI lets your team do that thing in 12, you have three options. You can drop your fee proportionally and watch your margin collapse. You can keep your fee and quietly let your competition undercut you. Or you can change what you’re selling — from days to outcomes, from production to expertise, from project work to retained advisory.
Only the third option is sustainable, and it requires changing the commercial model rather than the delivery model. That’s harder than it sounds, because partner economics, billing systems, account planning, even how junior staff are evaluated, are all built around hours. Moving away from time-and-materials isn’t a pricing change. It’s a business model change.
What I’m seeing work in practice:
Outcome-based engagements where the price is fixed against a defined deliverable — a Copilot rollout to a defined population, a Fabric migration of a defined scope, a data quality programme with a defined target. The client gets cost certainty; the consultancy gets the upside of any AI-driven productivity gain. That upside funds the investment in better tooling and better people. It’s a virtuous loop.
Retained advisory at a fixed monthly fee, where the client buys access to expertise rather than blocks of time. This works particularly well for the kinds of conversations clients want to have repeatedly — “how should we think about this Microsoft licensing change,” “what’s the right way to position this Copilot rollout to the board,” “is our data estate ready for what we want to build.” Those questions don’t fit a project shape. They fit a relationship shape.
Productised offerings — a workshop, an assessment, a maturity benchmark — priced as products rather than projects. AI makes the back-office economics of these dramatically better, because the underlying analysis can be partly automated even if the front-of-room delivery isn’t.
What’s not working: holding the line on day rates while quietly using AI to deliver the same work in less time, and hoping the client doesn’t notice. They notice. And the moment they do, the conversation about your rates becomes a conversation about their AI procurement strategy, which is not where you want it to go.
5. AI vs automation
How often do clients actually need plain old automation rather than AI to solve the use case in front of them?
More often than the conversation in the market would have you believe. I’d put it at six or seven times out of ten.
This is one of the quieter ways AI has distorted client thinking. Two years ago, a finance team with a manual reconciliation problem would have asked for an automation — a Power Automate flow, a scheduled SQL job, a structured ETL pipeline. Today the same team asks for an AI agent. Sometimes that’s the right answer. Often it isn’t. The use case is rules-based, deterministic, and benefits from being predictable; an LLM is the wrong tool because it introduces variability where none was wanted.
My rough rule of thumb: if you can describe the rule in advance and it doesn’t change, it’s an automation problem. If the rule has to be inferred from context that’s ambiguous and varies case-by-case, it’s probably an AI problem. Most of what business operations actually want is the former dressed up in the language of the latter.
Where this matters commercially: AI projects are more expensive to build, more expensive to run, harder to govern, and harder to assure than automation projects. Choosing AI when automation would do is a tax the client pays for the rest of the system’s life. A consultant who can tell a client “you don’t need AI for this, you need a properly designed Power Automate flow” is genuinely valuable, and rare.
The hybrid pattern — deterministic automation for the predictable parts, AI for the genuinely ambiguous edges — is where most well-built systems are landing. The skill isn’t “AI versus automation.” It’s knowing where one ends and the other begins, and being honest with the client about which they need.
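If I had to draw that boundary in code, it would look something like the sketch below. The invoice scenario and every name in it are invented for illustration:

```python
KNOWN_SUPPLIERS = {"S-001", "S-002"}  # illustrative


def llm_classify(notes: str) -> str:
    raise NotImplementedError("plug in your model call")


def route_invoice(invoice: dict) -> str:
    # Deterministic tier: rules you can state in advance and that don't change.
    if invoice["amount"] == invoice["po_amount"]:
        return "auto-approve"
    if invoice["supplier_id"] not in KNOWN_SUPPLIERS:
        return "hold-unknown-supplier"
    # Ambiguous tier: only what the rules can't decide pays the cost
    # (and accepts the variability) of a model call.
    return llm_classify(invoice["free_text_notes"])
```

The ordering is the point: the cheap, predictable, auditable path runs first, and the model only sees the residue.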
6. The longer arc — careers, CS degrees, and AGI
Looking 10–15 years out: will there still be senior consultants? Is a CS degree still worth it for a young person today? And if AGI-class systems do arrive, how do you see the human–AI working relationship evolving?
Three questions in one, and they deserve separate answers.
Will there still be senior consultants?
Yes, and there’ll be fewer of them, and the ones who are left will be doing different work. The bits of senior consulting that depend on judgement, relationships, and pattern recognition across many engagements are not going anywhere — if anything, they become more valuable as the production tier gets cheaper. The bits that depend on access to information, frameworks, or analysis that anyone can now generate from a model are going to compress hard.
The shape of the senior consultant’s week changes. Less time on producing materials, more time in the room with clients. Less time briefing a junior team, more time directly engaging with the model and the data. The job becomes more concentrated and more demanding, not less.
Is a CS degree still worth it?
Yes, but not for the reasons it used to be.
Five years ago you could argue a CS degree was worth it because it taught you to write code and there was a market for people who could write code. That argument is weakening. AI writes serviceable code now, and will write better code over the next few years.
The argument that holds up is different. A CS degree is worth it because it teaches you to think structurally about problems, to reason about systems that have hidden state, to debug your way through ambiguity, and to read code well enough to know when the AI got it wrong. None of those skills depreciate when the model gets better. They appreciate.
My advice to a young person today would be: yes to the CS degree, but don’t mistake the credential for the skill. Pair it with a domain — finance, healthcare, energy, biology — because pure technical skill is being commoditised, and applied technical skill in a meaningful problem space is not.
And if AGI shows up?
Honest answer: I don’t know, and I’m sceptical of anyone who tells you they do.
What I’m fairly confident about is that the path between here and “AGI” — if it exists at all — will be longer, messier and more contested than the loudest voices on either side suggest. The hype merchants want it next year because their valuation depends on it. The dismissers want it never because their worldview depends on that. Neither tribe has a great track record.
If something genuinely AGI-shaped does arrive, the human–AI relationship probably looks less like a tool and more like a colleague — with the genuine ambiguities about agency, accountability and trust that come with that. That’s a different conversation, and I don’t think we’re close enough to it for my speculation to be worth much.
In the meantime, what I tell clients is this: build for the AI you actually have, not the AI someone’s pitching at you. The current generation of models is genuinely useful in specific, narrow ways and unreliable in others. Designing your processes around what they can actually do today — with room to expand as they get better — is a far more durable strategy than betting your business model on a hypothetical AGI in 2030.
A final thought
If there’s a thread running through all six of these answers, it’s this: AI doesn’t change the fundamentals of professional work. It changes the cost structure. The people who win in this transition are the ones who recognise which parts of their work were always more production than judgement, redesign those parts to take advantage of the new economics, and reinvest the time saved into the parts that genuinely require a human.
That’s a raise, not a replace, for almost everyone willing to do the work to adapt. The work, however, is real.
Thanks again to everyone who submitted a question on the day. The post you’re reading exists because you took the time to type something into your phone in a darkened conference hall, and the questions you wrote were better than I had any right to expect. I owe you the answers, and now you have them.
Gethyn Ellis is a Microsoft Certified Trainer and Data & AI consultant. Find more at gethynellis.com.