Most commentary on AI in professional services sits at one of two poles: uncritical enthusiasm or reflexive scepticism. Neither is useful. I build AI tools alongside advisory work, which means I have a grounded view of both what AI can do and where it breaks down.

AI genuinely compresses research timelines

A task that previously took an analyst two days (gathering industry context, competitor profiles, and market structure from disparate sources) can now be done in two to four hours with a well-structured workflow. The bottleneck shifts from gathering information to evaluating it. That is a meaningful change in how project time is allocated.

Large document synthesis is a second genuine capability. On a typical commercial due diligence (CDD) engagement, the data room contains hundreds of files. AI can read, tag, and surface patterns across those documents faster than any team. Concentration issues buried across multiple contract files, pricing inconsistencies in a rate card history: these become visible in hours rather than days.
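
To make "read, tag, surface" concrete, here is a minimal sketch of the pipeline shape, assuming a Python workflow. Everything in it is illustrative: classify_document stands in for a model call (it is not a real API; here it is a keyword stub), and the directory name is invented. The point is the structure, not the vendor.

    from pathlib import Path
    from collections import Counter

    # Hypothetical placeholder for a model call. In a real pipeline this
    # would send the document text to an LLM and return tags such as
    # "customer_contract" or "rate_card".
    def classify_document(text: str) -> list[str]:
        tags = []
        if "rate card" in text.lower():
            tags.append("rate_card")
        if "master services agreement" in text.lower():
            tags.append("customer_contract")
        return tags

    def tag_data_room(root: str) -> dict[str, list[str]]:
        # Read and tag every text file in the data room.
        return {
            str(path): classify_document(path.read_text(errors="ignore"))
            for path in Path(root).rglob("*.txt")
            if path.is_file()
        }

    def surface_patterns(tagged: dict[str, list[str]]) -> Counter:
        # Count tag frequency across files, so concentration shows up as a
        # number: e.g. the same customer in 40 of 60 contract files.
        return Counter(tag for tags in tagged.values() for tag in tags)

    if __name__ == "__main__":
        tagged = tag_data_room("data_room")  # hypothetical directory name
        for tag, count in surface_patterns(tagged).most_common(10):
            print(f"{count:4d}  {tag}")

The useful property is that the tagging pass is cheap to rerun across the whole data room, so a pattern only has to be noticed once to be counted everywhere.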

AI cannot form a point of view

AI can summarise evidence and identify patterns. It cannot weigh ambiguous, conflicting signals and make a call under uncertainty, which is most of what commercial strategy actually is. A market where two plausible theses are both supported by evidence requires a judgment. AI will present both theses with equal confidence. That is precisely the opposite of what a deal team needs.

AI also cannot replace customer interviews. The value of an interview is not only the information provided. It is the judgment formed in real time: the hesitation before an answer, the unprompted mention of a competitor, the gap between what a customer says and what they mean. That requires a human in the conversation.

The Last-Mile Rule

AI accelerates everything up to the point where a judgment call is required. Forming a view, weighing conflicting evidence, making the recommendation: those remain human tasks. Workflow design should reflect that explicitly, not assume it will self-correct.

Confident-sounding output is not the same as correct output

AI-generated analysis can be wrong in ways that are invisible from the output. The prose is fluent, the structure is logical, the citations look credible. But the synthesis may have missed a critical nuance or weighted a weak signal too heavily. This risk is specific and manageable, but only if teams are designed to catch it.

Use AI early and heavily; keep humans on the conclusions

Use AI for research compression and document synthesis. The output becomes an input to human analysis, not the analysis itself. Do not use AI to write the conclusions. The verdict on revenue quality, competitive position, or management credibility should come from a person who has done the interviews and formed a view.

Teams using AI also need senior oversight earlier in the process. If a junior analyst is using AI to accelerate research, errors in the output can propagate into the analytical layer before a senior reviewer catches them. Build the workflow to prevent that, not to assume it corrects itself.
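
One way to make that gate structural rather than procedural, sketched minimally in Python. The names here (Finding, approve, add_to_analysis) are invented for illustration and not drawn from any real system; the idea is that unreviewed AI output is unable, by construction, to enter the analytical layer.

    from dataclasses import dataclass

    @dataclass
    class Finding:
        # An AI-generated research finding moving through the workflow.
        claim: str
        source: str
        reviewed_by: str | None = None  # set only by a named senior reviewer

    def approve(finding: Finding, reviewer: str) -> Finding:
        finding.reviewed_by = reviewer
        return finding

    def add_to_analysis(analysis: list[Finding], finding: Finding) -> None:
        # The gate: unreviewed output cannot enter the analytical layer.
        if finding.reviewed_by is None:
            raise ValueError(f"Unreviewed finding blocked: {finding.claim!r}")
        analysis.append(finding)

    if __name__ == "__main__":
        analysis: list[Finding] = []
        f = Finding("Top customer is 38% of revenue", "contracts/msa_07.txt")
        add_to_analysis(analysis, approve(f, reviewer="senior_partner"))  # passes
        try:
            add_to_analysis(analysis, Finding("Pricing stable since 2019", "rate_cards"))
        except ValueError as err:
            print(err)  # the workflow caught it, not the reviewer's attention

The design choice is that the check lives in the workflow, not in reviewer discipline, which is exactly the self-correction the paragraph above warns against assuming.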