If you've ever tried something new, you'll know it rarely (actually, never) goes exactly according to plan.
That's as true for learning to ride a bike (yes, here we go again with the bike analogy) as it is for transforming a business.
A typical "AI transformation journey" (including several I've personally been part of) usually looks something like this:
In PowerPoint:
Data → AI → Automation → Value.
In reality:
Data → Confusion → Alignment → Learning → Rework → Partial Value → Culture Shift → Real Value (maybe).
The reason? Change is hard.
Executives often underestimate the gravitational pull of "how things have always been done."
They overestimate how ready their data, processes, and people are. And they almost always hit a wall where the technology is no longer the problem: the organization is.
It starts with data… and humility
Most "AI journeys" begin with a painful realization: your data is worse than you thought.
Not because people have done a bad job, but because retail data was built for reporting, not for reasoning.
You find product life cycles that don't make sense, duplicate SKUs that no one noticed, inconsistent hierarchies, multiple "truths" for the same product, and systems that speak entirely different languages.
And then there's the data that simply isn't used, and therefore never checked.
But here's the first important point: that's not failure; that's the starting point.
The smart move isn't to wait for perfect data. It's to design systems that improve data as they learn.
Progress beats purity, every time.
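To make "improve data as they learn" concrete, here is a minimal sketch of a data-quality pass that surfaces duplicate SKUs and conflicting hierarchies as findings instead of blocking on a perfect dataset. All field names and records here are invented for illustration, not taken from any real system:

```python
from collections import defaultdict

# Hypothetical product records; the field names are illustrative only.
products = [
    {"sku": "A100", "name": "Oat Milk 1L", "category": "Dairy"},
    {"sku": "A100", "name": "Oat Milk 1 Litre", "category": "Dairy"},   # duplicate SKU
    {"sku": "B200", "name": "Espresso Beans", "category": "Coffee"},
    {"sku": "B201", "name": "Espresso Beans", "category": "Beverages"}, # two "truths" for one product
]

def audit(products):
    """Return data-quality findings rather than rejecting the dataset."""
    findings = []
    by_sku = defaultdict(list)
    by_name = defaultdict(set)
    for p in products:
        by_sku[p["sku"]].append(p)
        by_name[p["name"]].add(p["category"])
    # Same SKU appearing more than once is a duplicate worth flagging.
    for sku, rows in by_sku.items():
        if len(rows) > 1:
            findings.append(f"duplicate SKU {sku}: {len(rows)} records")
    # The same product name mapped to different categories signals an inconsistent hierarchy.
    for name, cats in by_name.items():
        if len(cats) > 1:
            findings.append(f"inconsistent hierarchy for '{name}': {sorted(cats)}")
    return findings

for finding in audit(products):
    print(finding)
```

The point of the design is that the audit runs inside the pipeline and produces a work queue, so data quality improves with every cycle rather than gating the start.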
Then come the people
The second wave of resistance isn't technical; it's emotional.
When AI enters the room, roles shift (or at least, they should):
- Planners become validators.
- Analysts become interpreters.
- Managers become facilitators.
And here's where many companies miss the target entirely.
If you blindly chase AI for its own sake, you'll miss the mark.
But if you forget about the people, you'll miss it by even more.
Just to illustrate: how many of you, in your last technology discussion, said something like "It needs to have AI"?
I've personally seen plenty of RFX documents that simply ask, "Does your solution have AI?", as if that single checkbox were the measure of intelligence.
But what does that even mean?
And when you finally get this "magical AI," what happens?
Your users hesitate to use it, because it feels like a black box, spitting out answers they don't fully understand or control.
That's not innovation; that's alienation.
It's not about building mysterious systems.
It's about using AI to enhance human reasoning: helping teams draw conclusions faster and connect dots they couldn't see before.
Just as you trust your Product Manager or Logistics Manager to make a call you don't fully grasp, you need to extend that same trust to AI, within clear guardrails and a shared understanding of the goal.
This is where it often gets uncomfortable.
But it's also where real transformation happens: when organizations stop asking, "Can we trust the machine?" and start asking, "Can the machine trust us?"
Because the quality of AI outcomes depends entirely on the quality of human input: the signals, context, and guardrails we provide.
And here's a practical step forward:
Start asking your technology providers how they leverage AI to support users, instead of simply, "Does it have AI?"
That one shift in thinking will put you on the right path.
Process before prediction
Here's another truth: most AI failures are process failures in disguise.
If pricing, promotions, and supply decisions are still made in silos, it doesn't matter how smart your models are.
AI can paper over certain gaps, but it isn't a patch for bad process; it's an amplifier. It scales whatever you already have, good or bad.
Example:
If replenishment runs on daily human overrides because pricing updates come too late, your "AI optimization" will simply automate inefficiency.
Fix the timing, connect the data, and suddenly the same model delivers value.
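The timing fix can be sketched as a simple freshness guard: auto-order only when the pricing signal is recent, otherwise escalate to a human. The four-hour threshold and all field names below are assumptions made up for illustration:

```python
from datetime import datetime, timedelta, timezone

# Illustrative freshness threshold; in practice this depends on your pricing cadence.
MAX_PRICE_AGE = timedelta(hours=4)

def replenishment_decision(demand_forecast, price_updated_at, now=None):
    """Auto-order only when the pricing signal is fresh; otherwise escalate.

    Escalating to a human is the 'daily override' described in the text;
    fixing the upstream timing makes this branch disappear.
    """
    now = now or datetime.now(timezone.utc)
    if now - price_updated_at > MAX_PRICE_AGE:
        return {"action": "escalate", "reason": "stale price feed"}
    return {"action": "order", "quantity": round(demand_forecast)}

fresh = datetime.now(timezone.utc) - timedelta(hours=1)
stale = datetime.now(timezone.utc) - timedelta(hours=12)
print(replenishment_decision(120.4, fresh))  # fresh price: auto-order
print(replenishment_decision(120.4, stale))  # stale price: escalate to planner
```

Same model, same forecast; only the freshness of the inputs decides whether automation helps or merely speeds up the old workaround.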
That's why the "AI revolution" in retail won't be won by data scientists alone.
It'll be won by companies that rethink their operating model around decisions, not departments. The next post in this series will dive deeper into this topic.

Embrace the ugly phase
The messy middle is where the real magic happens:
• When the pilot that looked perfect in one category falls apart in another, and you realize the issue isn't the model, it's the assortment logic.
→ Solution: Build adaptive models per category instead of a one-size-fits-all rollout.
• When a planner overrules an AI recommendation, and the team takes the time to understand why.
→ Solution: Treat overrides as insights, not errors. Feed that reasoning back into the system.
• When IT, operations, and merchandising finally realize they're solving the same problem from different angles.
→ Solution: Establish shared KPIs around outcomes (availability, margin, waste) instead of function-specific metrics.
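The "overrides as insights" idea can be sketched as a simple feedback log that captures why planners overrule the model and surfaces the recurring reasons. The structure and example reasons below are assumptions, not a reference implementation:

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class OverrideLog:
    """Capture why planners overrule the model, so patterns feed back into it."""
    entries: list = field(default_factory=list)

    def record(self, sku, model_qty, human_qty, reason):
        # Store the full context, not just the fact that an override happened.
        self.entries.append(
            {"sku": sku, "model_qty": model_qty, "human_qty": human_qty, "reason": reason}
        )

    def top_reasons(self, n=3):
        """Most common override reasons: candidate signals the model is missing."""
        return Counter(e["reason"] for e in self.entries).most_common(n)

# Hypothetical overrides recorded over a week.
log = OverrideLog()
log.record("A100", 80, 140, "local festival not in calendar")
log.record("B200", 50, 20, "supplier delay")
log.record("A101", 60, 110, "local festival not in calendar")
print(log.top_reasons())
```

A reason that keeps recurring (here, local events missing from the demand calendar) is exactly the kind of signal worth feeding back into the model instead of treating each override as an error.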
That's the phase most organizations try to skip, and the one you have to survive before you scale.
The difference between those who win with AI and those who donāt isnāt technology maturity.
It's organizational stamina.
The real question
Everyone wants to know, "How fast can we get value?"
The smarter follow-up question is:
"How fast can we learn?"
Because every failed pilot, every integration headache, and every uncomfortable meeting is part of the learning curve that separates hype from impact.
The messy middle isn't a detour.
It is the journey.
Final thought
AI in retail isn't a straight line from data to ROI.
It's a loop: a constant evolution of data, trust, and process.
The winners will be those who stop trying to skip the mess and start mastering it.
