By Dean Waye & Alan Gonsenhauser
AI will soon be able to simulate any public company’s next moves.
The only thing it can’t decode is a genuinely non-obvious idea.
For three years, the fear around generative AI has been homogenization: the idea that if every company uses the same models, they’ll end up sounding the same, building the same products, and blending into a beige, algorithmic mush.
That fear is valid. But it’s not the real danger.
The deeper threat to individual companies isn’t sameness. It’s unavoidable transparency.
AI isn’t just making us average. It’s making us predictable.
Adopt the same outputs from the same tools as your competitors, and you start thinking the same way: the same logic, the same instincts, the same blind spots. Your strategic playbook becomes a glass playbook, visible to anyone who knows how to look.
The Logic of the Machine: Predictable by Design
Generative AI is built to produce the probable. It isn’t trying to invent new paths. It’s trying to guess what comes next, based on what it’s seen before.
When you prompt an LLM, it doesn’t create; it pattern-matches. It assembles the most common, most statistically likely response from everything it’s been trained on.
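That mechanism can be shown with a toy next-token predictor — an illustrative sketch with a made-up ten-word corpus, not how production LLMs actually work (they use learned probabilities over vast corpora, not raw bigram counts):

```python
from collections import Counter

# Toy "language model": always emit the most frequent continuation
# seen in the training data -- the probable answer, never the novel one.
corpus = "the brand is safe the brand is familiar the brand is safe".split()
bigrams = Counter(zip(corpus, corpus[1:]))  # count each (word, next_word) pair

def most_probable_next(word: str) -> str:
    """Return the statistically likeliest next word after `word`."""
    candidates = {pair: n for pair, n in bigrams.items() if pair[0] == word}
    return max(candidates, key=candidates.get)[1]

print(most_probable_next("is"))  # → "safe" (seen twice, vs. "familiar" once)
```

The model never says anything the corpus didn’t already say most often — which is the whole point: scale that behavior up, and "most probable" becomes the default strategy for everyone using the same tool.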
This feeds the Homogenization Flywheel:
1. AI trains on existing content.
2. Businesses use it and generate new content that mimics the old.
3. That content becomes the training data for the next model.
4. Round and round it goes, accelerating the collapse into consensus.
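The collapse in step 4 can be made visible with a toy simulation (purely illustrative numbers and noise levels — an assumption-laden sketch, not a claim about real training pipelines):

```python
import random
import statistics

# Toy flywheel: each "generation" of content is sampled near the mean of
# the previous generation, so diversity (standard deviation) collapses.
random.seed(42)
content = [random.gauss(0, 1) for _ in range(1000)]  # diverse human content

for generation in range(5):
    mean = statistics.fmean(content)
    spread = statistics.stdev(content)
    # The model reproduces the most probable (average) content,
    # with only half the previous generation's variation.
    content = [random.gauss(mean, 0.5 * spread) for _ in range(1000)]

print(round(statistics.stdev(content), 3))  # far below the original 1.0
```

After five rounds, the spread has shrunk by an order of magnitude: everyone is publishing slight variations on the same consensus.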
But why do so many leaders accept this? Because predictability feels safe.
A bold, human-led idea is risky. It requires conviction. It invites scrutiny. But an AI-generated strategy comes wrapped in data, charts, and probability. It’s a ready-made justification for playing it safe. It feels logical. It looks defensible. And that’s exactly how strategic mediocrity becomes institutionalized.
From Tactical Convergence to Strategic Blindness
This plays out across every function:
• Marketing: Teams prompt the AI for campaign ideas and get the same familiar tropes: emotional families, aspirational voiceovers, seasonal offers. Every brand’s ad looks the same.
• Product: Teams ask for new features and get the same greatest-hits list from Reddit and review sites: dashboards, integrations, AI summaries. Feature parity becomes the goal. Innovation disappears.
But these are surface symptoms. The deeper danger is business model convergence.
An AI trained on Blockbuster’s 2004 data would have told them to optimize their late fees and store footprint, not burn it all down and build Netflix. Kodak wouldn’t have been told to kill the film business. AI doesn’t challenge the model it’s been trained on. It reinforces it.
AI looks backward. Vision looks forward.
The Glass Playbook: When Everyone Knows Your Next Move
Once you’re predictable, you’re exposed.
To customers, you become a commodity. You lose edge, voice, and meaning. The algorithm-approved strategy that promised broad appeal ends up creating no appeal at all. Your brand becomes invisible not because it failed, but because it succeeded in being “safe.”
To competitors, you become legible. If they can guess what the AI told you, they can intercept it:
• Beat you to market.
• Hijack your ad channels.
• Acquire your target partner.
This isn’t theory. With the right prompt, your rivals can simulate your strategy before you execute it. Strategic convergence becomes strategic vulnerability. And the race to stand out turns into a race to the bottom where the only lever left is price.
The Centaur Strategy: Partnering With the Machine, Not Obeying It
The way out isn’t to abandon AI. It’s to use it properly.
In a Centaur Strategy (a term from chess, where the first AI+human teams formed), AI handles convergence. You handle divergence. AI analyzes, accelerates, and drafts. You question, reframe, and invent. That’s the split.
Your advantage lies in Cognitive Alpha, the human ability to think sideways, to spark originality, to say, “what if we didn’t do any of that?”
The key is in the prompt. Compare these:
“Write a tagline for our coffee brand.”
vs.
“Write a tagline in the voice of a 19th-century existentialist.”
“Develop a launch plan for our SaaS app.”
vs.
“Build a go-to-market strategy based on Mongol military tactics.”
The second version in each pair pulls the AI out of its ruts. You’re no longer asking for the most probable answer… you’re forcing novelty, guided by taste and judgment. You’re directing, not outsourcing, the thinking.
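The pattern is simple enough to script: anchor a generic ask to an unrelated frame. Here is a hypothetical helper — the frame strings and function name are invented for illustration, and this is prompt construction only, not any vendor’s API:

```python
import random
from typing import Optional

# Hypothetical divergent frames; in practice your team curates these.
DIVERGENT_FRAMES = [
    "the voice of a 19th-century existentialist",
    "Mongol military tactics",
    "the rules of improv comedy",
]

def divergent_prompt(task: str, frame: Optional[str] = None,
                     seed: Optional[int] = None) -> str:
    """Wrap a plain task in an unexpected constraint to force novelty."""
    if frame is None:
        frame = random.Random(seed).choice(DIVERGENT_FRAMES)
    return f"{task} Constrain every idea to fit {frame}."

print(divergent_prompt("Write a tagline for our coffee brand.", seed=0))
```

The helper doesn’t make the AI creative; it makes the *request* improbable, which is what pulls the model off its statistical center of gravity. The human still supplies the frames — and the judgment about which outputs are worth keeping.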
But here’s the harder truth:
Who in your company can write that kind of prompt? And then write the next step?
If your company culture is built for efficiency and fit, you’ve probably filtered out the one thing AI can’t provide: divergent thinking. Neurodivergent minds. Contrarian minds. Cognitive misfits. The humans who zig when everyone else zags.
No Centaur strategy works without human contrast. Without friction, you get no spark.
Final Warning: Unconventional Is the Only Moat Left
The AI era won’t reward companies for using tools the same way everyone else does. It will punish them for it.
When your strategy becomes predictable, it becomes obsolete.
When your voice becomes average, it becomes ignored.
When your thinking becomes transparent, it becomes exploitable.
AI will soon be able to simulate any public company’s next move.
The only thing it can’t decode is a genuinely non-obvious idea.
Imagination is the last asymmetric advantage.
Your predictability is the biggest risk you’re not tracking.
Don’t think harder.
Think differently.