
Somewhere in a polished conference room in Davos, January 2025, a senior executive from a Fortune 500 company wrapped up his fifty-slide AI strategy deck and felt proud. The word “strategy” appeared 34 times. The number of board-level decisions that would change because of it: zero.

I walked into the Stargate announcement differently. When Donald Trump stood next to Sam Altman, Masayoshi Son, and Larry Ellison to announce a $500 billion investment in AI compute, 10 gigawatts of new data center capacity across America, I did not think “AI feature.” I thought: geopolitics has migrated from think-tank chessboards straight into electricity grids. The check engine light on civilization just turned red, and most boards are still fiddling with the dashboard display.

Trump, Altman, Son, and Ellison lined up like a hostage video for the venture class. Call it a tech story if you want. The electricity bill begs to differ.

That is the thesis of everything I write on this pillar. AI is a strategic force, the way rail was in 1870, or containerization in 1960. It is redrawing org charts, supply chains, trust architectures, labor markets, and geopolitical power maps simultaneously. Most executives are treating it like a shiny productivity upgrade.

That scraping sound you hear is the gap between ambition and comprehension, hitting reality at speed.

What “AI as a strategic force” actually means

Most boards have an AI problem. Not the one they think.

The problem is vocabulary dressed up as strategy. A company announces an “AI center of excellence.” Slides are commissioned. A steering committee is formed. Consultants are paid (well). Press releases are drafted. Nothing in the core of the business changes. I have sat in enough of those rooms to recognize the pattern: executives floating miles above reality, trading abstractions while the ground shifts underneath them.

Four specific failure modes keep recurring.

The vocabulary problem. Petra De Sutter, one of Belgium’s sharpest political minds, stood up at UGent’s New Year reception in January 2026 and read out AI-generated quotes that turned out to be completely fabricated. The accidental martyrdom that followed was entirely preventable. The lesson is not “ban the tool.” Vocabulary without comprehension is a liability. And it scales.

The Red Monkey problem. When AI compresses the middle layers of organizations, companies reach for conformity. Safe hires. Smooth operators. Anyone who generates friction gets quietly sidelined. I wrote about this in “Save the Red Monkey”: the contrarian, the awkward genius, the person who builds weird things in the corner and occasionally saves the company’s decade. As intelligent machines absorb routine cognitive work, the Red Monkey is the last thing you want to lose. Most companies lose them first.

The algorithm sacrifice. The LinkedIn algorithm demands a sacrifice, and most corporate AI strategies are built around the same logic: generate volume, optimize for engagement, repeat. AI-generated posts are already killing feeds at scale. The broader failure is organizations that have confused automation with strategy. Automating the wrong things faster is a more efficient route to irrelevance.

The horizon problem. Your five-year plan is a fantasy novel with a Gibson dystopian ending. AI is moving on a quarterly clock now. IBM announced a hiring freeze for roles “that AI could do” in 2023, then watched as the actual tasks of those roles scattered into six other departments. The plan dissolved before the ink dried.

The five fault lines boards keep missing

1. Compute as the new geopolitics

Start with the boring nouns. Power. Heat. Water. Copper. Land.

Stargate is $500 billion of compute and 10 gigawatts of new American grid load, announced at that press conference in January 2025. I wrote about the Pigs in Space dimension of it: Elon Musk and SpaceX filing with the FCC for authority to deploy up to one million satellites as solar-powered, on-orbit processing capacity. One million. That number drags Kessler Syndrome into the conversation. It forces procurement teams at every serious company to ask: who controls the compute I depend on? Where does it run? What happens when the power goes out in Memphis?

2026 is the year friction shows its teeth. Energy costs are climbing. AI infrastructure demands governance that most governments are not ready to provide.

Compute is a geopolitical dependency now. Boards that still think of “cloud” as a neutral utility are about two years behind.

2. Agentic AI and the end of the org chart

The org chart made sense when humans did all the work. The org chart is dead; long live the work chart. When AI agents can take instructions, execute multi-step tasks, check their own outputs, and hand off to other agents without a human in the loop, you no longer have a pyramid. You have a network of tasks, decisions, and value flows that cuts across traditional reporting lines.

Agentic AI is killing Schrödinger’s cat: the ambiguity that once gave managers their power (only they knew the state of a project, a relationship, a deal) collapses when an agent can resolve uncertainty in real time. This rearranges who knows what, and therefore who gets paid for knowing it. The middle managers who used to be the only ones with the full picture are about to discover what redundancy feels like from the inside.

The first boards to understand this will redesign their organizations around orchestrating intelligence rather than reporting it. The rest will spend 2026 wondering why their middle management layer keeps requesting budget they cannot justify.

When Zuckerberg bought Moltbook (a social network built for AI agents to argue about consciousness and invent lobster religions, which I covered in some detail), the real story under the spectacle was the registry layer: whoever owns the infrastructure where AI agents verify who they represent and interact on behalf of real people owns a very interesting map of the future. An agentic economy problem. Not a feature decision.

3. Trust collapse and AI slop

Edelman’s 2026 Trust Barometer found we have entered an age of insularity: people retreating into smaller, safer circles, refusing to trust anyone outside their tribe. Business remains one of the last pools of institutional trust. That pool is now shrinking under the weight of AI-generated garbage.

AI slop is the new ambient noise. Feeds full of confident, hollow, machine-generated text that sounds plausible and means nothing. The Silly Serial Experts’ Parade is marching naked through AI Wonderland, and most audiences can feel it, even if they cannot name it.

The signal premium goes to whoever sounds like an actual human.

This is why the age of insularity matters to anyone building a brand in 2026. Trust is miserly now. It goes to the familiar. The organization that builds operational trust through incentives, local roots, and actual dialogue has a durable asset. The one that automates its voice into generic slop loses that asset faster than the algorithm can measure it.

4. Layoffs as strategy theater

Amazon laid off 14,000 corporate roles. Microsoft cut 15,000 heads in 2025, after the 1,900 from gaming the year before. IBM announced its hiring freeze. Salesforce moved in the same direction. These numbers get reported as “AI efficiency gains.” Some of them are. Most are not.

AI’s future-of-work tsunami is real, but the theater around it is damaging boards in ways they do not see yet. When a company announces AI-driven headcount reductions without a coherent picture of what tasks those people were doing, where those tasks are going, and who is responsible for the outcomes, it creates what I call a competency black hole. The work does not disappear. It just becomes nobody’s explicit job, until something breaks.


The real risk of aggressive AI-driven restructuring is the loss of the institutional knowledge, the judgment, and the friction that those people represented. Strip that out of an organization in 18 months of rapid automation, and you get a fast, hollow machine.

5. Bots that cannot read Asimov

I have lived with Asimov in my luggage for decades (his collected robot stories are in the bag I take to Austin every March, sitting under the mechanical watches and the Bordeaux I smuggle past the airline weight limit). I did the Asimov audit on the AI systems I interact with: stress-tested them against his Three Laws as a proxy for ethical constraint.

The results were sobering. These systems are optimized for completion. Designed to finish the task, not to ask whether the task should be finished.

Read Asimov before you ship another bot. His 1942 robot laws still outclass most corporate AI ethics frameworks published in 2024. The question every board should be asking: if my AI assistant got confused about who it was supposed to protect, what would it do? Most cannot answer that. Most have not tried.

Amy Webb and Scott Galloway put it cleanly at SXSW 2025: the future is goddamn complicated, and the people building these systems are not waiting for governance to catch up. The regulatory lag is measured in years. The deployment clock is measured in months.

Something will break. Probably in a sector where nobody expected it.

What this means for executive practice

Most of what I do with clients does not start with AI. It starts with what they know.

First exercise. I ask the leadership team to describe, in plain sentences, what their AI does when something goes wrong. Not the vendor pitch. Not the compliance deck. What happens. The room develops a very specific kind of silence. Three people examine the ceiling. One reaches reflexively for a vendor brochure. Somebody starts typing with great purpose. That silence is the diagnostic. That is where the work begins.

Concretely, this is what it looks like.

Vocabulary before strategy. Before any AI strategy conversation, leadership teams need a shared working vocabulary. Not technical depth: working vocabulary. What is a model, what is an agent, what is inference, what is fine-tuning, what is a hallucination, and why does it matter. I run workshops on this. Not to make executives into engineers. To make them capable of asking the right questions of the people who build the systems.

The Asimov audit. Stress-test your deployed AI systems against adversarial scenarios. Who does it harm? Who does it obey? Who profits, and who pays? If you cannot answer these in an afternoon, you have thought about launch. You have not thought about governance.
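
To make that concrete, here is a minimal sketch of what the afternoon should leave behind, in Python. The system name, owner, and answer are invented placeholders, not a framework I ship; the point is that every deployed system gets written answers on file, and an unanswered question is a governance gap by definition.

```python
# A toy Asimov-audit record. Hypothetical names throughout; the structure
# is what matters: written answers per system, gaps surfaced automatically.
from dataclasses import dataclass, field

AUDIT_QUESTIONS = [
    "Who could this system harm, and how?",
    "Whose instructions does it follow when there is a conflict?",
    "Who captures the value it creates?",
    "Who absorbs the costs when it fails?",
]

@dataclass
class SystemAudit:
    system: str   # e.g. "customer-refund agent" (invented example)
    owner: str    # the accountable human, by name
    answers: dict = field(default_factory=dict)  # question -> written answer

    def gaps(self) -> list[str]:
        # Unanswered questions are governance gaps, not TODO items.
        return [q for q in AUDIT_QUESTIONS if not self.answers.get(q, "").strip()]

audit = SystemAudit(system="customer-refund agent", owner="VP Operations")
audit.answers[AUDIT_QUESTIONS[0]] = "Customers denied refunds on hallucinated policy."
print(audit.gaps())  # three questions still open: that is the afternoon's to-do list
```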

The work chart, not the org chart. Map what your AI agents do. Every task, every decision, every handoff. Then map the gaps: where does accountability dissolve? Where does institutional knowledge live only in a person who just got restructured away? That map is your real AI risk register. More honest than any strategic plan.
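
One way to make that map queryable rather than decorative, again as a sketch with invented task names: represent tasks and handoffs as data, then ask where an agent-run task has no named owner. That is exactly where accountability dissolves.

```python
# Toy "work chart": tasks, handoffs, accountability. All names invented.
tasks = {
    "ingest_invoices": {"actor": "agent", "accountable": "Finance Ops lead"},
    "flag_anomalies":  {"actor": "agent", "accountable": None},  # owner restructured away
    "approve_payment": {"actor": "human", "accountable": "Controller"},
}
handoffs = [("ingest_invoices", "flag_anomalies"),
            ("flag_anomalies", "approve_payment")]

# Agent-run tasks with no named owner are the accountability gaps.
orphans = [name for name, t in tasks.items()
           if t["actor"] == "agent" and t["accountable"] is None]
print("accountability gaps:", orphans)   # -> ['flag_anomalies']

# Handoffs into an orphaned task are where something eventually breaks.
at_risk = [edge for edge in handoffs if edge[1] in orphans]
print("at-risk handoffs:", at_risk)      # -> [('ingest_invoices', 'flag_anomalies')]
```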

Trust as operational design. The age of insularity means trust has to be earned through operational choices: incentives, team composition, local roots, dialogue design. Build the structures that generate trust, not the campaigns that claim it.

Protect the Red Monkeys. When AI compresses the routine cognitive work in your organization, the people who remain should be the ones who build, imagine, and challenge. Save the Red Monkey. The cost of losing one real contrarian in an era of AI-accelerated conformity is higher than any efficiency gain you model in the restructuring spreadsheet.

I do this work through keynote speaking (Ragnarok is the flagship, designed for boards and leadership teams who want the honest version), strategic communications advisory, executive training, and direct advisory engagements. If you want the comfortable version, there are plenty of consulting firms who will provide it. I am not that.

Frequently asked questions

These are the questions I get most often, in boardrooms, at SXSW panels, over email, and occasionally from Tara, who is now old enough to have opinions about all of this (and frequently does).

What is agentic AI?

Agentic AI refers to systems that can take a goal, break it into steps, execute those steps autonomously, check their own outputs, and hand off to other agents or systems without constant human intervention. Think of it as AI that acts, not just responds. The shift from chatbot to agent is the shift from a very fast search engine to something closer to a junior employee who works around the clock and never gets tired of repetitive tasks. The strategic implication: your org chart no longer describes how work gets done.
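
For the technically curious, the whole pattern fits in a few lines. This is a minimal sketch of the loop, not any vendor's SDK; `call_model` is a placeholder for whatever model API you actually run.

```python
# Minimal agentic loop: plan, execute, self-check, retry, collect.
def call_model(prompt: str) -> str:
    raise NotImplementedError("wire up your model provider here")  # placeholder

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    plan = call_model(f"Break this goal into numbered steps: {goal}").splitlines()
    results: list[str] = []
    for step in plan[:max_steps]:
        output = call_model(f"Execute this step: {step}")
        verdict = call_model(f"Does this output satisfy the step? yes/no\n{output}")
        if not verdict.strip().lower().startswith("yes"):
            output = call_model(f"Retry the step; first attempt failed.\n{step}\n{output}")
        results.append(output)
    return results  # in practice, the last step often hands off to another agent
```

Everything strategic in this section follows from that loop running without a human in the middle.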

Is AI replacing executives?

Not yet, and probably not in the way the headlines suggest. What AI is replacing is the cognitive scaffolding that made traditional executive layers necessary: aggregating information, translating between teams, gatekeeping decisions. The executives who survive will be the ones who have genuine judgment, not just process management skills. The ones who are primarily information brokers are already in trouble, whether they know it or not.

How should boards think about AI?

As a strategic force, not a feature or a procurement decision. The board’s job: ask the right questions, not master the technology in depth. What decisions are we delegating to AI systems? Who is accountable when those systems produce wrong or harmful outputs? What do our AI dependencies look like if the geopolitical environment shifts? What institutional knowledge are we at risk of losing in our restructuring? Most boards are not asking these questions. The ones that are, are already ahead.

What is the difference between AI strategy and AI deployment?

Deployment is installing the tool. Strategy is asking what the tool changes about how you create value, where power sits in your organization, what risks you are accepting, and what you are giving up. Most companies are excellent at deployment and terrible at strategy. A company that deploys a hundred AI tools without changing anything structural has spent a lot of money to automate the status quo. Strategy starts with the harder questions about what should change.

How do you stress test an AI plan?

Run the Asimov audit. For every AI system you deploy, ask: who could this harm, and how? Whose instructions does it follow when there is a conflict? Who captures the value it creates? Who absorbs the costs when it fails? Then run adversarial scenarios: what happens when the model hallucinates on a customer-facing decision? What happens when an AI agent takes an action that is technically correct but organizationally catastrophic? If your team cannot walk through these scenarios in an afternoon, you have not built a plan. You have built a press release.

What is the Red Monkey?

The Red Monkey is my term for the contrarian, the awkward genius, the person in your organization who builds strange things, asks uncomfortable questions, and occasionally saves the company’s decade. When AI compresses routine cognitive work, companies instinctively reach for conformity: they hire smooth operators and let the friction-generators go. This is the wrong move. The Red Monkey is the last thing you can afford to lose in a world where intelligent machines handle the predictable work. Save the Red Monkey. It will save you.

Why is Asimov relevant to AI governance?

Isaac Asimov wrote his Three Laws of Robotics in 1942. They are still more rigorous than most corporate AI ethics frameworks published in 2024. The laws force you to think about harm, obedience hierarchies, and self-preservation in that order of priority. When I apply them as a stress test to current AI systems, the gaps become visible fast: these systems are optimized for task completion, not harm avoidance. That is a governance design choice, made mostly by a small group of people in San Francisco. Every board should at least be aware they have implicitly accepted it.

How fast is AI actually moving in 2026?

Faster than your planning cycle. Models are improving on a quarterly cadence, deployments are happening faster than governance frameworks can absorb them, and the geopolitical dimension (who controls compute, where data centers are built, what regulations apply to which systems) is adding new variables every few months. Your five-year AI roadmap is already out of date. Build for adaptability, not for a fixed destination.

Should CEOs delegate AI strategy?

No. They should delegate AI execution. The strategic questions about what AI changes in how your organization creates value, where accountability sits, what risks you accept, and what kind of organization you want to be on the other side of this shift, these belong at the top. A Chief AI Officer is useful for execution and governance. The CEO who outsources the strategic thinking to that role has made a category error. This is not an IT question.

What does AI mean for organizational trust?

AI is accelerating a trust collapse that was already underway. Edelman’s 2026 Trust Barometer shows people retreating into smaller, safer circles, trusting less outside their tribe. AI-generated content is contributing to that by flooding every channel with plausible-sounding noise that nobody is accountable for. The organizations that build durable trust will do it through operational design: consistent behavior, local roots, real accountability, and voices that sound like actual humans. The ones that automate their communications into slop will find their trust balance drained faster than any efficiency gain justifies.

What is the Stargate project and why should boards care?

Stargate is a $500 billion commitment to build AI compute infrastructure in the United States: data centers, power generation, the physical substrate of AI capability. Announced January 2025, backed by SoftBank’s Masayoshi Son, OpenAI’s Sam Altman, and Oracle’s Larry Ellison. Boards should care because it signals that AI infrastructure is now a geopolitical asset, not a vendor service. The companies and countries that control compute will have a structural advantage in every industry that depends on AI. That is most industries.

How do I know if my organization’s AI strategy is real?

Ask your leadership team three questions. One: can you describe what your AI systems do when something goes wrong, in plain sentences, without consulting the vendor? Two: can you name the person who is accountable if an AI-assisted decision causes harm to a customer or employee? Three: has anything in your core operating model changed because of AI, or have you only added tools on top of existing structure? If the answers are vague, incomplete, or embarrassed, you have a deployment program dressed as a strategy. Start over with the harder questions.

A closing thought

I have been watching technology waves since 1999, from a newsroom where we covered the dot-com collapse on dial-up connections and pretended we knew what came next. I have been to SXSW more than twenty times, including the year they nearly cancelled it over COVID and the year the AI sessions finally outnumbered the music panels.

Every wave has the same shape at the front: a handful of people who understand what is happening, a much larger group who understand the vocabulary but not the substance, and a board layer that is waiting for a consultant to tell them what to think.

The ones who get it early do not have better information.

They have better questions.

Read Asimov before you ship another bot. Save the Red Monkey. And stop waiting for the wave to arrive. It already has.