
Last week, while half the internet was still arguing about whether GPT got dumber, OpenAI quietly launched a company most people didn’t notice. It is called the OpenAI Deployment Company, DeployCo to its friends (every new entity needs a nickname now, apparently), and it landed like a good-humored, well-fed baby in this cruel world with four billion dollars in the trunk and nineteen partners on the cap table.

I admit: it sounds boring. On first contact, it absolutely is. It has the emotional voltage of a procurement category and the poetic charm of a rusty loading dock. The full name of the beast, the OpenAI Deployment Company, does not help either. But from the safe and mildly over-caffeinated heights of Devriendt Towers, it deserves a very close look. The facts are fascinating: four billion, nineteen partners. A new entity, majority-owned and controlled by OpenAI, whose entire job is to walk into your office and rewire how your company uses AI, from within. You did not see DeployCo trending, did you? There are no demo videos, no flashy launch keynote, no Sam Altman tweet thread to argue about. That alone should make you nervous.

A silent killer announcement

DeployCo is a services company, and OpenAI is its majority owner. The headline backers are TPG as lead partner, with Advent, Bain Capital and Brookfield as co-lead founding partners. Around them sits the broader ring of nineteen firms, including names such as Goldman Sachs, McKinsey and Capgemini. The committed capital is north of four billion dollars at launch, with reporting pointing to a ten-billion-dollar pre-money valuation. Not too shabby for something that sounds like a SharePoint folder.

To staff it on day one, OpenAI agreed to acquire Tomoro, a London-based AI consulting boutique founded in 2023 as an explicit OpenAI alliance partner. Tomoro brings roughly 150 engineers and deployment specialists, plus a client list that includes Mattel, Red Bull, Tesco and Virgin Atlantic. So DeployCo opens its doors with a bench, a market, and the kind of client credibility most new companies spend years trying to fake. The job description is simple. OpenAI calls them “Forward Deployed Engineers”. (I would have thought that at least some of those four billion would have been invested in clever marketing). They embed inside your organization, sit next to your own people, study how the work actually gets done, and then build systems that fit into your workflows instead of floating on top of them. The perfect textbook strategy of a Trojan horse, indeed. Homer and Odysseus would be proud, possibly also slightly jealous.

Denise Dresser, OpenAI’s Chief Revenue Officer, framed it oh-so-carefully: “AI is becoming capable of doing increasingly meaningful work inside organizations. The challenge now is helping companies integrate these systems into the infrastructure and workflows that power their businesses.” DeployCo, she says, exists to “turn AI capability into real operational impact.”

That is the polite way of saying the demos are over, and the money is now in the plumbing. OpenAI’s plumbers arrive dressed as friendly consultants, smiling politely while asking for access to the pipes inside your precious safety walls. Which raises the small, ancient, farmyard question: why invite the wolf into the sheepfold?

Why OpenAI is doing this

Two reasons, and neither of them is “to help you.” The first is competitive. Anthropic has been eating OpenAI’s lunch in the enterprise segment for the last eighteen months. Claude is the model coders reach for. Claude is the model big banks and consultancies have quietly standardized on. Earlier this year, OpenAI narrowed its focus to coding tools and enterprise customers, and Fidji Simo, OpenAI’s CEO of applications, told staff at an all-hands meeting that Anthropic’s gains should be “a wake-up call.” Her line was sharp: “We cannot miss this moment because we are distracted by side quests.”

DeployCo is the answer to that wake-up call. If you cannot beat the rival on raw model quality, you compete on getting deeper inside the customer. You stop being a vendor and start being a tenant in the operating model.

The second reason is structural. AI does not deploy itself. Anyone who has actually shipped a model into a real enterprise knows the bottleneck is never the model. The bottleneck is the data that is fragmented and poorly structured, the legacy system written in 2003 by a contractor who has long retired to Crete, the procurement office, the works council, the auditor (always the auditor), the country manager who refuses to give up his spreadsheet. That is the layer where most AI projects die a quiet death somewhere between Phase 2 and Phase Forgotten.

Aaron Levie, CEO of Box (the enterprise cloud content-management company), put it this way: “That’s an insane amount of technical and domain-specific process work to be done to make this all happen. Huge opportunity for new service providers.” Translation: the people who own the integration layer own the customer.

OpenAI just bought a deep stake in that layer. A controlling one.

The shape of the problem

Look at it closely and you will see it too: this is where it gets uncomfortable, fast.

The OpenAI Foundation now holds a 26 percent equity stake in OpenAI Group PBC, reportedly worth around $130 billion. The foundation and the for-profit company also share much of the same board architecture. The charitable guardian and the commercial machine are not separated by a firewall. At best, there is a bead curtain.

AI watchdogs have been waving this red flag for months. Nick Moës of The Future Society put it bluntly: “The structure of the foundation remains a problem, in particular its de facto control by the for-profit entity interests.”

Add DeployCo to that diagram, and the picture suddenly gets rather cozy.

You have one corporate organism that trains the frontier models, sells access to those models through ChatGPT and the API, and now wants to embed Forward Deployed Engineers inside your company to decide where and how AI gets used. Around it sits a familiar cast of private equity firms and consulting giants, all fluent in the ancient corporate dialect of transformation decks, integration programs, change management, AI readiness scans and other invoices with adjectives.

Interesting, right? The model maker is no longer just making the model. It is moving downstream into implementation. The product vendor is no longer waiting outside procurement with a badge and a nervous sales director. It is entering the operating model. The consultant is no longer plausibly (or even remotely) neutral. The systems integrator is no longer merely connecting tools. The governance layer is no longer a distant charitable halo floating above the machine.

Everything starts to bend toward the same loop. Call it an ecosystem if you want. Ecosystem sounds natural, open, fertile, full of biodiversity and happy frogs. This looks more like a closed circuit.

OpenAI builds the model. OpenAI sells the model. OpenAI helps decide where the model enters the client. OpenAI’s deployment company embeds the people who make that happen. Its partners help sell the strategy that makes the deployment feel inevitable. And the nonprofit structure that is supposed to keep the mission on course sits awkwardly above it all, holding enormous economic exposure to the success of the very machine it is meant to supervise.

That is not necessarily corruption. Corruption is too cinematic. Brown envelopes, villains, doors closing softly in expensive restaurants. This is duller, cleaner, more defensible, and therefore more dangerous. It is incentive gravity. And gravity does not need to be evil to pull everything toward the center.

[Image: black-and-white portrait of a figure in a fedora and round sunglasses, suggesting the embedded insider]

Why it is dangerous

I have spent enough time inside large companies to know how this plays out. A McKinsey partner walks into your boardroom with a deck. The deck says you need to “embed AI into your operating model” (it always says that). The recommended path forward is a phased program. Phase one: a strategic AI assessment. Phase two: a pilot. Phase three: deployment at scale. Each phase is staffed by people whose firm has a financial stake in DeployCo, who will deploy a model from a company whose stake DeployCo’s parent owns (the same parent that paid for the deck). Your procurement team will not see this on a single slide. They will see four different invoices from four different brands and assume that means four different opinions.

Four different invoices. One commercial interest. Yep.

The deployment decisions, where AI goes, what work it absorbs, which jobs become “optimization candidates”, which suppliers get cut, which countries get hollowed out, are not neutral technical choices. They are political and economic ones. We used to at least pretend those decisions belonged to the customer, with advisors in advisory roles. DeployCo collapses that distance. The advisor and the vendor are now in the same room with the same financial incentive, sitting next to your people, watching how the work gets done, “studying” your workflows.

You know what that is in any other industry. It is a conflict of interest so obvious it would not survive a junior compliance officer. In this industry, it gets a press release.

The energy and capacity catch

There is one more piece, and it is the one nobody wants to put on a slide. AI deployment at this scale runs on compute, and compute runs on electricity and on memory chips that do not exist yet. The International Energy Agency already projects data-center electricity consumption to roughly double from 485 TWh in 2025 to 950 TWh in 2030. High-bandwidth memory production is sold out for years. ASML cannot make enough EUV machines. The grid in half of Europe cannot keep up with what is already announced, never mind what gets sold once a thousand Forward Deployed Engineers start writing deployment proposals inside a thousand client offices.
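A quick back-of-the-envelope check of what that IEA projection implies (a sketch, using only the two figures quoted above):

```python
# Implied annual growth rate of data-center electricity demand,
# from the IEA projection quoted above: 485 TWh (2025) to 950 TWh (2030).
start_twh, end_twh = 485, 950
years = 2030 - 2025

# Compound annual growth rate over the five-year window.
cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # roughly 14% per year, compounding
```

Fourteen percent per year, compounding, for an industry that already strains grids and chip supply. That is the demand curve a thousand embedded engineers will be paid to steepen.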

DeployCo is also a demand engine. Every embedded engineer is a person whose career depends on finding more places to put the model. The structure does not reward restraint. It rewards expansion.

This is fine for OpenAI and its nineteen partners. It is less fine for the planet and the procurement officer.

What I would do if I were on your board

I would ask three questions, out loud, in the room, before signing anything.

One: when this consultant tells me to use this AI model from this vendor, who am I actually paying, and who profits from the recommendation? Get the cap-table answer on paper. If your advisor is also a DeployCo partner, you are not getting advice, you are getting a sales call wearing a tie.

Two: who owns the workflow knowledge that a Forward Deployed Engineer extracts during deployment? Where does the deployment IP live? OpenAI says these engineers “study how the work gets done.” Fine. In whose database does that study live afterwards? If the answer involves any URL that ends in openai.com, your competitive moat has just been logged into someone else’s training set.

Three: what is the exit cost? If DeployCo embeds, scales, refactors and rewires for two years, and then doubles the price, what is your switching cost on day 731? If you cannot answer in dollars, you do not have a vendor relationship. You have a hostage situation dressed up as partnership… a bit like most SaaS models you run now.
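To make question three concrete, here is a toy switching-cost model. Every number below is invented for illustration; the point is the shape of the calculation, not the figures:

```python
# Toy switching-cost model for "day 731". All numbers are hypothetical,
# chosen only to illustrate the arithmetic a board should demand.
annual_fee = 2_000_000       # current DeployCo-style deployment fee per year
price_hike = 2.0             # the vendor doubles the price after year two
migration_cost = 3_000_000   # re-integration, retraining, downtime if you leave
horizon_years = 3            # planning horizon after the price hike

# Option A: stay and pay the doubled fee.
stay = annual_fee * price_hike * horizon_years

# Option B: pay the one-off migration cost and a rival at the old price.
leave = migration_cost + annual_fee * horizon_years

print(f"Stay: ${stay:,}   Leave: ${leave:,}")
```

If you cannot fill in those four variables with real contract numbers before signing, you are negotiating the lease after moving in.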

The longer story

I have written before that the chip war is the real war underneath the AI war. The Trump-Xi handshake was about Taiwan, rare earths, memory, and capacity. DeployCo is the same fight, one layer up. The capacity layer decides what models can exist. The deployment layer decides where those models go and whose work they replace. Whoever controls both layers controls the entire transmission system of this technology, from the silicon to the spreadsheet.

OpenAI just made a credible bid for both.

The headlines will keep being about model benchmarks, GPT-this and Claude-that. The actual game has moved. The interesting question is no longer which model is smartest. The interesting question is who owns the engineer sitting two desks over from your CFO, taking notes.

That engineer used to work for you. Soon, they will work for someone whose business card has four logos on the back and one boss. Think about that, the next time someone tells you AI is “just a tool.”

It is a tenant. And the lease just got signed without you reading it.

Danny Devriendt: Founder, Heliade. Keynote speaker. Technologist, futurist. Also Managing Director at OmnicomMedia SpecOps and CEO at The Eye of Horus. Based between Aalter and Trouville-la-Haule. More about Danny →
