Happy Saturday, Power Shifters. I had plans for a weekend of craft beer tastings in Yakima Valley. Instead, I got hit with the worst flu I’ve had in a decade—hallucinations and all. So if what follows makes no sense, I blame the fever.
It’s barely been a week, yet it feels like an eternity since DeepSeek instantly climbed the App Store charts and sent several AI and energy stocks plummeting. One meme sums it up:
It’s all blown up
A week ago, we had a picture of what the future economic structure would be for AI in 2025. Today, that structure has been blown up.
Sure, stocks have started to stabilize as investors took a deep breath to assess the real damage, but there was reason behind that initial selloff: DeepSeek’s R1 “reasoning” model proved to be on par with OpenAI’s o1 in performance while maintaining a significantly lower cost structure.
It’s a killer because it pioneered techniques in “sparsity” and memory compression, allowing for more efficient model training and operation. If we take DeepSeek at its word, it developed its R1 model for less than $6 million, a fraction of what other models cost to develop. DeepSeek is passing those efficiency gains on to users, slashing API costs—up to 30x cheaper than OpenAI.
While that $6 million number is in dispute, the big takeaway is not: DeepSeek has done what Chinese tech companies do best: copy, paste, improve, and ultimately commoditize. People are using this model and loving it. It’s out there, and the genie cannot be put back in the bottle.
What no one had on their 2025 bingo card was that a Chinese AI company would pull this off so quickly. It shouldn’t surprise us, though. You’ll recall last year I covered similar themes when sharing highlights from Kai-Fu Lee’s New York Times bestselling book, AI Superpowers: China, Silicon Valley, and the New World Order.
Here’s Kai-Fu Lee:
This brings us to the second major transition, from the age of expertise to the age of data. Today, successful AI algorithms need three things: big data, computing power, and the work of strong - but not necessarily elite - AI algorithm engineers. Bringing the power of deep learning to bear on new problems requires all three, but in this age of implementation, data is the core. That’s because once computing power and engineering talent reach a certain threshold, the quantity of data becomes decisive in determining the overall power and accuracy of an algorithm.
And that’s what DeepSeek has just done: pushed computing power to that “certain threshold.” In other words, it made compute cheap. Very cheap.
The response by OpenAI, Meta, Google, and Anthropic matters now. What’s their next play? Lower costs? Deploy more sophisticated models? Probably both. And then DeepSeek will copy, paste, and improve again, getting us all closer to that “certain threshold” Kai-Fu Lee writes about. It looks like a race to the bottom. No wonder Meta set up four war rooms to respond to the DeepSeek threat:
Of the four war rooms, two teams are focused on uncovering how High-Flyer significantly reduced training and operational costs for DeepSeek. Their goal is to apply similar strategies to improve Llama. The other two teams are investigating the data used to train the model and exploring potential modifications to Llama based on its architecture.
But here’s the rub they’ll face: DeepSeek is leaning into a semi-open-source model. You can peek under the hood. That comes with a level of transparency we’ve not seen in most models to date. Casey Newton covered this point earlier in the week:
Unlike other reasoning models before it, DeepSeek responds to your queries by telling you what it thinks you want, and what it thinks it should do in response. It’s as if you had an assistant who, before completing the task you requested, first tells you their complete understanding of your desires and their plan for fulfilling them. The technical term for what r1 is doing is exposing its chain of thought, and it might prove to be as influential on other companies as anything else that DeepSeek has done.
This is just the start. By making this model partially open-source, anyone now has the ability to build a derivative model. That’s a big deal. Up until now, you had to rent a model, either by using one directly or by building an application connected to an API. In the last week, hundreds of derivative models have been created, and we’re just getting started. It’s a sharp contrast with closed models from OpenAI, Google, and others, which guard their technology as trade secrets.
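That exposed chain of thought shows up right in the API response. Here’s a minimal sketch, assuming DeepSeek’s OpenAI-compatible chat API, where the reasoning model returns its thinking in a separate `reasoning_content` field alongside the final `content`. The response below is mocked for illustration (the text is invented), so no API key or network call is involved:

```python
def split_reasoning(response: dict) -> tuple[str, str]:
    """Separate the exposed chain of thought from the final answer."""
    message = response["choices"][0]["message"]
    # `reasoning_content` holds the model's plan; `content` holds the answer.
    return message.get("reasoning_content", ""), message["content"]

# Mocked response in the shape DeepSeek's reasoning endpoint returns;
# the field names follow its OpenAI-compatible format, the text is made up.
mock_response = {
    "choices": [{
        "message": {
            "reasoning_content": "The user wants a one-line summary. "
                                 "I should keep it short and factual.",
            "content": "DeepSeek R1 is a low-cost open-weights reasoning model.",
        }
    }]
}

thinking, answer = split_reasoning(mock_response)
print("MODEL'S PLAN:", thinking)
print("FINAL ANSWER:", answer)
```

The design choice is the point: a closed model hands you only `content`, while R1 hands you both halves, which is exactly what lets developers inspect, audit, and build on its reasoning.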
Yeah, but it’s Chinese…
But let’s be real, this is also an arms race. DeepSeek’s success has sparked a debate over the Biden Administration’s export controls, which were designed to prevent Chinese companies from building leading AI models. Turns out, constraints lead to innovation.
Or, it simply leads to IP theft.
Likely, somewhere in between.
DeepSeek emerged at a time when the consensus was that China lagged significantly behind in AI. The quality of its R1 model suggests otherwise.
Either way, I imagine DeepSeek becomes a darling of the CCP. DeepSeek’s founder, Liang Wenfeng, was invited to a high-profile meeting with Chinese Premier Li Qiang. Does that mean government backing is inevitable? I’m not sure how DeepSeek can navigate the inevitable regulatory limitations in China while simultaneously overcoming global skepticism related to security and data privacy.
The playbook here is predictable:
Launch your product for near-free
Build up a huge user base
Watch the CCP get miffed that one of its companies is giving away powerful AI to everyone
Monetize, sitting on the treasure trove of user data acquired during the initial “free” phase
Of course, I’m purely speculating. Truth is, we don’t know what DeepSeek’s ambitions are. Which is the worrying bit.
Why does it matter?
I’m increasingly firm in my view that what will separate the high performers from those who get left behind is the ability to make good use of these models.
1. AI proliferation is guaranteed.
With people already building derivative models and new apps on the back of DeepSeek, we’re headed toward much more AI proliferation than we’ve experienced to date. Customized agents. Specialized apps. Workflow optimizers. Things that once required a huge investment in people and resources suddenly don’t. And that means we’ll see a lot more AI in everything we do.
2. AI moats aren’t real
I’ve said this before, and I’ll say it again: whether you work in-house, at an association, or at a public/corporate affairs agency, having access to the same large language models as everyone else won’t be a differentiator. What will be the differentiator is your curiosity and willingness to constantly experiment with AI.
Big AI companies have operated under the assumption that proprietary AI models would serve as competitive moats—massive R&D investments would ensure sustained dominance.
If cost compression continues at this pace, does that moat disappear?
What happens when the barrier to entry for developing competitive AI models drops to near zero?
In a world of abundant, cheap, and powerful AI models, differentiation shifts from who owns the model to who makes the best use of it.
The real value moves from the model to execution. Those who integrate AI best—not those who merely have access—will win. You’ll have access to a big bounty of AI models and agents to help you do your work.
There will be no real competitive advantage in using one model over another. It’s what you do with your chosen models that will make the ultimate difference. And that means you need to understand the nuances of each model and its full potential. You’ll need to do this at a time when new models are popping up every few weeks.
This means that simply ‘adopting AI’ isn’t a strategy anymore.
3. Consider making the shift from renter to builder
Until now, most professionals and organizations have been AI consumers—renting models via APIs and adapting their workflows accordingly.
DeepSeek’s semi-open-source approach accelerates the transition to a builder mindset.
With the ability to modify and fine-tune models, organizations can tailor AI to their specific needs.
If you’re only consuming off-the-shelf AI, you’re already behind. The winners will be the ones who build, tweak, and optimize AI for their own workflows.
4. The regulatory landscape is going to get real murky
The U.S. has tried to curb China’s AI advancements through export controls. Clearly, that’s not working.
If Chinese AI models become industry-standard, what does that mean for Western regulatory approaches?
How will governments reconcile security concerns with economic competitiveness?
Businesses relying on Chinese AI models may face regulatory scrutiny. Compliance teams need to stay ahead of shifting policies, especially in sensitive industries.
A Tipping Point?
In 2000, Malcolm Gladwell wrote The Tipping Point, a book about how small, seemingly insignificant changes trigger massive shifts.
DeepSeek might just be one of those tipping points.
It’s not the most advanced AI model. It’s not the most powerful. But it’s cheap, widely available, and open enough that thousands of developers are already building on top of it.
In other words, DeepSeek isn’t an AI revolution—it’s an economic revolution in AI. DeepSeek doesn’t need to be the best. It just needs to be good enough, cheap enough, and fast enough—and that’s what makes it unstoppable.
And that should scare the incumbents far more than any breakthrough in reasoning or efficiency.
Because once AI is cheap enough, easy enough, and ubiquitous enough, it stops being a differentiator and starts being an expectation.
At that point, the winners won’t be the ones who own the best models.
They’ll be the ones who figured out what to do with them first.
So the real question isn’t whether AI will change your industry. That’s settled.
The real question is:
When AI is everywhere, what will you do differently?