3 minute read / Jan 26, 2025 /
The AI Cost Curve Just Collapsed Again
A microwave that writes its own recipes. A smart watch that crafts personalized workout plans. A ticket kiosk that negotiates refunds in natural language. This isn’t science fiction - it’s 2025, & DeepSeek just made it all far more affordable.
The Chinese AI company released two breakthroughs: V3, which slashes training costs by 90+%, & R1, which delivers top-tier performance at 1/40th the cost. But the real innovation? They proved that sometimes simpler is better.
AI models are notorious for their creative relationship with truth. Throughout 2024, researchers threw increasingly complex solutions at this problem.
DeepSeek’s R1 showed that the answer was surprisingly straightforward: just ask the AI to show its work. By narrating their reasoning processes, AI models became dramatically more accurate. Even better, these improvements could be distilled into smaller, cheaper models.1
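To make “show its work” concrete, here’s a minimal sketch of the same question asked two ways: directly, & with a chain-of-thought prompt. The `generate()` function is a placeholder for any LLM completion call, & the prompts are illustrative, not drawn from the R1 paper.

```python
# A minimal sketch of the "show your work" idea: the same question asked two
# ways. generate() is a placeholder for any LLM completion call; the prompts
# are illustrative, not from the R1 paper.

def generate(prompt: str) -> str:
    """Stand-in for a call to any text-generation model; swap in a real provider."""
    return "<model output>"

# Direct prompt: the model must commit to an answer in a single step.
direct = generate("What is 17% of 240? Answer with the number only.")

# Chain-of-thought prompt: the model narrates intermediate steps before
# answering, which in practice makes multi-step reasoning more reliable.
cot = generate(
    "What is 17% of 240? Think step by step, show each intermediate "
    "calculation, then give the final answer on its own line."
)
```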
The net: powerful smaller models with nearly all of the capability of their bigger brothers, the lower latency of small models, & a 25-40x reduction in price - a trend we’ve discussed in our Top Themes in Data in 2025.
What does this mean for Startupland?
- The tech giants won’t stand still. Expect an arms race as large competitors rush to replicate & improve upon these results. This guarantees more innovation & further cost reductions in 2025, creating a broader menu of AI models for startups to choose from.
- Startup margins will surge. As AI performance per dollar skyrockets, startup economics will fundamentally improve. Products become smarter while costs plummet. Following the Jevons Paradox, this cost reduction won’t dampen demand - it’ll explode it. Get ready to see AI everywhere, from your kitchen appliances to your transit system.
- The economics of data centers and energy demand may change fundamentally. Google, Meta, & Microsoft are each spending $60-80B annually on data centers, betting on ever-larger infrastructure needs. But what if training costs drop 95% & the returns from bigger models plateau? This could trigger a massive shift from training to inference workloads, disrupting the entire chip industry. Nvidia fell 12% today on this risk.
- Large models are still essential in developing reasoning models like R1 & their smaller distillations. The large models produce training data for the reasoning models & then serve as a teacher for smaller models in distillation (a minimal sketch of distillation follows this list). I diagrammed the use of models from the R1 paper below. The models are yellow circles.2
- R1 & similar models do something remarkable in the world of AI: they show their work. This isn’t just good UX - it’s potentially game-changing for regulatory compliance. GDPR demands explainable decision-making, & explicit reasoning could satisfy both regulators & enterprise customers who need auditability. Plus, it creates a feedback loop that helps users understand & trust the system’s decisions.
- The elephant in the room: Will U.S. companies deploy Chinese models? With escalating tech restrictions - from GPU export controls to networking equipment bans - superior performance alone might not overcome security concerns. Enterprise & government sectors will likely stick with domestic options, but the consumer market could be more flexible.
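For the curious, here’s a minimal sketch of what distillation looks like in PyTorch: a small student model is trained to match a larger teacher’s output distribution. The model sizes, temperature, & random token data are illustrative assumptions, not DeepSeek’s actual setup.

```python
# Minimal knowledge-distillation sketch in PyTorch: a small "student" learns
# to match a larger "teacher"'s output distribution. Sizes, temperature, &
# data are illustrative, not DeepSeek's actual configuration.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab, d_teacher, d_student = 100, 256, 32

teacher = torch.nn.Sequential(torch.nn.Embedding(vocab, d_teacher),
                              torch.nn.Linear(d_teacher, vocab))
student = torch.nn.Sequential(torch.nn.Embedding(vocab, d_student),
                              torch.nn.Linear(d_student, vocab))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature softens the teacher's distribution

for step in range(100):
    tokens = torch.randint(0, vocab, (8,))  # stand-in for real text tokens
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(tokens) / T, dim=-1)
    student_logp = F.log_softmax(student(tokens) / T, dim=-1)
    # KL divergence pulls the student's predictions toward the teacher's.
    loss = F.kl_div(student_logp, teacher_probs, reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()
```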
What’s clear is that AI’s economics are being rewritten faster than anyone predicted. For startups, this creates both opportunity & urgency. Those who move quickly to harness these more efficient models will gain significant advantages in cost structure & capability.
1 I’m simplifying here. The innovation was a combination of chain-of-thought fine-tuning & reinforcement learning, looped through twice.
2 The R1 paper describes a process starting with a very large (600B+ parameter) model. Create chain-of-thought training data, fine-tune a new model on that reasoning, then apply reinforcement learning. Repeat the process. Take the resulting model & distill it (teach a smaller model to copy it) using Llama3 as the student. The net result is R1 (a very large, fast, efficient reasoning model) & a distilled model (smaller, with 95%+ of the capabilities of the big one).
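To make the footnote’s loop explicit, here’s a pseudocode sketch of that sequence in Python. Every function below is a placeholder for an entire training stage - none of these names are real APIs, & the model names in the final call are hypothetical.

```python
# Pseudocode sketch of the R1 recipe from the footnote. Each function stands
# in for an entire training stage; none of these names are real APIs.

def generate_chain_of_thought_data(model):
    """Use the current model to produce step-by-step reasoning traces."""
    return []  # placeholder

def fine_tune(model, traces):
    """Supervised fine-tuning on the reasoning traces."""
    return model  # placeholder

def reinforcement_learn(model):
    """Reward correct, well-structured reasoning."""
    return model  # placeholder

def distill(teacher, student):
    """Train the smaller student to imitate the teacher (see sketch above)."""
    return student  # placeholder

def build_r1(base_model, small_base):
    model = base_model                          # 600B+ parameter starting point
    for _ in range(2):                          # the loop runs twice
        traces = generate_chain_of_thought_data(model)
        model = fine_tune(model, traces)
        model = reinforcement_learn(model)
    r1 = model                                  # large, efficient reasoner
    distilled = distill(teacher=r1, student=small_base)  # e.g. a Llama-based student
    return r1, distilled

# Hypothetical names, for illustration only.
r1, distilled = build_r1(base_model="v3-base", small_base="llama3-student")
```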