The Bacon & the Skillet: When Does the AI Market Congeal?

The AI market today is bacon in a hot skillet. Everything is sizzling, moving, & changing at an incredible pace. We’re all watching it closely.

Market share is fluid because no one yet knows what AI can do, & the second we think we have grasped it, the models improve. Nvidia's chip performance & the launch of Gemini 3, the biggest gain ever in Google model performance, suggest no simmering ahead.

As long as the underlying models hurtle toward PhD-level performance, people will continue to test. How much better is Gemini 3 at coding? At tool calling? At writing?

Read more

The Scaling Wall Was A Mirage

Two revelations this week have shaken the narrative in AI: Nvidia's earnings & this tweet about Gemini.

Oriol Vinyals tweet about Gemini 3 scaling

The AI industry spent 2025 convinced that pre-training scaling laws had hit a wall. Models weren’t improving just from adding more compute during training.

Then Gemini 3 launched. The model has the same parameter count as Gemini 2.5, one trillion parameters, yet achieved massive performance improvements. It’s the first model to break 1500 Elo on LMArena & beat GPT-5.1 on 19 of 20 benchmarks.

Read more

What 375 AI Builders Actually Ship

70% of production AI teams use open source models. 72.5% connect agents to databases, not chat interfaces. This is what 375 technical builders actually ship - & it looks nothing like Twitter AI.

350 out of 413 teams use open source models

70% of teams use open source models in some capacity. 48% describe their strategy as mostly open. 22% commit to only open. Just 11% stay purely proprietary.

Read more

Teaching Local Models to Call Tools Like Claude

Ten months ago, DeepSeek collapsed AI training costs by 90% using distillation - transferring knowledge from larger models to smaller ones at a fraction of the cost.

Distillation works like a tutor training a student: a large model teaches a smaller one.1 As we’ve shifted from knowledge retrieval to agentic systems, we wondered if there was a parallel technique for tool calling.2

Could a large model teach a smaller one to call the right tools?
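In miniature, the distillation recipe for tool calling amounts to harvesting a teacher's tool choices as training labels for a student. The names here are hypothetical & the teacher is a stand-in rule; a real pipeline would query an actual large model:

```python
# Hypothetical sketch: collect a teacher model's tool choices
# as supervised fine-tuning data for a smaller student model.
def teacher_pick_tool(query: str) -> str:
    """Stand-in for a large model selecting a tool for a query."""
    if "weather" in query:
        return "get_weather"
    return "web_search"

queries = ["weather in Paris?", "latest Nvidia earnings"]

# Each (query, tool) pair becomes one training example for the student.
training_data = [(q, teacher_pick_tool(q)) for q in queries]
```

In practice the teacher emits full tool-call traces, arguments included, & the student is fine-tuned on those traces rather than on a lookup rule.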

Read more

Running Out of AI

By Monday lunch, I had burned through my Claude Code credits. I’d been warned; damn the budget, full prompting ahead.

I typed ultrathink to solve a particularly challenging coding problem, knowing the rainbow colors of the word meant I was playing with digital fire.

When that still couldn’t solve the issue, I summoned Opus, the biggest & most expensive model, to solve it.

Read more

Datadog: As Reliable as Your Golden Retriever

Datadog is becoming a platform company, & its Q3 2025 results underscore how successful this transition is. If nothing else, the consistency around 25% growth for the last 12 quarters exemplifies this point.

Datadog revenue growth chart showing quarterly revenue & year-over-year growth rate

Net dollar retention underpins this growth, combined with accelerating new customer account acquisition. One of the biggest changes in the last five quarters is terrific cross-selling across an increasingly large product suite.

Read more

Are We Being Railroaded by AI?

Just how much are we spending on AI?

Measured as a share of GDP against other massive infrastructure projects, AI is the sixth largest in US history, so far.

Just How Much Are We Spending on AI - Infrastructure spending as % of GDP

World War II dwarfs everything else at 37.8% of GDP. World War I consumed 12.3%. The New Deal peaked at 7.7%. Railroads during the Gilded Age reached 6.0%.

Read more

A 1 in 15,787 Chance Blog Post

I wrote a post titled Congratulations, Robot. You’ve Been Promoted! about OpenAI declaring that its AI coders were no longer junior engineers but mid-level engineers.

The post triggered the largest unsubscription rate in this blog’s history. It was a 4-sigma event.

A Nassim Taleb black swan, this was something that should happen once every 700 years of a blog author’s career.
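The title's 1 in 15,787 is the two-sided tail probability of a 4-sigma event under a normal distribution; a couple of lines confirm the arithmetic:

```python
import math

# P(|Z| > 4) for a standard normal: the two-sided 4-sigma tail probability
p = math.erfc(4 / math.sqrt(2))
odds = 1 / p  # about 1 in 15,787
```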

Clearly, the post struck a nerve.

In a job market for recent grads that is 13% smaller than in recent years, a subtle fear persists that positive developments in AI accuracy & performance accelerate job losses. Stanford’s research found:

Read more

OpenAI's $1 Trillion Infrastructure Spend

OpenAI has committed to spending $1.15 trillion on hardware & cloud infrastructure between 2025 & 2035.1

The spending breaks down across seven major vendors: Broadcom ($350B), Oracle ($300B), Microsoft ($250B), Nvidia ($100B), AMD ($90B), Amazon AWS ($38B), & CoreWeave ($22B).2

Using some assumptions, we can generate a basic spending plan through contract completion.3

Year    MSFT   ORCL   AVGO   NVDA   AMD   AWS   CRWE   Annual Total
2025    $2     $0     $0     $0     $0    $2    $2     $6
2026    $3     $0     $2     $2     $1    $3    $3     $14
2027    $5     $25    $4     $6     $3    $4    $3     $50
2028    $10    $60    $10    $12    $8    $5    $7     $112
2029    $20    $60    $25    $31    $24   $6    $7     $173
2030    $60    $60    $64    $49    $54   $8    $0     $295
TOTAL   $250   $300   $350   $100   $90   $38   $22    $1,150

All figures in $B.

Across these vendors, estimated annual compute spending grows from $6B in 2025 to $173B in 2029, reaching $295B in 2030. We built a constrained allocation model with the boundary conditions defined in the appendix below, but this is just a guess. The implied growth rates are 124% (2027→2028), 54% (2028→2029), & 70% (2029→2030). Note that the TOTAL row shows full contract commitments through 2035, which is why the columns exceed their 2025–2030 sums.
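The growth rates follow directly from the annual totals in the table; a few lines recompute them (figures in $B):

```python
# Annual spending totals from the table above, in $B.
spend = {2025: 6, 2026: 14, 2027: 50, 2028: 112, 2029: 173, 2030: 295}

# Year-over-year growth rates, in percent.
growth = {y: (spend[y] / spend[y - 1] - 1) * 100 for y in list(spend)[1:]}
# growth[2028] ≈ 124%, growth[2029] ≈ 54%, growth[2030] ≈ 70%
```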

Read more

Small Data Becomes Big Data

I sleep better knowing my agents work through the night. Less work for me in the morning.

My podcast processor transcribes & analyzes conversations. I started on my laptop, needed a little database to collect podcast data & metadata, & booted up a DuckDB instance.

But then the data started to grow, & I wanted the podcast processor to run by itself. I changed two little letters, & the database moved to the cloud:
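The two little letters are plausibly md, the prefix MotherDuck uses to route a DuckDB connection to its cloud service. A sketch, assuming a hypothetical database named podcasts & a MotherDuck token in the environment:

```python
# Local, file-backed DuckDB on the laptop:
local_conn_string = "podcasts.db"

# Cloud-hosted via MotherDuck: only the connection string changes.
# (Assumes a MOTHERDUCK_TOKEN environment variable for auth.)
cloud_conn_string = "md:podcasts"

# import duckdb
# con = duckdb.connect(cloud_conn_string)
# con.execute("SELECT count(*) FROM episodes")
```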

Read more