It’s easy to get swept up in the moment when working with AI every day. There’s an underlying feeling that something tectonic is happening — a kind of buzz that doesn’t quite feel like a trend, or even a “new wave.” It’s bigger than that. For those of us at Dellecod Software, who’ve been building infrastructure and applications across tech cycles, the pace and scale of AI’s evolution feel different. More than hype, we’re seeing early indicators of a foundational platform shift — the kind that only comes around every couple of decades.
Some have started calling this the most important tech transformation since electricity. That’s a lofty comparison, but not entirely unrealistic. What’s notable is that we’ve reached a moment where consumer adoption, enterprise appetite, infrastructure investment, nation-state competition, and developer momentum are all hitting at once.
And yet, we’re still early. We’re in inning one or two.
We’re not just watching history — we’re participating in its scaffolding.
The Business Models Are Emerging — and They Work
So much of the historical challenge with frontier tech has been monetization. Not enough customer demand. Too much friction. Pilots that don’t convert. AI, despite its complexity, is cutting through all of that faster than most expected. The two clearest business models today are:
1. Consumer AI, delivering value directly to individuals — with subscription pricing that, at premium tiers, climbs to $200–$300/month. That’s real traction.
2. Enterprise AI, where ROI is measured in productivity gains: automated support, faster marketing funnels, streamlined ops. Even with high compute costs, the value being delivered is tangible.
We’re already seeing startup pricing experiments that stretch far beyond traditional SaaS norms. At the same time, infrastructure providers are adapting, with usage-based “token by the drink” pricing that reflects AI’s bursty, variable consumption patterns.
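To make the contrast concrete, here’s a minimal back-of-the-envelope sketch in Python comparing a flat seat price to metered token pricing. Every number in it is invented for illustration; none reflects an actual provider’s rates.

```python
# Illustrative only: the rates and volumes below are hypothetical assumptions,
# not quotes from any real provider.

FLAT_SEAT_PER_MONTH = 30.00          # assumed flat, SaaS-style seat price (USD)
PRICE_PER_MILLION_TOKENS = 2.50      # assumed blended per-token rate (USD)

def metered_cost(tokens_per_month: int) -> float:
    """Monthly cost under usage-based, 'token by the drink' pricing."""
    return tokens_per_month / 1_000_000 * PRICE_PER_MILLION_TOKENS

# Bursty consumption means the same user can be cheap one month and
# expensive the next, which a flat seat price can't capture.
for tokens in (200_000, 5_000_000, 40_000_000):
    cost = metered_cost(tokens)
    winner = "metered" if cost < FLAT_SEAT_PER_MONTH else "flat seat"
    print(f"{tokens:>12,} tokens/month -> ${cost:8.2f} metered ({winner} is cheaper)")
```

The crossover point is the whole appeal: light users pay pennies, heavy users pay in proportion to what they consume, and the provider’s revenue tracks its compute bill.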
It’s worth noting: this isn’t just about companies building end-user interfaces. It’s about the plumbing too. From GPUs to orchestration layers, the AI stack is spawning a new class of businesses that may live mostly under the hood, but power everything above it.
Costs Are Falling Fast — Which Means Demand Is Surging
The most underappreciated story in AI right now isn't the models themselves. It's the economics.
The cost to train and use frontier models is falling rapidly — often faster than Moore’s Law would predict — thanks to smarter architectures, better chips, and longer infrastructure amortization schedules. Companies like AWS have found ways to extend the useful life of their GPU fleets, pushing efficiency to the forefront. At the same time, model architectures are becoming slimmer and smarter, delivering comparable performance at lower compute cost.
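A quick worked example shows why “faster than Moore’s Law” is plausible arithmetic rather than hand-waving. Moore’s Law corresponds to costs halving roughly every 24 months; the annual decline factor below is purely an assumption, there to be swapped for your own estimate.

```python
import math

ANNUAL_DECLINE_FACTOR = 10.0     # assumed: price divides by 10 each year (hypothetical)
MOORES_LAW_HALVING_MONTHS = 24   # classic ~two-year halving cadence for cost

# If price shrinks by a factor k per year, it halves every 12 * ln(2) / ln(k) months.
halving_months = 12 * math.log(2) / math.log(ANNUAL_DECLINE_FACTOR)

print(f"At a {ANNUAL_DECLINE_FACTOR:.0f}x annual decline, cost halves every "
      f"{halving_months:.1f} months, versus ~{MOORES_LAW_HALVING_MONTHS} months "
      f"under Moore's Law.")
```

At that assumed rate, costs halve roughly every 3.6 months — several times faster than the Moore’s Law cadence.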
Demand, it turns out, is intensely price-elastic. As costs fall, use cases flood in. People and businesses who couldn’t justify AI before are now kicking off experiments. For developers, it’s becoming viable to tinker with models without breaking the bank. That’s not just good for cost curves — that’s how ecosystems get built.
The Rise of the Small and the Open
One of the patterns we're watching closely is how small, open-source models are achieving near parity with large, closed models — and doing it shockingly fast. Sometimes lagging by months, not years.
This dynamic is reshaping how we think about innovation. Once a capability is proven by a leading lab or tech company, the blueprint often leaks out to the broader research community. Then the work of replication — and occasionally refinement — takes hold across the open-source ecosystem. The result: a flatter innovation curve, where capability is more widely distributed.
That’s playing out in surprising places. Just months after top Western labs released state-of-the-art models, Chinese companies like Moonshot, DeepSeek, and Alibaba’s Qwen have dropped competitive models of their own. Some even run efficiently on consumer laptops, making high-function AI radically more accessible.
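For a feel of how accessible this has become, here is a minimal sketch of running a small open-weights model locally with the Hugging Face transformers library. The specific checkpoint is just one plausible example of a laptop-sized model, an assumption rather than a recommendation; substitute whatever fits your hardware.

```python
# Minimal local-inference sketch using Hugging Face transformers.
# Assumes a recent transformers version with chat-format pipeline support.
# Install with: pip install transformers torch
from transformers import pipeline

# The checkpoint name is one example of a small open-weights instruct model;
# at ~0.5B parameters it runs on a typical laptop CPU.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

messages = [{"role": "user", "content": "In two sentences, why do small open models matter?"}]
result = generator(messages, max_new_tokens=128)

# The pipeline returns the full conversation; the last message is the reply.
print(result[0]["generated_text"][-1]["content"])
```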
We’re already seeing startups opting to integrate these open models — not just for cost, but for control, flexibility, and IP reasons. The open vs. closed debate isn’t settled, but it’s clearly not winner-takes-all. At least not yet.
Global Stakes, Policy Tensions
AI isn’t just a software story — it’s geopolitical now.
China has emerged as a serious and capable force in AI. The move toward open source there isn’t ideological; it’s practical. Open commoditizes fast, spreads fast, and can undercut entire industries on price at scale. That’s something Western policymakers are starting to worry about. There’s a tension here: how does the US remain innovative without overregulating? How do we cultivate safety without neutering experimentation?
One lesson from recent legislation — like California’s SB 1047, which cleared the legislature before being vetoed by the governor — is that regulation, particularly at the state level, is moving faster than many expected. Over 1,200 AI-related bills are now under consideration across the US. We don’t just need smart regulation — we need coordinated regulation.
For developers and builders, this means tighter attention to compliance, risk, and disclosure. But it also means an opportunity. The companies that figure out how to build safely and explain clearly will likely pull ahead. The frontier loves clarity.
Consumer Behavior ≠ Consumer Sentiment
If you go by polling data, the public seems anxious about AI. But if you go by usage data, they’re all in. AI tools are now regular companions for millions of people, used for writing assistance, medical triage, tutoring, therapy, and more.
This discrepancy between what people say and what people do is important. It tells us that beneath the surface concerns, there’s a real hunger for relief — for insight, speed, support. When something works, people adopt it. Quickly and deeply.
For builders, that’s a reminder to be honest and human in our messaging. Fear isn’t irrational. Nor is it terminal. It abates with trust, transparency, and usefulness. Adoption still follows value.
Looking Ahead
We don’t know what dominant model architectures will look like in five years. We don’t know which company will drive the next inflection point. But we do feel confident about the broad trajectory:
- Infrastructure will continue catching up to demand.
- Chips will commoditize and specialize in parallel.
- Developer velocity will accelerate, not decelerate.
- Open models will get better. Closed ones will, too.
- AI will spread — not just on servers, but onto edge devices, internal networks, and vertical stacks.
In other words, software will get weirder, faster, more anticipatory. And for all the complexities ahead — regulatory, ethical, existential — it's hard not to feel a quiet optimism. Not because the risks are overblown, but because the possibilities are extraordinary.
We’ve lived through tech shifts before. This one’s different in scale, but similar in pattern: first fear, then abundance, then normalization. Eventually the disruptive becomes essential.
At Dellecod, we’re still experimenting. Still wrong sometimes. Still learning every day from the communities around us, from researchers half our age, and from the teams dreaming up use cases we never imagined.
In moments like this, building responsibly — and listening carefully — feels like the only real strategy. And one worth betting on.