Dellecod Software

The Future Learns as We Do

There’s something quietly captivating about listening to someone like Sam Altman talk about AI. It’s not the usual tech optimism or polished corporate speak. It’s a mix of awe, calculation, and long-range vision—an awareness that we’re traveling a road we’ve never walked before, with no clear signposts and plenty of fog.

What struck us most in the recent conversation with Altman was how the path of deep learning continues to exceed expectations. Despite being a single family of techniques, deep learning keeps delivering new capabilities we hadn’t predicted—from language reasoning to video generation—simply by scaling up and refining the same core principles. Altman called it “the miracle that keeps on giving,” and that’s not an exaggeration. The past few years have shown again and again that the next leap often comes not from reinvention, but from a deeper commitment to scaling what’s already working.

This idea provides an invisible backbone to OpenAI’s strategy. Their structure—combining research, consumer applications, and massive infrastructure—isn’t just a business convenience. It’s a deliberate move to build the conditions for continual discovery. In a world where the rate of surprise is high, keeping your feedback loops tight between theory, practice, and deployment becomes the real competitive edge.

One facet of the interview that resonated with us deeply is the notion that AI and society have to co-evolve. We often think of technological capabilities in isolation. But tools like ChatGPT or Sora aren’t just outputs of progress—they’re catalysts for human adaptation. Sora, for example, may seem removed from the quest for AGI, but its role in helping people understand what synthetic media can do is already reshaping how we imagine the future. Showing people these capabilities early gives culture a chance to catch up, adjust, and respond. That’s not just PR—it’s strategic empathy at scale.

Altman’s comment that “AI will pass AGI milestones gradually — not with a big bang” is telling. It cuts against the prevailing vision of a single moment of emergence. Instead, we’re more likely to see a gradient of change—an accumulation of competence. It’s why OpenAI is placing serious bets on next-generation interfaces: real-time context-aware agents, video-first outputs, and eventually AI that understands more of our world than we do. The move beyond typing in a chat box is already starting, and where that ends is still open.

One of the more tantalizing ideas centers around "AI scientists." Altman expressed personal excitement about the ability of future models to contribute to actual scientific research. He’s already seen hints in GPT-5 of something novel: models making original contributions, not just regurgitating known patterns. If this holds true, it could imply an acceleration of discovery that human-only research couldn’t achieve alone. Within a couple of years, AI might not just assist with scientific work—it could expand the frontiers of science itself.

This points to what feels like a deeper story beneath all the headlines: AI developing a model of the world. If Sora helps AIs "see," and language models help them "reason," then the collective training of these systems starts to look like the construction of a layered, dynamic understanding of reality. In this sense, artificial general intelligence isn’t some theoretical finish line, but a process of compositional learning—one modality at a time.

None of this happens without scale. OpenAI’s shift to vertical integration—essentially controlling everything from underlying chips to user-facing applications—echoes historical examples like Apple. But the motivation is different. It’s not about purity of product design. It’s about being able to test hypotheses fast, train and evaluate at scale, and manage scarce compute when choices have to be made. Interestingly, Altman admitted that in moments of constraint, OpenAI will divert compute from its most popular products to support research. That’s a rare stance in a business climate often obsessed with user numbers.

On that note, it’s staggering to realize that ChatGPT has 800 million weekly active users—about 10% of the world population. Few tools in history—not even the smartphone—have scaled that quickly. But OpenAI doesn’t seem especially fixated on user metrics. They are building for something deeper: infrastructure for global cognition.

Inevitably, this leads to questions about governance, policy, and economics. On regulation, Altman took a nuanced view—favoring restraint except at the very high end of model capability. Too much red tape now could stifle innovation, especially in competitive national contexts. On monetization, he was pragmatic. Sora will likely need business models that reflect its high compute cost—per-use pricing, maybe ads—but never in a way that erodes trust. It’s a hard balance: free-flowing, transformative tech that somehow stays aligned with a user community’s values. Time will tell.

As makers ourselves, what lingers from this conversation isn’t just excitement about what these tools can do, but how they invite us to evolve, too. Altman made a passing comment that echoed louder in retrospect: future entrepreneurs shouldn’t chase today’s business templates. Instead, they should build directly into the space that near-free AGI unlocks. That’s not advice to pivot. It’s counsel to stay close to innovation, to keep curiosity sharp, to learn as the tech learns.

That feels like the call of this moment—stay near the edge, don’t flinch, and pay attention to the next unexpected thing that works. As we’ve seen with deep learning, the miracles are still coming.