It’s a strange thing, witnessing the future show up faster than expected—then realizing it’s also going to take longer than we thought.
At Dellecod, we’ve been reflecting on just how quickly language models are advancing. Tasks that once seemed off-limits to automation—reasoning through problems, writing software, navigating interfaces—are being rapidly absorbed by tools that learn at industrial scale. Only a short time ago, it felt like these capabilities were flashes of distant possibility. Today, they’re starting to show up in products we use—products we build with.
There’s an undeniable softness at the center of all this speed: the models learn slowly in new settings, miss context easily, and lack anything close to real common sense. Their brilliance is brittle. Progress still leans heavily on brute force: larger datasets, more fine-tuning, cleverer prompting. It’s easy to forget how far we’ve come, but just as easy to overstate where things are headed.
The debate around AGI illustrates this tension. Is it near? Is it myth? A definition we’ve heard and keep returning to defines it as AI capable of replacing any average remote worker. Not AI as philosopher or inventor—but AI as a diligent, flexible knowledge worker who follows through. Seen in that light, one could argue that we’re surprisingly close. Not there yet, but close enough that it’s time to think seriously about what comes next.
One consequence we can already see: the ladder into the working world is shifting. Entry-level jobs—once the traditional on-ramps for developing skill and judgment—are being automated. LLMs perform these roles with increasing fluency and almost no marginal cost. That’s a potentially massive disruption. Without hands-on experience early in their careers, how will tomorrow’s experts emerge?
On the other hand, there’s something else happening that gives us hope, maybe even excitement. AI is lowering the barriers to entrepreneurship in a way that feels quietly radical. A single founder with a good idea now has access to leverage—through automation, code generation, and increasingly autonomous agents—that used to require a team and funding. You don’t need to be at FAANG or sitting in Silicon Valley to launch something meaningful. The idea of “sovereign individuals”—economic actors operating independently of large institutions—is gaining traction.
This is part of a broader pattern: AI isn’t just helping the big players. For once, it’s not winner-take-all. The Web2 era rewarded companies that captured mass-scale user bases and rode strong network effects. AI, by contrast, might support a more fragmented, fast-turnover environment—one where different personalities and capabilities emerge across tools and products. In this world, more people get to win.
But nothing guarantees that we steer toward the right outcomes. Much of today’s energy is focused on making models more capable. That’s useful, but there’s a risk of complacency—of assuming this path is the only path. Foundational questions still matter. What is intelligence? How does learning work in biological systems? Why do humans adapt so well with so little data?
Unfortunately, the hype around LLMs is pulling talent away from these deeper questions. It’s understandable. Building products feels more immediate. It pays better. But if we over-invest in one paradigm, we may be locking ourselves into a narrow way of thinking about what AI can be. Even now, some of the best models plateau on tasks where humans thrive by default—pulling from varied, lived experience, weighing nuance, inferring from silence.
Where things are headed in the next five to ten years is anyone’s guess. But there’s a sense of inevitability around certain shifts. AI agents that can design and coordinate software projects are advancing fast—Replit’s agent jumped from two minutes of autonomy to over 28 hours in a short span. Tools that remember context across projects, integrate with collaborative workspaces, and act on user intent across code and documents are starting to converge.
In that kind of ecosystem, skill sets will matter less than fluency in navigating systems. For many, “job” will increasingly mean managing AI processes rather than executing routine tasks. We imagine roles that look less like engineers and more like orchestrators—multi-model product designers, AI collaboration leads, iterative prompt engineers.
Education paths will evolve too. Studying computer science still makes sense, especially if you want to guide or shape agents. But we also see growing relevance in neuroscience and philosophy of mind. Questions as simple as “What is understanding?” and “What does learning mean?” now sit at the center of applied technology.
One final tension we think about often: the infrastructure isn’t keeping up. You can build incredibly sophisticated AI systems today, but reliable RL environments, clean training data, and power supply all risk becoming bottlenecks. One day, we might find that the future isn’t held back by inspiration or capability—it’s held back by under-investment in basics like labeling accuracy and electricity generation.
We don’t pretend to have answers. But as a team trying to build things in this unusual moment, we’re learning to live in paradox. To recognize the profound promise and the real limitations. To be astonished by how much can be done with a clever API—and remain grounded about how much remains unsolved.
It feels like the world is being rearranged. Not evenly, not instantly. But the direction is real. AI won’t replace everyone. But it will change everything. And somewhere in that reordering, a new kind of creativity is emerging—quiet, distributed, not waiting for permission.
We're hopeful. Cautiously so.