Dellecod Software

From Fear to Pragmatism in AI Policy

In our quiet moments here at Dellecod, we often find ourselves discussing not just lines of code or model benchmarks, but the broader trajectory of technology — what it could mean for society, policy, and innovation at large. Lately, those conversations have revolved around something that feels like a turning tide in U.S. AI policy: the shift from anxiety to action, from fear toward thoughtful progress.

For a while, there was a heavy, almost suffocating caution around AI. There were public petitions calling to “pause” it, anxious whispers in academic halls, and proposals to hold open-source developers liable if a model was misused. California's SB 1047 was a particularly sobering moment. It proposed codifying fear into law — making developers responsible for vague “mass casualty” events, defined as three or more deaths or overwhelming a rural hospital.

Three deaths. In a country where software powers hospital systems, vehicles, and MRIs, the vagueness of that metric felt less like a safeguard and more like a stop sign for innovation.

And yet, something changed. The fear narrative began to erode — not because the underlying risks vanished, but because new voices began to speak up.

Take the example of DeepSeek Math V2, a publicly released model from a Chinese lab that startled many in Washington. For a long time, the U.S. played mental chess against imagined AI futures, worrying about alignment risks and doomsday scenarios. Meanwhile, other countries quietly got better. Fast. When DeepSeek hit the world stage, it wasn’t just a technical benchmark — it was a wake-up call. The pace of innovation doesn’t wait for philosophical consensus.

What emerged from this shift — and something we’ve welcomed — is a return to pragmatism. No more equating large language models with nuclear weapons or fighter jets. No more attempts to gatekeep open knowledge as if it hadn’t already entered the bloodstream of the internet. Instead, we now see a government attempting to craft an actual plan — one that encourages research, fosters evaluation frameworks, and doesn’t treat software like uranium.

The new U.S. AI Action Plan struck a noticeably different tone. In place of alarmism, it opened with a line about “a new frontier of scientific discovery.” It wasn’t just symbolic. It acknowledged the need for rigor — through evaluations, risk assessment infrastructure, and interpretability research — before regulation. It aimed to fund talent, improve safety tools, and (critically) support open-source innovation.

Open source deserves that support and more. It’s no longer just the province of ideologues or academics. It’s a quiet workhorse — and, increasingly, a strategic asset. Companies adopting sovereign AI models — running open-source AI in sensitive, secure, on-prem environments — aren’t doing so for sentiment. They’re doing it for control, accountability, and competitive edge.
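For readers who want a concrete picture, here is a minimal sketch of what such a sovereign, on-prem deployment can look like, assuming the Hugging Face transformers library and an open-weight model already downloaded into the secure environment (the model directory below is a placeholder):

```python
# Minimal sketch: run an open-weight model entirely from local storage,
# with no calls out to external services.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "./models/open-weights"  # hypothetical path to locally stored weights

# local_files_only=True ensures nothing is fetched over the network,
# which is the point of an on-prem, auditable deployment.
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

prompt = "Summarize our internal data-retention policy in three bullet points."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Nothing leaves the building: the weights are version-pinned files on disk, the runtime makes no outbound calls, and the whole stack can be inspected and audited. That is the control and accountability argument in practice.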

There’s wisdom in recognizing the different layers here. Closed and open-source models aren’t rivals; they fit different markets. Some governments and industries need models they can inspect, audit, and control. Others chase frontier capabilities. One doesn’t replace the other — they coexist, the way public cloud and on-prem infrastructure do.

Of course, alignment and safety are still priorities. But we didn’t stop using electricity because we couldn’t yet explain exactly how it worked. We build, we test, we adjust. We push forward, consciously. We can't let vague existential fears — p(doom) estimates untethered from empirical data — dictate the pace of silicon-scale progress, especially when delays mean real losses in healthcare, science, education, and pandemic readiness.

The trouble with fear is that it narrows vision. But the cost of inaction is often invisible. When AI research slows, we don’t see the cancer diagnosis that arrives a decade too late. We don’t see the student who never gets personalized tutoring. These aren’t made-for-movie threats — they’re opportunities quietly lost at the margins.

What’s encouraging now is that technologists are re-entering the conversation — the ones who stayed quiet through the fear cycle. Engineers, academics, VCs — people who've shipped real systems, debugged model drift at 2 am, or argued with a transformer that forgot simple arithmetic. These people aren’t alarmist. They’re not starry-eyed. They’re practical. And they’re showing up.

As an industry — and as a society — we need that kind of skepticism. The “extraordinary evidence” kind. Because while AI does bring challenges, most can be managed the way we’ve always managed technology: with engineering discipline, testing, regulation when needed, and humility when warranted. Overreach rarely helps. Underreaction isn’t wise either.

One area where we still need more investment is academia. Historically, so many breakthroughs began not in boardrooms or basements, but on campuses. Research culture matters. It was disappointing to see how little was earmarked for this in the Action Plan.

Still, we’re hopeful. The broader tone — open, inquisitive, balanced — is a break from the bunker mentality of the last few years. It signals trust in the broader ecosystem: open-source developers, applied researchers, startup founders — everyone contributing to progress, not waiting for it.

It matters that we show we care about how AI is built. But it also matters that we build. That's how we spot the problems early. That’s how we reduce the risk of misuse. And that’s how we steer toward breakthroughs that actually matter.

So we’ll keep coding. Keep thinking. Keep collaborating with those outside our walls. And we’ll keep adding our voice to the quiet, pragmatic center that believes tech is messy — but worth getting right.