Dellecod Software

AI Transformation Starts Inside the Enterprise

We’ve been talking a lot lately about how AI is reshaping the enterprise, but a recent conversation with Ben Sharifian at Scale AI gave language to what many of us have been sensing: truly transformative AI work doesn’t start with software — it starts with people, solving real problems, inside the mess of enterprise systems. Not from the outside, but from within.

The idea of “forward-deployed” teams isn’t new. Palantir paved the way with its embedded consultants and engineers, showing that working shoulder to shoulder with a customer can yield much more than adoption — it can lead to deep insight, differentiated solutions, and eventually, defensible products. But what’s emerging now is a model where AI, and specifically large foundation models, becomes the medium through which this transformation happens.

At Scale, these forward-deployed teams — engineers, PMs, ML specialists — are essentially boots on the ground inside Fortune 500s and government orgs. Their goal isn’t just implementation. It’s exploration. They help these institutions figure out not only how to use AI, but which problems matter most and which workflows truly constrain them. That kind of discovery work doesn’t show up on a roadmap or get demoed at a product launch — but it’s where the real moat is built.

There’s something refreshingly honest in Ben’s framing of the tradeoff: trading margin for moat. Building AI tools for the enterprise isn’t immediately lucrative. It’s not a clean SaaS business with high gross margins and self-serve adoption. It’s manual and layered — it involves change management, legacy systems, security reviews, and the politics of enterprise software buying. But done right, it can make a company indispensable. And that’s something you don’t lose with the next contract negotiation.

We’ve seen this ourselves. Some of our most productive partnerships have started with messy, high-touch engagements. Not everything scales — and that’s okay. The understanding we build during those early phases often unlocks reusable product and higher-leverage automation down the line. It also gives us a better filter for which problems are worth solving: the ones with depth and variability, not just surface appeal.

One thing that really stood out in the Scale approach is their talent model. They’re not chasing Ivy League pedigrees or ten-year resumes. They want folks who are curious and hands-on, willing to build integrations, sit in meetings, and glean insights from the way users actually work. That sounds basic, but in today’s product world — especially in AI — it’s surprisingly rare. Too many teams are trying to solve enterprise problems from a whiteboard or a lab rather than getting into the weeds of actual business logic.

Another nuanced takeaway: success in this space isn’t just about having the best models or data pipelines. It’s about becoming the system of record — or better yet, the system of intelligence. That requires building not just software, but trust and relevance. If you build an agent that makes a CFO’s forecast more accurate, you’re not just another AI tool — you’re part of the financial nervous system. That’s hard to displace.

At Dellecod, we often ask ourselves: are we building something users would feel pain losing? It’s a humbling question, but it keeps us grounded. And it aligns with one of Ben’s quieter assertions — that vanity metrics and shiny demos are distractions from enduring value. What makes value enduring is that it’s rooted in the daily work of a user, the decision they have to make, the data they don’t have, the latency in their process.

This also changes what success looks like for go-to-market teams. Instead of racing to hit a revenue number, the best teams are now aligning their commissions and incentives around strategic fit — which contracts help us learn? Which use cases repeat? Which customers want to co-evolve the product with us? This may mean saying no more often. But it also means the yeses are more meaningful.

Looking ahead, the forward-deployed model might be with us for a while — five to ten years, in Ben’s estimate. That doesn’t mean everything stays high-touch forever. The real hope is that as workflows solidify and patterns emerge, much of the custom effort can be absorbed by agents themselves. Some of what used to require a consulting firm or an SI partner may eventually be handled by well-trained, domain-specific AI agents. But getting there requires being close to the problem today.

What this all adds up to is a reshaping of enterprise AI — not as a product you sell, but as a capability you co-develop with customers. That’s slower. More cumbersome. But it’s also more meaningful. When you build with this mindset, you’re not chasing market share — you’re earning trust, capturing workflows, and turning proprietary data into enduring advantage.

Enterprises don’t want abstract AI tools. They want intelligence that fits how they work — with all the edge cases, the legacy systems, the audit requirements. If you embed with them to understand that reality, and you’re humble enough to build around it, you can emerge not just with a better product — but with a foundational understanding of where AI really adds value.

That’s our takeaway. Not a magic formula, not a shortcut. Just a deeper belief that the real leverage in enterprise AI starts at the edge — where engineers, designers, and PMs meet real problems — and moves inward from there.