Why the next enterprise AI advantage will be systemic, not algorithmic

Insight Partners | August 04, 2025 | 1 min. read

Despite rapid progress in generative AI, most enterprises are still grappling with the same question: How do these models actually fit into operational systems? Managing Director George Mathew addressed this challenge directly in a recent investor panel hosted by Acceldata. He made the case that real enterprise value will come not from raw model capabilities but from how those models are embedded into layered, explainable, and resilient software environments.

“The foundation models are best suited for probabilistic reasoning that will look like what a human can do and potentially what a human can never do,” he said. But on their own, they are insufficient. Mathew emphasized the importance of “compound AI systems” — enterprise architectures that combine foundation models with existing calculation engines, aggregation layers, and data pipelines.

Startups need more than a feature

Mathew challenged a recurring tendency in early-stage AI software: building entire companies around narrow functionality. “There’s like 30 folks doing text-to-SQL right now,” he said. “Isn’t the foundation model going to take that over? Yeah, probably. But that’s a feature.”

He continued: “If the only thing an entrepreneur is now doing is providing a text-to-SQL service, that’s going to be tough. You probably shouldn’t be building 30 companies on a feature.”

Instead of chasing novelty, Mathew encouraged a focus on infrastructure-level problems where AI can add lasting value. His current investment theses prioritize categories such as master data management, observability, metadata cataloging, and workload orchestration.

The energy constraint is coming

While many conversations around scaling AI focus on hardware supply, Mathew pointed to an underappreciated constraint: energy. “We might not have enough power, at least on a clean basis, to build the scale of systems that we’re talking about,” he said.

He described a potential shift in the economics of AI: “Eventually it’s going to go from compute arbitrage, which it is right now, to energy arbitrage.” In that environment, organizations and governments that manage energy efficiently may gain a lasting advantage.

Explainability is no longer optional

Trust remains a central challenge for enterprise AI. According to Mathew, explainability is now a practical requirement. “There’s got to be some lineage traceability,” he said. “The observability alone starts to drive a lot of the explainability.”

He referenced his investment in Fiddler, a company focused on AI monitoring and guardrails, and pointed to ongoing research. “If you look at the hardest research in the model world right now, it’s mechanistic interpretability.”

This area of work focuses not just on model performance but on whether a model can reliably reproduce its results and explain how it arrived at them. “Explainability is going to be, and is already, quite an important topic of work,” he said.

Software development is shifting

As AI continues to reshape enterprise software, Mathew sees a shift in how systems are built. “The new operating model is going to be fundamentally different,” he said. “You start top-down now. You don’t start bottom-up.”

This shift, he noted, will demand a different mindset. “Every time there’s been these big shifts, we’ve had to readapt. We will relearn. Who cares what kind of language it is — we’ll readapt to whatever suits the new operating model.”

Taken together, Mathew’s remarks describe a version of AI that is less speculative and more practical. For enterprise leaders, success will depend not on isolated model breakthroughs but on integrating AI into systems that are explainable, energy-efficient, and designed to endure.


*Note: Insight has invested in Acceldata and Fiddler AI.