Five rules for building enterprise AI that actually works — lessons from Dell Technologies and Rackspace Technology

While enterprises everywhere are accelerating their AI projects, most initiatives struggle to move beyond experimentation.
The barrier isn’t budget or a lack of enthusiasm, but how these projects are executed, according to Adam Perry, chief technologist at Dell Technologies’ private equity practice, and Edward Kerr, head of product for software and services at Rackspace Technology.
Both sit at the intersection of infrastructure, data, security, and applied AI, and they share a view of what it actually takes to make enterprise AI deliver value.
For Perry and Kerr, enterprise AI only succeeds when it creates increasing returns, not increasing complexity. Without the right foundations — data, governance, workflows, and guardrails — AI can quickly become a source of operational friction rather than a competitive advantage.
Below, they share five insights that set apart the organizations achieving transformation with AI from those just experimenting with it.
1. Don’t skip your data strategy
For a company to become AI-ready, no amount of enthusiasm, budget, or modeling expertise can make up for an unclear data strategy, argues Perry.
“Most companies, when they go to implement any sort of AI strategy, a lot of proofs of concept fail out of the gate, and that’s because of the data strategy.” Early tools are impossible to deploy when the underlying data is incomplete, fragmented, or poorly governed.
“It’s very difficult to navigate the massive pool of data that’s out there.”
Large enterprises experience this acutely. They sit on mountains of structured, semi-structured, and unstructured data scattered across cloud platforms, applications, and legacy systems. “It’s very difficult to navigate the massive pool of data that’s out there,” says Perry.
Every successful AI deployment begins by identifying the subset of data that actually matters for a given outcome, and ensuring it is accurate, accessible, and well-governed. “The way that you do that,” says Kerr, “is you architect around data gravity…as opposed to sending my data out into every which place … [it’s] in a private and secure environment.”
This principle shapes the Dell-Rackspace partnership. Many of their joint customers already host critical workloads in Rackspace data centers, reducing risk and complexity. However, location alone does not resolve fragmentation.
Perry emphasizes the need for a unifying layer that abstracts differences in systems and formats: “A data lakehouse type of architecture where we’re able to federate queries across multiple data sources…and bring that data into a consumable format for AI.”
In other words, AI readiness is ultimately data readiness.
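Perry's federated-query idea can be sketched in miniature. Everything below is invented for illustration — the sources, tables, and column names are hypothetical, and in-memory SQLite stands in for what would really be warehouses, applications, and object stores. The point is the pattern: pull the relevant subset from each source into one governed, queryable layer rather than shipping data "out into every which place."

```python
import json
import sqlite3

# Hypothetical source 1: a relational system (simulated with in-memory SQLite).
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE customers (id INTEGER, name TEXT, region TEXT)")
crm.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                [(1, "Acme", "EMEA"), (2, "Globex", "AMER")])

# Hypothetical source 2: semi-structured event logs (JSON lines).
events = [
    '{"customer_id": 1, "spend": 1200.0}',
    '{"customer_id": 2, "spend": 800.0}',
    '{"customer_id": 1, "spend": 300.0}',
]

def federated_view():
    """Load both sources into one queryable store -- a toy 'lakehouse' layer."""
    lake = sqlite3.connect(":memory:")
    lake.execute("CREATE TABLE customers (id INTEGER, name TEXT, region TEXT)")
    lake.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                     crm.execute("SELECT id, name, region FROM customers"))
    lake.execute("CREATE TABLE events (customer_id INTEGER, spend REAL)")
    lake.executemany("INSERT INTO events VALUES (?, ?)",
                     [(e["customer_id"], e["spend"]) for e in map(json.loads, events)])
    return lake

# One query now spans both sources -- data in a "consumable format for AI".
lake = federated_view()
rows = lake.execute(
    "SELECT c.name, SUM(e.spend) AS total "
    "FROM customers c JOIN events e ON e.customer_id = c.id "
    "GROUP BY c.name ORDER BY total DESC"
).fetchall()
print(rows)  # [('Acme', 1500.0), ('Globex', 800.0)]
```

Note what the sketch does not do: it never moves the whole "massive pool" of data, only the subset identified for one outcome.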
2. AI should add value, not complexity
Enterprises often ask when AI should be added to a product, but that’s the wrong way of looking at it, says Kerr. “The easy way to know when products should incorporate AI is when they’re compounding value and not compounding complexity.”
“AI is really just an extension of human intelligence.”
AI isn’t inherently valuable, he says. “AI is really just an extension of human intelligence.” He uses meeting summarization tools to make his point. “Everybody loves the fact that AI can summarize those notes,” but “the people who have to consume those notes now have an abundance of notes…this noise that’s constantly going on, it’s not really pushing the conversation forward.”
The real value would come from connecting those summaries to context — product data, previous decisions, competitor intelligence — to move work forward and drive meaningful outcomes. “If you’re not generating value [with] AI, you’re just adding complexity, and it’s a big problem, and that’s when you know it’s premature,” says Kerr.
Don’t start an enterprise AI program with a broad ambition. Start with one workflow where AI clearly elevates the outcome, then expand deliberately.
3. Don’t be generic
A common mistake enterprises make is attempting to do everything at once: distributing AI too widely or creating generic tools instead of tackling specific problems.
“What companies do is try to boil the ocean,” says Perry. “They come up with too many use cases, or they’ll implement something like a chatbot and…hope that that’s going to bring some value.”
“What companies do is try to boil the ocean.”
It rarely does, he says. Instead, "you want to focus on that one use case and then branch out from there." Rackspace takes this approach internally by designing agentic workflows, explains Kerr. It could start with something as simple as an AI agent that researches a product requirement document.
That might evolve into a chain of coordinated agents. One validates the idea against other documents, another generates a proof of concept, and another updates infrastructure using historical patterns. "We help [our customers] build those agentic journeys from 'I have this one little use case'…to 'it works'…[to] 'now, where do we go from here?'"
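The orchestration pattern Kerr describes can be reduced to a toy chain. Each "agent" below is a stub function rather than a real model call, and all step names and requirements are invented, but the shape — one small use case first, then more agents appended to the same journey — mirrors his description.

```python
# A toy "agentic journey": each agent consumes and enriches a shared context
# dict. Real agents would call an LLM or external tools; here every step is
# stubbed so the orchestration pattern stands on its own.

def research_agent(ctx):
    # Step 1: research the product requirement document (stubbed result).
    ctx["requirements"] = ["export to CSV", "role-based access"]
    return ctx

def validation_agent(ctx):
    # Step 2: validate against existing docs (stubbed as a known-features set).
    known = {"role-based access"}
    ctx["new_requirements"] = [r for r in ctx["requirements"] if r not in known]
    return ctx

def prototype_agent(ctx):
    # Step 3: generate a proof of concept for whatever is genuinely new.
    ctx["poc"] = [f"stub implementation for: {r}" for r in ctx["new_requirements"]]
    return ctx

def run_journey(doc, agents):
    ctx = {"doc": doc}
    for agent in agents:  # start with one agent; extend the list as trust grows
        ctx = agent(ctx)
    return ctx

result = run_journey("PRD: reporting module",
                     [research_agent, validation_agent, prototype_agent])
print(result["poc"])  # ['stub implementation for: export to CSV']
```

Starting the list with a single agent and appending later is exactly the "one little use case…it works…where do we go from here" progression.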
“Enough with the chatbots. Take this AI technology and integrate it into those workflows.”
His message to founders is straightforward: “Enough with the chatbots. Take this AI technology and integrate it into those workflows.”
4. Address trust issues to deliver value
Even with the right architecture and a sound data strategy, enterprises can’t unlock real value from AI until their organizations trust it. “How do we trust AI?” Perry asks. “It’s very difficult to govern the change of the data over time.”
He describes a landscape where models can drift or be “poisoned” as data evolves, forcing companies to rethink how they verify their own information. “We’re in a new paradigm where we have to figure out what’s true and what’s not, and that goes for company data as well.”
This includes determining who owns which data, how it flows, and how its accuracy is maintained. Without that, even sophisticated models lack credibility.
But Kerr argues that the trust issue is emotional. “It’s not that we don’t believe the outcome is good,” he says. “It’s actually that we don’t want to be cut out of the loop.”
“You really need to empower system users.”
His solution: Give users control over how AI behaves. “You really need to empower system users…to set data contracts, to create scaffolding between them and the AI.”
That requires visibility into how decisions are made, which is why explainability audits are becoming more popular, Kerr notes. “We want to know what the AI did to get there, so that we can audit it after the fact.”
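A minimal version of such an audit trail is easy to picture. The record fields and example decision below are hypothetical, but the principle matches Kerr's point: log the question, the evidence the AI used, and the answer, so the outcome can be reviewed after the fact.

```python
# Audit-trail sketch: record each AI decision together with its inputs and
# the evidence behind it, so "what the AI did to get there" is reviewable.

import time

AUDIT_LOG = []  # in production this would be durable, append-only storage

def audited_decision(question, evidence, answer):
    AUDIT_LOG.append({
        "ts": time.time(),
        "question": question,
        "evidence": evidence,  # the intermediate material the AI relied on
        "answer": answer,
    })
    return answer

audited_decision("renew contract?", ["usage up 30%", "no open tickets"], "yes")
print(AUDIT_LOG[0]["evidence"])  # ['usage up 30%', 'no open tickets']
```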
On the systems side, Perry stresses the role of constraints: “We can put guardrails in place…to make the outcomes more deterministic and have a specific answer that aligns with the organization.”
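One common way to implement such guardrails is output validation with a deterministic fallback. The policy, prompts, and stubbed model below are all invented for illustration; the pattern is that a free-form model answer only reaches the user if it falls within an organization-approved set.

```python
# Guardrail sketch: validate a (stubbed) model's answer against an allow-list
# before it reaches the user; anything out of policy falls back to a
# deterministic, organization-approved response.

ALLOWED_ANSWERS = {"approve", "deny", "escalate"}  # hypothetical policy

def model(prompt):
    # Stand-in for an LLM call; real output would be nondeterministic text.
    return "Approve!" if "refund under $50" in prompt else "hmm, maybe?"

def guarded(prompt, fallback="escalate"):
    raw = model(prompt).strip().lower().rstrip("!")
    return raw if raw in ALLOWED_ANSWERS else fallback

print(guarded("refund under $50 for order 991"))  # approve
print(guarded("wire $2M to new vendor"))          # escalate
```

The fallback is what makes the outcome "more deterministic": an unrecognized answer never improvises, it escalates.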
Trust is earned not through blind faith in AI models, but through clarity, accountability, and structure.
5. Treat AI as a human discipline
As sophisticated as today’s models are, Perry and Kerr argue that the most enduring advantage remains human. Effective AI requires people who can reason, communicate, interpret context, and design workflows that combine human and machine judgment.
“Having computer scientists, data scientists, and things like that is very important, but having the human element is equally as important,” Perry says. As AI increasingly handles execution, the differentiating skills shift toward judgment and creativity. “Maybe [that] will lead to a higher demand for people studying humanities in the future…It’s an important part of the equation.”
The enterprises that cultivate these capabilities, alongside engineering talent, will be better equipped to integrate AI effectively.
Building the AI flywheel
Throughout their discussion, Perry and Kerr return to a shared theme: AI should create momentum, not maintenance. They envision a system where better data leads to better AI, better AI leads to better workflows, and better workflows create better data — a cycle of improvement.
“We need to figure out how to use AI to create the kind of flywheel effect where AI is making the product better, not…increasing the complexity,” reiterates Kerr.
That flywheel comes from a clear data strategy, a focused starting point, human-centered design, robust guardrails, and infrastructure that reflects where data actually lives.
For enterprises to truly compete in an AI future, AI must first become a foundation rather than a feature — and, eventually, a competitive engine.
*Note: This article is part of our ScaleUp:AI 2025 Partner Series, highlighting insights from the companies and leaders shaping the future of AI.*
