Allie K. Miller on why enterprise AI isn’t failing — it’s right on schedule

AI adoption in the enterprise hasn’t hit a wall — it’s following the usual timeline, according to Allie K. Miller, CEO of Open Machine. Fresh off a main stage appearance at ScaleUp:AI, Miller, speaking with LinkedIn’s Tanya Dua, shared why she believes we’re right where we should be when it comes to deployment at scale.
“If you look at pretty much any tech movement… you have something come out in research, and then usually about three years later it starts popping up in startups,” says Miller. “Usually, about three years [after that], is when it hits enterprise. And so I think we’re right on track for enterprise adoption.”
These insights came from our ScaleUp:AI event in November 2024, an industry-leading global conference that features topics across technologies and industries. Watch the full session below:
Key takeaways
- “Enterprise adoption really does take three years.”
- “Behavior change takes time, and you have to bake that into your assumptions.”
- “You have people in your company, or maybe it’s you, who are doing unbelievable work… give them the floor”
- “2025 is going to be the year of AI agents.”
The biggest blockers? Behavior change and bad KPIs
Miller outlined two “prime failure modes” for enterprise AI: not accounting for behavior change and setting the wrong key performance indicators.
“The first is not realizing that behavior change is going to be such a big part of this… it might be behavior change for your executive team, it might be behavior change for your engineers or product teams, it might be behavior change for your customer support team or your end customers.”
She added, “The second is honestly setting the wrong KPIs… what they really should be doing is figuring out how those outputs are being created today and then figuring out if AI can help and then figuring out by how much it would need to help to be successful.”
Too many enterprises are handing out AI tools with no strategy
When it comes to leadership and clarity, Miller sees a trend: too much openness, not enough guidance.
“The majority of enterprises are giving too wide a space… They’re just kind of saying, ‘Hey, we bought this product, do something.’”
She recommends identifying internal champions: “You have people in your company, or maybe it’s you, who are doing unbelievable work… give them the floor and give them the opportunity to actually share with the company and the leaders how they’re using AI in their work.”
AI agents have potential — but they’re not ready yet
While AI agents are often framed as a solution to enterprise concerns around data and privacy, Miller believes there’s still work to be done, especially when it comes to agents understanding humans.
“We have to come up with systems where agents can better understand your preferences… how can we get AI agents, two things: one is better understanding our intentions and needs… and the second is access to resources.”
She added, “These systems are not yet good enough, even with those guard rails in place, to be able to be trusted to make those edits and then publish. We’re just not there yet.”
Every job will become “AI-fueled” — but experimentation is the edge
On the future of work, Miller didn’t downplay disruption: “Jobs will absolutely be impacted… we would be doing ourselves a massive disservice, especially to the younger generation, to say that it won’t.”
The key skill? Turning curiosity into action. “The people who convert the open questions into action, see how the AI is responding and then keep going — those are the people who are better off for the next several years.”
Matching investments to outcomes with the Dot-Dash-Star framework
Miller recommends categorizing AI use cases with a Dot-Dash-Star framework to help organizations manage their AI investments and set expectations around returns and risks. The framework distinguishes types of AI use cases by their characteristics, expected outcomes, and timeframes to value.
Here are the three categories in the framework:
Dot
This refers to a highly specific line-of-business use case, such as automating sales call transcripts or implementing an HR bot. These should be proven use cases.
The return timeframe for Dot use cases is typically about three months, or up to six months for a slower-moving enterprise. Miller cautions against conflating these horizons: a line-of-business use case like an HR bot should not take two years to prove value.
Dash
This category is about empowering your whole workforce by providing broad productivity tools like ChatGPT Enterprise or Microsoft Copilot. Measuring the return for Dash use cases can be tricky, and the KPIs are often harder to define.
The expected return is sometimes fuzzier, potentially focusing on metrics like employee happiness or productivity, though these can be difficult to measure. For this category, organizations may simply need to have faith that it’s the right decision.
Star
This represents a major revenue-generating AI investment. The KPIs for Star investments are largely focused on revenue, or on efficiency gains that contribute to a similar revenue goal. The time to value for Star investments tends to be longer, potentially one to two years.
“Each of those has different KPIs and different return time frames,” Miller explained.
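For readers who want to apply the framework, the three categories and their rough return horizons could be sketched as a simple lookup table. The category names, example use cases, and timeframes paraphrase Miller's talk; the field names and the structure itself are illustrative assumptions, not anything she prescribes:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AICategory:
    name: str
    scope: str            # what kind of AI investment this covers
    example_kpis: str     # how return is typically measured
    time_to_value: str    # rough return horizon from the talk

# Dot-Dash-Star as data; values paraphrase the talk, structure is illustrative.
FRAMEWORK = {
    "dot": AICategory(
        name="Dot",
        scope="specific line-of-business use case (e.g. sales call transcripts, HR bot)",
        example_kpis="proven, use-case-specific metrics",
        time_to_value="~3 months (up to ~6 for a slower-moving enterprise)",
    ),
    "dash": AICategory(
        name="Dash",
        scope="broad workforce productivity tools (e.g. ChatGPT Enterprise, Copilot)",
        example_kpis="fuzzier: employee happiness, productivity",
        time_to_value="hard to pin down; partly a matter of faith",
    ),
    "star": AICategory(
        name="Star",
        scope="big revenue-generating AI investment",
        example_kpis="revenue, or efficiency toward a similar revenue goal",
        time_to_value="~1-2 years",
    ),
}

# Usage: check the expected horizon before setting KPIs for a new initiative.
print(FRAMEWORK["dot"].time_to_value)
```

Encoding the categories this way makes the point of the quote concrete: each bucket carries its own KPIs and return timeframe, so an initiative should be classified before its success metrics are chosen.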