ScaleUp:AI

The jagged frontier of generative AI: A conversation with Ethan Mollick

Insight Partners | March 04, 2026 | 5 min. read

Generative AI is advancing fast, dazzling us with feats of reasoning or creative generation one moment and stumbling over simple tasks or logic the next. Wharton professor and AI researcher Ethan Mollick joins Lonne Jaffe to explore the “jagged frontier”: where threshold effects in the improvement of genAI capabilities are reshaping product strategy, talent composition, and organizational design.

Key speakers

  • Lonne Jaffe: Managing Director, Insight Partners
  • Ethan Mollick: Professor, The Wharton School; Author, Co-Intelligence

Key takeaways

  • The “bitter lesson” of AI research is now arriving in business: general-purpose models trained on outputs are beginning to displace handcrafted, process-specific solutions.
  • Moats still exist — but only where process, not output, is what creates value.
  • Agents are no longer theoretical. A 1% improvement in model accuracy can multiply agentic task capacity by 2–3x, and that threshold has quietly been crossed.
  • Switching costs, one of software’s most durable moats, are being targeted at the Fortune 50 level, with large-scale legacy migration projects already underway.
  • The biggest constraint on AI adoption is not technology. It is leadership, organizational design, and the incentive structures that surround both.

These insights came from our ScaleUp:AI event in October 2025, an industry-leading global conference that features topics across technologies and industries. Watch the full session below:

The bitter lesson comes for business

The bitter lesson is a concept from machine learning research with a simple, uncomfortable premise: Your beautiful, handcrafted solution to a specific problem will eventually be destroyed by a general-purpose model trained at scale.

Mollick illustrates it with chess. For decades, building a competitive chess engine meant hiring grandmasters and encoding every gambit and position. That approach powered Deep Blue. Then, in 2018, Google’s AlphaZero learned chess from scratch by playing itself — no human knowledge, just computation and reinforcement learning.

The business analogy is arriving in real time. When Mollick and his colleagues pushed back on an early OpenAI Agent for lacking a visible to-do list, the response was clarifying: “The model is directly reinforcement learning trained on what good PowerPoint and Excel looks like. It just does it. We have no idea how or why.”

That’s the moment the bitter lesson enters the enterprise. If your product’s value is in the output — a report, a customer interaction, a generated document — a general-purpose model can be trained to match or exceed it. The process disappears. The moat evaporates.

Where moats survive, Mollick argues, is where process is the point. Where the back-and-forth matters. Where human interaction, organizational judgment, and multi-stakeholder workflows are inseparable from the value delivered.

“If the output is what matters in your business, you’re in trouble of being bitter lessoned. If the process matters — the conversations, the writing of the report more than the report itself — then there’s hope.”
— Ethan Mollick

The incumbent paradox

When ChatGPT first emerged, conventional wisdom favored incumbents. They had distribution, customers, and existing data relationships. AI would arrive like electricity, a utility to be wired into existing products. Startups, the logic went, would struggle to compete.

That narrative has fractured.

The frontier labs didn’t just provide a commodity input. They kept shipping. Agents, multimodal reasoning, deep research — capabilities that would have defined standalone products just 18 months ago are now baseline features. Meanwhile, many early AI-native products were simply not good enough, and the organizational adoption they relied on was slower than expected.

That leaves incumbents in a paradox: They have the customers and the data, but also the inertia. In Mollick’s telling, many large organizations are still governed by AI ethics committees assembled in early 2023 — bodies that don’t fully understand the current technology and are working through backlogs of hundreds of use cases. Meanwhile, the models have moved on by several generations.

The phase progression most enterprises are experiencing:

  1. Phase Zero — AI ethics committee formed; rules established; real adoption stalls.
  2. Phase One — Co-pilot deployed; “talk to your documents” solution built; works moderately well; not transformative.
  3. Phase Two — Direct chatbot and tool access; build vs. buy decisions; vendor fatigue sets in.
  4. Phase Three — The locus shifts from IT to organizational design. CEOs start personally engaging with AI strategy.

The savviest organizations are already in phase three, and they’re rethinking vendor relationships accordingly. As Mollick describes it, the framing has evolved from “we need a vendor to help us” to “we are renting capability while we learn.” The difference is agency.

The switching cost reckoning

One of the most durable moats in enterprise software has always been switching costs. Products that customers don’t love but can’t leave. Multi-year contracts. Data locked in proprietary formats.

That era is facing its first serious structural challenge.

At the Fortune 50 level, Mollick reports a consistent message from CTOs: “We’re not signing another five-year contract.” Some organizations are already running experiments where AI Agents interact with legacy systems through their human-facing interfaces — using the UI as a bridge — to extract and recombine data without a formal migration. It’s slow, imperfect, and happening right now.

The incentive is clear: hundreds of millions of dollars in potential savings, and accumulating frustration with lock-in that no longer delivers proportional value.

“Organizations that are transparent about how they’re helping solve problems will survive. AI magic is going to wear thin very, very quickly.”
— Ethan Mollick

Agents are real. Most people haven’t noticed.

Perhaps the most significant signal from the conversation: Agents have crossed a threshold, and the market hasn’t priced it in.

For years, the assumption was that agentic AI was limited by error accumulation; hallucination rates compound over long task chains, making sustained autonomous work unreliable. That assumption is now outdated. Larger models are self-correcting at rates that make extended agentic work viable.
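The arithmetic behind this threshold effect is worth making explicit. A back-of-envelope sketch, assuming errors are independent across steps (a simplification; the function name, the 50% success floor, and the accuracy values below are illustrative, not from the conversation): if each step succeeds with probability p, a chain of n steps succeeds with probability p^n, so the longest viable chain grows non-linearly as p approaches 1.

```python
import math

def viable_chain_length(step_accuracy: float, success_floor: float = 0.5) -> int:
    """Longest chain of independent steps whose overall success
    probability (step_accuracy ** n) stays at or above success_floor."""
    return math.floor(math.log(success_floor) / math.log(step_accuracy))

# A one-percentage-point accuracy gain roughly doubles viable task length.
for p in (0.98, 0.99, 0.999):
    print(f"per-step accuracy {p:.3f} -> ~{viable_chain_length(p)} steps")
```

Under these assumptions, moving per-step accuracy from 98% to 99% roughly doubles the number of steps an agent can chain before the odds of an error-free run drop below half — which is why a seemingly marginal model improvement can unlock qualitatively longer autonomous work.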

OpenAI’s GDPval evaluation paper is instructive here. Experts with an average of 14 years of experience were given tasks requiring four to eight hours to complete. AI systems produced comparable outputs — rated nearly 50/50 in blind expert evaluation — in five to ten minutes. In software specifically, the AI outperformed humans most of the time.

The agentic layer is no longer something to plan for. It’s already embedded in the interfaces most knowledge workers use daily. The opportunity cost of waiting is no longer theoretical.

The jagged frontier is shrinking

The “jagged frontier” concept — the idea that AI has surprising areas of strength alongside surprising blind spots — has been a useful frame for explaining AI limitations to skeptical stakeholders. It is becoming less useful because the frontier is narrowing.

Tasks that reliably illustrated AI’s gaps 12 months ago have been resolved. The remaining edge cases are increasingly technical, domain-specific, and debatable — not the clear demonstrations of incompetence that once made for compelling cautionary tales.

Mollick is direct about what this means for software vendors in particular: the reassurance that comes from watching competitors fail to adopt AI is borrowed time. Organizational inertia is a delay, not a moat. The 95%-of-projects-fail statistic, widely repeated across LinkedIn and boardrooms, originated from roughly 50 interviews conducted at a single conference. It is not a study.

Clear-eyed assessment of the actual capability curve, not the six-week-old capability curve, is what separates organizations with a chance from those in comfortable decline.

Leadership is the actual variable

The technical questions are largely resolved. The model capabilities required to transform most industries are already deployed. What remains is the harder problem: getting organizations to act.

Mollick’s prescription is pointed. Vague mandates to “be 10% more productive” don’t move people — and they shouldn’t. What moves organizations is a concrete, compelling vision of what the company looks like on the other side of this transition. That requires senior leaders to do real thinking, not just issue directives.

The most effective organizational patterns Mollick has observed:

  • The lab model: Identify the 2% of employees already pushing boundaries with AI — they’re usually easy to find because they’ve been trying to get access for months — and formalize them as an internal capability center. Quick wins follow quickly.
  • Permission from the top: At organizations where the C-suite has personally bought in — naming JPMorgan’s Jamie Dimon and Mary Erdoes as examples — the cultural permission structure changes, and experimentation accelerates.
  • Incentive redesign: Some organizations are running weekly prizes for the most impactful automation. Others require teams to spend two hours attempting a task with AI before posting a job requisition for that role. The specific mechanism matters less than the signal it sends.

The organizations that will win, Mollick argues, share one trait: Senior leadership willing to absorb risk. That has historically been the province of founders. It is increasingly the defining variable for established companies, too.

Shaping what comes next

The frame that Jaffe and Mollick keep returning to is a familiar one in disruption history: Tower Records, Blockbuster, London car services when Uber arrived. The question is never whether the shift is happening. It is whether your organization will be among those that bet on the curve early enough to shape what comes next.

For founders, the message is to be more ambitious, not less. The capability gains that make agentic products economically viable are recent — and largely unpriced in market expectations. There is more room to build, and more risk in building too cautiously.

For enterprises, the message is simpler and harder: the window for comfortable observation has closed. What remains is a choice between leading the transition or being led through it.

Watch more sessions from ScaleUp:AI, and read more recaps on the blog.