ScaleUp:AI

How Promptfoo is restoring trust and security in generative AI

Insight Partners | December 03, 2025| 4 min. read
Ian Webster Promptfoo

As generative AI has advanced, it has exposed gaping holes in cybersecurity defenses.

Traditional cybersecurity tools were built for predictable software systems, where vulnerabilities lie in the logic or structure of code.

Large language models, by contrast, operate in natural language and context. An attacker can manipulate a model simply by talking to it, using prompts or jailbreaks to trick it into revealing data, executing unintended instructions, or bypassing safeguards.

Put simply, old-school defenses aren’t reliable because they’re designed to read code, not conversation.

AI vulnerability testing platform Promptfoo was founded to close this gap. “When you deploy AI, you’re taking on certain risks, mainly around security, safety, legal, compliance, and policy,” says CEO and cofounder Ian Webster. “What we do at Promptfoo is find and fix vulnerabilities in generative AI.”

Cracking the code

The idea for Promptfoo came to Webster while he was working as a senior software engineer at Discord in 2022. When he began leading a team that built LLM-based products, he saw firsthand that traditional testing and security methods could not secure genAI systems as they moved from prototypes to production.

“I was responsible for shipping AI to 200 million users,” he says. “And in doing that, I learned all of the wonderful things, and also a lot of the terrible things, that follow when you put AI in front of so many people.”

He designed the first version of Promptfoo as an open-source tool — “just something that I was doing for fun on the side” — to help application developers like him test, discover, and fix LLM failures.

Webster worked on the project solo for two years before his longtime friend Michael D’Angelo, a former VP of engineering and head of machine learning at identity verification company Smile ID, joined as cofounder and CTO.

The platform struck a chord with developers from the start. When it officially launched in July 2024, with $5M in seed funding, it was already serving 25,000 software engineers at companies like Shopify, Amazon, and Anthropic.

Machines testing machines

Promptfoo remains open source, but the company now offers a managed cloud or on-premise option for organizations needing advanced features, support, scale, and compliance.

“What’s happening today across many enterprises is that there is great appetite for … building new and exciting things with AI, but in order to productionize them and ensure their safety, security, and reliability, there’s a lot of work that needs to be done,” Webster says.

With Promptfoo, developers and security teams can rigorously test and fortify their AI applications against malicious behavior and vulnerabilities before they are deployed to users. “You can basically think of it as a vulnerability scanner,” says Webster.

“We help companies bridge the gap from rough prototype to something … hardened and battle-ready for the real world.”

Instead of manual testing, Promptfoo’s software red teams a customer’s AI application by talking directly to it through the chat interface or APIs, using specialized AI models and Agents that behave exactly like users — or specifically, like attackers trying to deceive the system.

When an attack succeeds, Promptfoo records it, analyzes why it worked, and automatically iterates through an agentic reasoning loop to refine the test and uncover deeper flaws, turning exploits into a blueprint for stronger defenses.

“There’s no time to lose”

Adoption has been rapid: Today, Promptfoo serves more than 200,000 developers and over 80 Fortune 500 companies. In July 2025, the company raised an $18.4M Series A, led by Insight Partners.

The funding will drive Promptfoo’s next phase of growth as it continues to define the standard for enterprise AI security, expand its team, and speed up platform development to meet surging demand from Global 2000 organizations.

“We’re in a space that is pretty hot [and] crowded,” notes Webster. “Global 2000 companies are going to be buying or making decisions in the AI security space in the next 12 months or so. So there’s no time to lose.”

Promptfoo is hiring aggressively across research, engineering, and go-to-market, but Webster says he isn’t just looking to get “the smartest people in a room together”.

“I don’t believe that that’s a recipe for success, at least not alone.” He’s more interested in what drives prospective employees. “Do you wake up in the morning and think, what can you ship today? What are your goals for the week?”

Autonomous and multi-agent systems of the future

Promptfoo is scaling into a rapidly changing security landscape. As enterprise AI systems evolve beyond simple chatbots, it won’t be as easy to diagnose system issues, says Webster. “As applications become agents with sub-agents or more complicated workflows, it’s going to be hard to take that black-box approach.”

Instead, Promptfoo focuses on observability — giving teams deep visibility into what’s happening inside production-scale agentic systems. “It’s going to become more important that people instrument their applications,” Webster predicts, “and there’s a feedback loop from that instrumentation into the software that is doing the red teaming or security testing.”

In preparation for the next wave of autonomous and multi-agent systems, Promptfoo’s future releases will introduce chain-level tracing, Model Context Protocol (MCP) integration, and a test-driven development framework for prompt evaluation. Together, these make security testing continuous and automated for any AI deployment.

Beyond the agentic hype

Despite industry buzz, Webster believes we are yet to discover AI’s full potential. “I think that we are only scratching the surface,” he says. “At Promptfoo, we work with some of the largest companies in the world — Fortune 500-type companies that have big plans around agents, but are still trying to figure out some of the basics in terms of infrastructure security.”

Mainstream adoption is still to come. “While there’s a lot of talk and hype around agents … they have not reached the vast majority of enterprise processes, internal workflows, and certainly not end-user-facing software.”

Still, Webster thinks that the real transformation is coming. “I think AI is underhyped in terms of the change that it will create in business and large enterprises and consumer habits. … I also think that we’ll see AI pop up in places that we really don’t expect. It’ll just become easier to add things with AI, rather than to use existing solutions.”

“The whole software industry looks so different five years, 10 years from now, and that’s going to ripple out in so many different ways.”

As an engineer, he’s already felt that change firsthand. “Just seeing how [AI] has completely altered my personal engineering workflows over the past couple of years has been very substantial.”

Factoring in “real-world considerations”

From Webster’s perspective, developers are years ahead of the rest of the world when it comes to AI. He would like to close the gap between “real-world considerations, like security and policy,” and the people actually building AI.

“If we want to see a world where AI really benefits people and companies, we need to put those tools in the hands of developers.”

As AI systems become more autonomous, he argues, the responsibility to keep them safe must move closer to the people building them. That means building AI that’s as intelligent as it is accountable — systems that can be trusted because they were designed to be tested.


*Note: Insight has invested in Promptfoo. This article is part of our ScaleUp:AI 2025 Partner Series, highlighting insights from the companies and leaders shaping the future of AI.