

Designing and scaling responsible AI

Insight Partners | January 17, 2024 | 2 min. read

What does it mean to build responsible AI, and what are the societal risks if it’s not done correctly? For an AI system to be successful, it should be trustworthy, unbiased, and transparent while protecting privacy. But building an ethical system can be as difficult as it is crucial. Dataiku Responsible AI Lead Triveni Gandhi, Diligent Institute Executive Director Dottie Schindlinger, and Weights & Biases Co-Founder Chris Van Pelt, in a conversation moderated by Insight Partners Managing Director George Mathew, share their tips for designing, training, building, and scaling ethical AI systems, from data governance to understanding output and everything in between.

These insights came from our ScaleUp:AI event in October 2023, an industry-leading global conference featuring topics across technologies and industries. Watch the full session below:

Responsible AI is a collaborative effort

In a discussion about the challenges and opportunities of responsible AI, Gandhi, Schindlinger, and Van Pelt all emphasized the importance of collaboration among different roles within an organization. They agreed that creating responsible AI systems requires not only technical expertise but also an understanding of ethical implications and social impacts.

As Gandhi explained, the organizations most successful at scaling responsible AI are those willing to bring all the different parts of the organization into the room, including business users, governance experts, and data scientists. She emphasized, “We all have our specific roles, and I think to Chris’s point of, you know, the software developer knows that, ‘Oh, I can go into ChatGPT, but when it changes an answer, what does it mean for the answer to be different? What is good or bad in this case? What is good or bad to the goals of my organization and the ethos of my organization?’”


Schindlinger also stressed the importance of leaders within the organization understanding AI and its implications. She noted, “We need leadership that understands this. One of the things we’ve been doing is helping boards of directors and senior leaders of organizations really ratchet up their level of understanding around trustworthy AI and just AI in general.”

Rapid evolution brings both excitement and concerns

The panelists expressed excitement about the possibilities of AI systems, particularly in how they can assist in programming and development. However, they also voiced concerns about the speed at which these systems are evolving and the potential for misuse.

Van Pelt cited AI coding assistants like GitHub Copilot as an exciting development. He said, “The thing that excites me is like Copilot…I use it every day as a developer.” However, he also expressed concern about the rapid pace of progress in AI, explaining, “The thing that scares me is, I saw how much improvement we had from GPT-2 to [GPT-3] to [GPT-4]. There’s no indication that’s going to stop.”


Similarly, Gandhi highlighted the need for a critical approach to AI systems. She said, “Data scientists are very wary of anything that looks too good to be true. When you run a model, and you get 99% accuracy, it’s an immediate red flag…And I think that end users — the average person in the world — should probably take that same level of skepticism with AI.”

While AI has the potential to greatly benefit our lives, it is not infallible. Like any other technology, AI can make mistakes, and it is important for users to understand its limitations. We must also be aware of potential biases in AI systems, since they are developed and trained by humans who may carry biases of their own. Data scientists and AI developers should continually test and evaluate their models to verify that they are producing accurate and fair results.
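Gandhi’s point about 99% accuracy is easy to demonstrate. Below is a minimal sketch (Python with scikit-learn, on a synthetic, hypothetical dataset) showing why a suspiciously high accuracy figure deserves skepticism: on heavily imbalanced data, a model that always predicts the majority class scores roughly 99% accuracy while learning nothing useful.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# Synthetic, hypothetical dataset: roughly 1% positive labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
y = (rng.random(10_000) < 0.01).astype(int)

# A "model" that always predicts the majority (negative) class.
baseline = DummyClassifier(strategy="most_frequent").fit(X, y)
preds = baseline.predict(X)

# Plain accuracy looks excellent; balanced accuracy exposes the failure.
print(f"accuracy:          {accuracy_score(y, preds):.3f}")           # ~0.99
print(f"balanced accuracy: {balanced_accuracy_score(y, preds):.3f}")  # 0.50
```

This is one reason practitioners reach for metrics beyond raw accuracy, such as balanced accuracy, precision and recall, or per-subgroup error rates, when evaluating models for fairness and reliability.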

Governance systems are mission-critical

The panelists emphasized the importance of AI governance systems in ensuring responsible AI development. They noted that such systems can track the entire lineage of AI models, providing more visibility and control over their development and application.

As Van Pelt explained, “You need a central system that can really keep track of that entire lineage, which is exactly what Weights & Biases does.” He also highlighted the importance of monitoring AI systems to provide higher levels of governance. He said, “A set of functionality we just released is around large language model monitoring, where within an organization you can tell your engineers, ‘Hey, go see if you can create something really interesting here with this stuff,’ and by configuring the Weights & Biases LLM monitoring proxy, all of that information can be stored in a central dashboard so you can see what people are doing and what responses are coming out.”
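For readers curious what lineage tracking looks like in practice, here is a minimal sketch using the Weights & Biases Python client. The project, artifact, and file names are hypothetical; the pattern is simply to record the dataset a run consumes and the model it produces as versioned artifacts, so each model can be traced back to its inputs.

```python
import wandb

# Hypothetical project and artifact names; the lineage pattern is the point.
run = wandb.init(project="responsible-ai-demo", job_type="train")

# Declare the exact dataset version this run consumes (input lineage).
dataset = run.use_artifact("loan-applications:latest")
data_dir = dataset.download()

# ... train a model on data_dir and save it to model.pkl ...

# Log evaluation metrics for the run.
run.log({"accuracy": 0.87, "false_positive_rate": 0.04})

# Version the trained model as an output artifact (output lineage).
model_artifact = wandb.Artifact("credit-model", type="model")
model_artifact.add_file("model.pkl")
run.log_artifact(model_artifact)
run.finish()
```

Because each run records its input and output artifacts, the lineage from raw data to trained model can be reconstructed end to end, which is the kind of central tracking Van Pelt describes. (The LLM monitoring proxy he mentions is a separate feature; its configuration is outside the scope of this sketch.)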

Gandhi also noted, “The ones that are most successful right now at both scaling and scaling responsibly are the ones who are willing to bring all of the different parts of the organization into the room and level set.” This highlights the importance of a holistic, organization-wide approach to AI governance.


Note: Insight Partners has invested in Diligent, Dataiku, and Weights & Biases.