Written by Ben Wodecki and republished with permission from AI Business.
Board-level discussions needed to address bias, says senior executive of Alphabet’s X.
Financial services firms that train their machine learning credit-scoring models on diverse datasets produce more inclusive results and manage bias risk more holistically, according to speakers at the ScaleUp: AI conference.
Historically, banks have discriminated against customers based on race and ethnicity. Modern credit-scoring systems are built on that historical data, which is infused with past discriminatory practices, said Jay Budzik, CTO of Zest AI, which built an alternative credit-scoring model.
Budzik said the increased profits banks generate with ML can be partially diverted to create more equitable outcomes for those who would have been discriminated against in the past. His team at Zest AI sought to create ML-based credit scoring models that were “fairer and less racist.”
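Zest AI has not published the specific fairness criteria behind the models Budzik describes, but a minimal sketch of one standard bias check, demographic parity, illustrates how "fairer" can be quantified: compare a model's approval rates across demographic groups. All names and numbers below are made up for illustration.

```python
# Illustrative sketch (not Zest AI's actual method): measure the gap in
# loan-approval rates between demographic groups. A gap near zero
# suggests the model treats groups similarly on this one metric.

def approval_rate(decisions):
    """Fraction of applicants approved; decisions are 0 (deny) / 1 (approve)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rates between any two groups."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model decisions for two applicant groups.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 = 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 37.5% approved
}

gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.3f}")  # 0.750 - 0.375 = 0.375
```

Demographic parity is only one of several competing fairness definitions (equalized odds and calibration are others), and which one a lender optimizes for is exactly the kind of board-level decision the panelists call for below.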
Asmau Ahmed, a senior executive at X, Alphabet’s moonshot company, thanked Budzik for his honesty, since he does not come from a disadvantaged group. She also encouraged board-level conversations about AI’s impact on inequality.
“People are talking about and are aware of AI, but I don’t think people get how to have that conversation” about its impact, Ahmed said.
Ahmed stressed the need for wider discussions around potentially damaging deployments, saying that AI and ML present a “huge risk” to “significantly widening or closing the equity gap.”
“We’re at a critical junction in history,” she added.
Ahmed called for data used in ML models to be representative and free of bias, potentially shifting away from historically discriminatory data. Such issues aren’t limited to finance, she said, but span all industries.