Blended learning: AI

Case study 2: AI in Credit Scoring
Consider a financial institution, MoneyWise Bank, that has recently started using an Artificial Intelligence (AI) system to evaluate the creditworthiness of loan applicants. This system is designed to analyze large amounts of data quickly to decide who is eligible for a loan.
The AI system at MoneyWise Bank reviews applicants' financial histories, including their past loans, repayments, income levels, and spending habits. It's programmed to learn from historical data to identify patterns and predict who is likely to repay a loan.
After some time, a troubling pattern emerges. The AI system is consistently denying loans to people from certain neighborhoods. On closer examination, it becomes clear that these neighborhoods are historically less affluent, and their residents have been underserved by financial institutions in the past. The AI, learning from historical data, is unintentionally continuing this trend of inequality.
MoneyWise Bank now faces a significant ethical challenge. It needs to ensure that its AI system is fair and does not reinforce past socio-economic biases. At the same time, the bank must assess credit risk accurately to remain financially responsible. It must strike a balance between ethical lending practices and financial prudence.
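One way a bank in this position might begin to detect the problem is with a simple group-level audit of its decisions. The sketch below, using purely hypothetical data and a commonly cited 80% ("four-fifths") threshold as an illustrative red-flag rule, compares approval rates across neighborhoods; it is a minimal example of a disparate-impact check, not MoneyWise Bank's actual method or figures.

```python
# Hypothetical sketch: a simple disparate-impact check on loan decisions.
# The decisions list and the 0.8 threshold are illustrative assumptions.

from collections import defaultdict

def approval_rates(decisions):
    """Compute the loan-approval rate for each neighborhood group."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for neighborhood, was_approved in decisions:
        total[neighborhood] += 1
        if was_approved:
            approved[neighborhood] += 1
    return {n: approved[n] / total[n] for n in total}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group approval rate.
    Values below 0.8 are often treated as a warning sign
    (the so-called 'four-fifths rule')."""
    return min(rates.values()) / max(rates.values())

# Illustrative decisions: (neighborhood, approved?)
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
rates = approval_rates(decisions)      # A: 0.75, B: 0.25
ratio = disparate_impact_ratio(rates)  # 0.25 / 0.75 ≈ 0.33, below 0.8
```

A check like this only surfaces a disparity; deciding why it exists and how to correct it, without abandoning sound credit-risk assessment, is exactly the ethical challenge the case describes.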
Questions for Reflection:
- What steps can MoneyWise Bank take to address the biases in its AI credit scoring system?
- How can the bank ensure a fair and unbiased assessment of loan applicants from all neighborhoods?
- What role should financial institutions play in preventing AI systems from perpetuating historical inequalities?
- If you were applying for a loan, how would you feel knowing that an AI system with these biases was evaluating your application?