AI can make bank lending fairer


As banks increasingly deploy artificial-intelligence tools to make lending decisions, they must confront an uncomfortable fact about the practice of lending: Historically, it has been riddled with bias against protected characteristics, such as race, gender and sexual orientation. These biases show up in institutions' choices about who gets credit and on what terms. Against that backdrop, relying on algorithms rather than human judgment to make credit decisions seems like an obvious fix. What machines lack in warmth, they surely make up for in objectivity, right?

Unfortunately, what is true in theory has not been borne out in practice. Lenders often find that AI-powered engines exhibit many of the same biases as humans, because they have been fed a diet of biased credit-decision data drawn from decades of inequities in the housing and lending markets. Left unchecked, they threaten to perpetuate prejudice in financial decisions and widen wealth gaps around the world.


The problem of bias is endemic, affecting financial-services start-ups and incumbents alike. A landmark 2018 UC Berkeley study found that even though fintech algorithms discriminate about 40% less than face-to-face lenders, they still charge higher mortgage interest to borrowers in protected classes. Recently, Singapore, the UK, and several European countries have issued guidelines requiring companies to promote fairness in their use of AI, including in lending. Many aspects of fair lending are legally regulated in the United States, but banks must still decide which fairness metrics to prioritize or de-prioritize and how to approach them.

So how can financial institutions use AI to reverse the discrimination of the past and instead foster a more inclusive economy? In our work with financial-services companies, we find that the key lies in building AI-powered systems designed to trade a little historical accuracy for greater fairness. That means training and testing them not only on the loans or mortgages that were issued in the past, but on how money should have been lent in a fairer world.

The problem is that humans often cannot detect the unfairness lurking in the enormous data sets that machine-learning systems analyze. Lenders are therefore increasingly relying on AI to identify, predict and remove the bias against protected classes that is inadvertently baked into their algorithms.

Here’s how:

Remove bias from the data before creating a model.

An intuitive way to eliminate bias from a credit decision is to remove discrimination from the data before the model is built. But this requires more than simply deleting data variables that clearly suggest gender or ethnicity, because the effects of past bias ripple through everything. For example, loan data samples for women are generally smaller because, proportionally, financial institutions have approved fewer loans to women over recent decades than to men with equivalent credit scores, income and debt. This leads to more frequent errors and false inferences for applicants who are under-represented or treated differently. Manual interventions to correct for bias in the data can also become self-fulfilling prophecies, as the errors or assumptions they introduce get repeated and amplified.

To avoid this, banks can now use AI to spot and correct historical patterns of discrimination against women in the raw data, compensating for changes over time by deliberately altering that data to produce an artificial, fairer approval probability. For example, using AI, one lender found that, historically, women would have needed to earn 30% more than men on average for loans of the same size to be approved. It used AI to retroactively rebalance the data used to develop and test its AI-based credit-decision model, shifting the distribution for women so that the proportion of loans previously granted to women matched that for men with an equivalent risk profile, while maintaining the relative ranking. Because the data now better represented how lending decisions should have been made, the resulting algorithm approved loans more in line with how the bank wanted to extend credit more fairly in the future.
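As a rough illustration of this kind of rebalancing, the sketch below reweights historical approval records so that, within each risk band, the weighted approval rate for women matches that for men. It is a minimal, hypothetical example in Python, not any lender's actual pipeline; the column names (gender, risk_band, approved), group labels and the odds-ratio reweighting rule are assumptions made for illustration.

```python
import pandas as pd

def rebalance_approvals(df: pd.DataFrame,
                        group_col: str = "gender",
                        risk_col: str = "risk_band",
                        label_col: str = "approved",
                        reference: str = "male",
                        target: str = "female") -> pd.DataFrame:
    """Reweight historical records so that, within each risk band, the
    weighted approval rate for the target group matches the reference
    group's. Column names and group labels are illustrative placeholders."""
    df = df.copy()
    df["weight"] = 1.0
    for _, band in df.groupby(risk_col):
        ref_rate = band.loc[band[group_col] == reference, label_col].mean()
        tgt = band[band[group_col] == target]
        tgt_rate = tgt[label_col].mean()
        if 0 < tgt_rate < 1 and 0 < ref_rate < 1:
            # Upweight the target group's approved records by the odds ratio,
            # which makes its weighted approval rate equal the reference rate.
            ratio = (ref_rate / (1 - ref_rate)) / (tgt_rate / (1 - tgt_rate))
            df.loc[tgt[tgt[label_col] == 1].index, "weight"] = ratio
    return df
```

The resulting weights can then be passed to any estimator that accepts sample weights (for example, `model.fit(X, y, sample_weight=df["weight"])`), shifting the approval distribution without changing the relative ranking of applicants within a group.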

Choose better objectives for models that discriminate.

Yet even after the data has been adjusted, banks often need an additional layer of defense to keep bias, or the remaining traces of its effects, from seeping back in. To achieve this, they "regularize" an algorithm so that it aims not only to fit the historical data but also to score well on some measure of fairness. They do this by including an additional parameter that penalizes the model if it treats protected classes differently.

For example, by applying AI one bank discovered that very young and very old applicants did not have equal access to credit. To encourage fairer credit decisions, the bank built a model that required its algorithm to minimize an unfairness score. The score was based on the difference in outcomes for people in different age groups with the same risk profile, including intersections of subgroups, such as older women. By taking this approach, the final AI-powered model was able to close the mathematical gap between how similar people from different groups are treated by 20%.
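A minimal sketch of what such a fairness-regularized objective might look like, assuming a PyTorch-style model that outputs approval logits: the penalty below is simply the largest gap in mean predicted approval probability between groups present in a batch, scaled by a weight `lam`. A production version would condition on risk profile and intersectional subgroups as described above; the function name, arguments and the simple gap metric are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def fairness_penalized_loss(logits: torch.Tensor,
                            labels: torch.Tensor,
                            group_ids: torch.Tensor,
                            lam: float = 1.0) -> torch.Tensor:
    """Standard credit loss plus a penalty on the gap in mean predicted
    approval probability between groups (e.g. age bands)."""
    # Accuracy term: fit the historical repayment/approval labels.
    bce = F.binary_cross_entropy_with_logits(logits, labels)

    # Unfairness score: largest gap between any two groups' mean scores.
    probs = torch.sigmoid(logits)
    group_means = torch.stack([probs[group_ids == g].mean()
                               for g in torch.unique(group_ids)])
    unfairness = group_means.max() - group_means.min()

    return bce + lam * unfairness
```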

Introduce an AI-driven adversary.

Even after correcting the data and regularizing the model, it is still possible to end up with a seemingly neutral model that continues to have a disparate impact on protected and unprotected classes. So many financial institutions go a step further and build an additional, so-called "adversarial" AI-based model to see whether it can predict protected-class bias in the decisions made by the first model. If the adversary can successfully infer a protected characteristic, such as race, ethnicity, religion, gender, sexuality, disability, marital status or age, from how the first model treats a credit applicant, the original model is corrected.
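One simple way to run such a check, sketched below under assumed names and assuming a binary protected attribute, is to train a small adversary classifier to predict that attribute from the credit model's scores on held-out data: an AUC well above 0.5 suggests the scores still leak the protected class and the original model needs correcting.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def protected_class_leakage(credit_scores: np.ndarray,
                            protected_attr: np.ndarray) -> float:
    """Train an adversary to predict a (binary) protected attribute from the
    credit model's scores alone; return its AUC (0.5 means no detectable leakage)."""
    X = credit_scores.reshape(-1, 1)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, protected_attr, test_size=0.3, random_state=0, stratify=protected_attr)
    adversary = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, adversary.predict_proba(X_te)[:, 1])
```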

For example, adversarial AI-based models can often detect ethnic-minority zip codes from the outputs of a proposed credit model. This is often due to a confounding interaction with the lower wages associated with those zip codes. Indeed, we have seen adversarial models show that an original model was likely to offer lower limits to applications from zip codes associated with an ethnic minority, even though neither the original model nor the available data included race or ethnicity as an input to check against.

In the past, these issues would have been addressed by manually changing the original model's parameters. But now AI can be used to automatically readjust the model, increasing the influence of variables that contribute to fairness and reducing those that contribute to bias, in part by aggregating segments, until the adversarial model can no longer predict ethnicity using zip codes as a proxy. In one case, this resulted in a model that still differentiated among zip codes but narrowed the gap in mortgage approval rates by as much as 70% for some ethnicities.
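A compressed sketch of how that automated readjustment can work, using alternating adversarial updates rather than any particular lender's method: the adversary learns to recover a protected attribute (here, an assumed binary ethnicity proxy) from the credit model's score, and the credit model is then penalized whenever the adversary succeeds. All architectures, variable names and the weighting `lam` are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical architectures; real credit models are far richer.
credit_model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_credit = torch.optim.Adam(credit_model.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def training_step(x, y_repaid, z_protected, lam=0.5):
    # 1) Train the adversary to recover the protected attribute from the score.
    score = credit_model(x).detach()
    adv_loss = bce(adversary(score), z_protected)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Train the credit model to predict repayment while fooling the adversary:
    #    subtracting the adversary's loss pushes the score toward carrying
    #    no information about the protected class.
    score = credit_model(x)
    loss = bce(score, y_repaid) - lam * bce(adversary(score), z_protected)
    opt_credit.zero_grad()
    loss.backward()
    opt_credit.step()
    return loss.item()
```

Training continues until the adversary's accuracy falls to chance, i.e. it can no longer infer the protected class from the credit score, at which point proxy variables such as zip code have effectively been neutralized.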

Certainly, financial institutions should lend wisely, based on whether people are willing and able to pay their debts. But lenders should not treat people with similar risk profiles differently, whether that decision is made by artificial neural networks or by human brains. Reducing bias is not only a socially responsible approach; it also makes for a more profitable business. The first firms to reduce bias with AI will gain a real competitive advantage on top of doing their moral duty.

Algorithms cannot tell us which definitions of fairness to use or which groups to protect. Left on their own, machine learning systems can cement the very biases we want them to eliminate.

But AI need not go unchecked. With a deeper awareness of the biases hidden in data, and with goals that reflect both financial and social aims, we can develop models that perform well and do good.

There is measurable evidence that lending decisions based on machine-learning systems vetted and adjusted through the steps outlined above are fairer than those previously made by people. One decision at a time, these systems are forging a more financially equitable world.
