The Power of Adversarial Debiasing: Upstart’s Approach to Less Discriminatory Alternatives

Less Discriminatory Alternatives (LDAs) in lending are practices that identify fair ways to evaluate borrowers, particularly those in protected classes such as age, race, and gender. In addition to helping to mitigate potential biases and expand access to credit, the search for LDAs is a key component of regulatory compliance under laws such as the Equal Credit Opportunity Act. At Upstart, we’ve developed an innovative approach to LDAs that we believe enhances fairness and upholds our commitment to accurate credit risk assessment.

Given our mission of expanding access to credit for all, the search for LDAs has been a priority for Upstart from the beginning. Early attempts included the removal of certain variables from our artificial intelligence models and the use of “hyperparameter tuning” to identify the best training settings for a model. But neither method achieved the desired result. Variable removal reduced accuracy and failed to meaningfully improve fairness, while hyperparameter tuning improved fairness marginally but was slow, costly, and impractical for frequent use.

Today, we use adversarial debiasing to achieve the right balance. Unlike traditional modeling approaches that focus solely on optimizing for accuracy, adversarial debiasing introduces a dual objective: optimizing for both accuracy and fairness during the model-training process. Among the LDA methods we have tested and implemented, we’ve found that adversarial debiasing offers significant fairness improvements with minimal accuracy loss and little added computational cost.

In adversarial debiasing, a main model is tasked with making accurate credit risk assessments, while a second adversary model tries to infer protected demographic attributes from those assessments. In an iterative process, the main model progressively removes demographic information from its predictions, with minimal impact on accuracy, until the adversary model can no longer detect a protected attribute. If the adversary model cannot recover any protected attribute from the predictions, the assumption is that the main model’s predictions are demographically unbiased.
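As a minimal sketch of that loop, the example below pits a logistic main model against a logistic adversary on synthetic data. Every model choice, variable name, and number here is an illustrative assumption, not Upstart’s implementation: the main model descends its own prediction loss while ascending the adversary’s loss, and the adversary simultaneously learns to guess the protected attribute from the main model’s score.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, a, lam=0.0, lr=0.1, steps=2000):
    """Train a logistic main model (predicts y) against a logistic
    adversary that tries to recover the protected attribute a from
    the main model's score. lam=0 disables the fairness pressure."""
    n, d = X.shape
    w = np.zeros(d)      # main model weights
    u, b = 0.0, 0.0      # adversary parameters
    for _ in range(steps):
        s = X @ w                    # main model logit
        p = sigmoid(s)               # credit risk prediction
        q = sigmoid(u * s + b)       # adversary's guess of a
        # adversary step: descend its own cross-entropy on a
        u += lr * np.mean((a - q) * s)
        b += lr * np.mean(a - q)
        # main step: descend the task loss while ascending the
        # adversary's loss (scaled by lam)
        grad_task = X.T @ (p - y) / n
        grad_adv = X.T @ ((q - a) * u) / n
        w -= lr * (grad_task - lam * grad_adv)
    return w

# Synthetic demo: feature 0 is a legitimate risk signal, feature 1 is
# a noisy proxy for the protected attribute (all values illustrative).
rng = np.random.default_rng(0)
n = 4000
a = rng.integers(0, 2, n).astype(float)
x0 = rng.normal(size=n)
x1 = a + 0.3 * rng.normal(size=n)
X = np.column_stack([x0, x1])
y = (1.5 * x0 + 0.8 * x1 + rng.normal(size=n) > 0).astype(float)

w_plain = train(X, y, a, lam=0.0)   # accuracy only
w_fair = train(X, y, a, lam=2.0)    # accuracy + fairness
```

With the fairness pressure on, the weight on the proxy feature shrinks while the weight on the legitimate signal is largely preserved, which is exactly the trade the iterative process makes.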

Upstart has innovated on this standard adversarial debiasing approach by reimagining it as an offset to our main AI model. We call it the “Universal LDA” framework. First, we train the main model solely for accuracy. Once those predictions are fixed, a separate adjuster model is trained using adversarial debiasing, balancing fidelity to the original predictions with the goal of keeping the adversary model from detecting bias.
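One way to picture the offset idea is a two-stage sketch: first fit a base model for accuracy only and freeze its scores, then learn an adversarially trained adjustment on top, trading fidelity to the frozen scores against the adversary’s ability to detect the protected attribute. This is a simplified illustration under the same kind of synthetic assumptions as above, not the production Universal LDA.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic data: feature 0 is legitimate signal, feature 1 is a
# noisy proxy for the protected attribute (all values illustrative).
rng = np.random.default_rng(2)
n = 4000
a = rng.integers(0, 2, n).astype(float)
x0 = rng.normal(size=n)
x1 = a + 0.3 * rng.normal(size=n)
X = np.column_stack([x0, x1])
y = (1.5 * x0 + 0.8 * x1 + rng.normal(size=n) > 0).astype(float)

# Stage 1: main model trained for accuracy only, then frozen.
w = np.zeros(2)
for _ in range(2000):                 # plain logistic regression
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - y) / n
s0 = X @ w                            # frozen base logits

# Stage 2: adversarially debiased offset on top of the frozen base.
v = np.zeros(2)          # adjuster weights: adjusted logit = s0 + X @ v
u, b = 0.0, 0.0          # linear adversary on the adjusted logit
lam, mu, lr = 2.0, 1.0, 0.1
for _ in range(2000):
    s = s0 + X @ v
    q = sigmoid(u * s + b)                # adversary's guess of a
    u += lr * np.mean((a - q) * s)        # adversary improves itself
    b += lr * np.mean(a - q)
    grad_fid = mu * X.T @ (s - s0) / n    # stay close to base scores
    grad_adv = X.T @ ((q - a) * u) / n    # push adversary loss up
    v -= lr * (grad_fid - lam * grad_adv)

corr_base = np.corrcoef(s0, a)[0, 1]          # bias in frozen scores
corr_adj = np.corrcoef(s0 + X @ v, a)[0, 1]   # bias after the offset
```

Because the base scores are never retrained, the entire fairness adjustment is isolated in the offset weights, which keeps the original predictions cleanly separable from the fairness correction.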

This offset approach offers several advantages: It is faster to run, easier to implement across new products, and more effective overall in balancing accuracy and fairness compared to the standard adversarial debiasing approach. Additionally, it provides greater interpretability by separating the original predictions from the fairness adjustment, which allows Upstart to better understand and control how fairness is improved within its underwriting process.

By combining practical experience with lending partners, insights from academic literature on algorithmic fairness, and our own innovative advancements, Upstart has developed an effective and efficient method for producing LDAs that meet fair lending requirements. Our ongoing research continues to refine the balance between fairness and accuracy, enabling us to expand access to credit by delivering precise predictions of borrower risk while ensuring every applicant gets a fair shake.
