- beat the world champion Go player 4:1 (AlphaGo)
- beat AlphaGo 100:0 (AlphaGo Zero)
- predict the structure of proteins (AlphaFold)
- perform at roughly the level of the median competitor in coding competitions (AlphaCode)
DeepMind's accomplishments provide a great backdrop for understanding both the state and pace of the art in AI in 2022.
…AI is coming down the road, and I feel like if we build it in the right way, and deploy it in the right way, for the benefit of everyone, I think it’s [going to] be the most amazing transformative technology that humanity has ever invented.
- Demis Hassabis, DeepMind co-founder and CEO
Inseparable from the rise of AI and its subset disciplines is something known as the alignment problem. A book of the same name by Brian Christian introduced me to the term, and it is a compelling, highly informative read. The alignment problem is the potential gap between what we want and intend AI to do and what it actually does. As Christian puts it, machine learning is "gradually putting the world on autopilot"; he asks what could go wrong (and what has already gone wrong).
There are two vectors of concern: ethics (along with law and human and civil rights), and safety - safety from the robots that are already among us. Asimov's three laws of robotics seem completely prescient here.
But turning to the mortgage industry, it is the realm of ethics that is of more immediate concern. Avoiding bias is an overriding concern in mortgage lending - this is the reason we have the Home Mortgage Disclosure Act (HMDA) data. We intend to explore the theme of bias throughout 2022 on this blog, along with other AI themes. To round out this introductory post, I'll make three claims about data and bias:
- Bias can be introduced by how you collect data
- Bias can be introduced by the data you don't collect
- Bias can be introduced by what you do with data
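To make the first two claims concrete, here is a minimal Python sketch. Everything in it is invented for illustration: two hypothetical borrower groups with slightly different score distributions, and an intake process that reaches one group far more often than the other. The skewed sample shifts the measured average and leaves one group nearly invisible to anything trained on the data.

```python
import random

random.seed(0)

# Hypothetical population: two groups whose true score
# distributions differ slightly (numbers are made up).
population = (
    [("group_a", random.gauss(700, 50)) for _ in range(5000)]
    + [("group_b", random.gauss(690, 50)) for _ in range(5000)]
)

true_mean = sum(score for _, score in population) / len(population)

# Claim 1: bias from HOW you collect data. Suppose our intake
# channel reaches group_a ten times as often, so the sample
# over-represents it.
biased_sample = (
    [s for g, s in population if g == "group_a"][:4000]
    + [s for g, s in population if g == "group_b"][:400]
)
sample_mean = sum(biased_sample) / len(biased_sample)

# Claim 2: bias from the data you DON'T collect. group_b is
# nearly absent, so a model fit to this sample barely sees it.
share_b = 400 / len(biased_sample)

print(f"population mean score:   {true_mean:.1f}")
print(f"biased-sample mean:      {sample_mean:.1f}")
print(f"group_b share of sample: {share_b:.1%}")
```

The sample statistic drifts toward the over-represented group, and the under-collected group shrinks to a sliver of the data - two different mechanisms, one biased result.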
If the alignment problem or the risk of bias in mortgage lending is on your mind, please reach out - we'd love to hear your thoughts and experiences.