
Automated Valuation Models (AVM)

March 10, 2022

In 2010, Congress passed the Dodd-Frank Act, adding controls and regulations designed to prevent another meltdown like the one we experienced in 2008. The act has 16 titles, some of which stand as acts in their own right, including:

  • Title III: Enhancing Financial Institution Safety and Soundness Act of 2010
  • Title IX: Investor Protection and Securities Reform Act of 2010
  • Title X: Consumer Financial Protection Act of 2010
  • Title XII: Improving Access to Mainstream Financial Institutions Act of 2010

One of these acts - Title XIV: the Mortgage Reform and Anti-Predatory Lending Act - deserves special notice in light of our focus in this blog on the alignment problem. It is in this section that Dodd-Frank takes up the topic of Automated Valuation Models (AVM).

For purposes of this section, the term ‘automated valuation model’ means any computerized model used by mortgage originators and secondary market issuers to determine the collateral worth of a mortgage secured by a consumer’s principal dwelling.

The act's general provision is fairly high level, requiring that AVMs adhere to quality control standards designed to:

(1) ensure a high level of confidence in the estimates produced by automated valuation models;
(2) protect against the manipulation of data;
(3) seek to avoid conflicts of interest;
(4) require random sample testing and reviews; and
(5) account for any other such factor that the agencies listed in subsection (b) determine to be appropriate.

Last month, the CFPB issued a statement showing that it has taken up the baton of the Dodd-Frank Title XIV mandate. The statement - announcing a small business impact review - was the CFPB's first step toward an eventual pan-agency rule, and it established two things. First, it showed that the CFPB is focused on the fifth of the five statutory factors listed above. Second, it established how the CFPB views the problem domain:

Both in-person and algorithmic appraisals appear to be susceptible to bias and inaccuracy, absent appropriate safeguards.

And:

The CFPB is particularly concerned that without proper safeguards, flawed versions of these models could digitally redline certain neighborhoods and further embed and perpetuate historical lending, wealth, and home value disparities.

It's worth noting that the Bureau - in the 42-page outline of proposals linked to in their statement - paints the landscape a bit more broadly. First, they note that AVMs hold out the possibility of actually reducing discrimination - a nod to the possibility of objectivity through software. Second, they are considering exempting AVMs from their rulemaking when used by a certified or licensed appraiser who is "already subject to quality control standards under other Federal and State regulation and supervision" - a nod to objectivity through training and certification.

Despite these considerations, the bulk of the CFPB's concern rests on the risk of discrimination through the use of AVMs, and for this reason they are leaning toward establishing a new nondiscrimination quality control factor that would require "institutions to establish policies and procedures to mitigate against fair lending risk in their use of AVMs."

What would this look like? It's too early to tell, but clues come from the questions they are asking small businesses to comment on:

  • Risks and benefits of underlying AVM methodologies, e.g., repeat sales indices and mark-to-market, hedonic price models, appraiser emulation;
  • How neighborhood characteristics and amenities (e.g., nearby parks and trails, transit access, grocery stores) should be incorporated in an AVM model;
  • Ensuring unbiased input data;
  • Data accuracy and integrity;
  • Use of comparables;
  • Geographic segmentation;
  • Identifying and quantifying disparities;
  • Identifying proxies for prohibited basis characteristics;
  • Techniques to measure and monitor for potential discrimination in AVMs; and
  • Addressing historical incidences of appraisal bias that may be perpetuated through AVMs.
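To make the first bullet concrete, here is a minimal sketch of a hedonic price model, one of the AVM methodologies the CFPB asks about: the sale price is regressed on property characteristics, so each fitted coefficient implicitly prices one feature. The data, feature choices, and `estimate` helper below are illustrative assumptions, not any vendor's actual model.

```python
import numpy as np

# Synthetic training data, for illustration only.
# Columns: square footage, bedrooms, age of home (years).
features = np.array([
    [1500, 3, 20],
    [2100, 4, 5],
    [1200, 2, 40],
    [1800, 3, 15],
    [2500, 4, 2],
], dtype=float)
prices = np.array([300_000, 450_000, 210_000, 350_000, 520_000], dtype=float)

# Add an intercept column and fit ordinary least squares.
X = np.column_stack([np.ones(len(features)), features])
coef, *_ = np.linalg.lstsq(X, prices, rcond=None)

def estimate(sqft, beds, age):
    """Valuation of a new property under the fitted hedonic model."""
    return float(coef @ np.array([1.0, sqft, beds, age]))

print(round(estimate(1600, 3, 10)))
```

The regulatory questions above map directly onto this structure: which characteristics enter the feature matrix (neighborhood amenities?), how the training sales are chosen (comparables, geographic segmentation), and whether any feature correlates with a prohibited basis.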

As we've observed before, these concerns boil down to:

  1. Bias can be introduced by how you collect data
  2. Bias can be introduced by the data you don't collect
  3. Bias can be introduced by what you do with data (including the many aspects of building, training, and operating machine learning models)
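One of the CFPB's listed concerns - identifying proxies for prohibited-basis characteristics - illustrates the third point. Below is a hedged sketch of such a check: measure the correlation between each candidate model input and a protected attribute, and flag inputs that could reintroduce bias even when the attribute itself is excluded. The synthetic data, feature names, and the 0.5 cutoff are assumptions for illustration, not a regulatory standard.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Synthetic protected attribute (0/1), for illustration only.
protected = rng.integers(0, 2, size=n).astype(float)

features = {
    # Strong proxy: largely driven by the protected attribute.
    "zip_median_income": protected * 2.0 + rng.normal(0, 0.5, n),
    # Unrelated feature: pure noise.
    "square_footage": rng.normal(0, 1.0, n),
}

PROXY_THRESHOLD = 0.5  # illustrative cutoff, not a regulatory standard

# Absolute Pearson correlation of each feature with the attribute.
flagged = {
    name: abs(np.corrcoef(values, protected)[0, 1])
    for name, values in features.items()
}
proxies = [name for name, r in flagged.items() if r > PROXY_THRESHOLD]
print(proxies)  # the income feature is flagged; square footage is not
```

Correlation screens like this are only a first pass - a feature can proxy for a protected attribute through nonlinear or joint relationships that pairwise correlation misses - but they show the shape of the "policies and procedures" the CFPB may come to expect.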

We're keeping an eye on this issue as AI and the mortgage industry continue to evolve. Drop us a note - we'd love to hear your thoughts on this topic as well.

© 2024 Polygon Research. All rights reserved.