The Consumer Financial Protection Bureau (CFPB), the Civil Rights Division of the United States Department of Justice, the Federal Trade Commission, and the U.S. Equal Employment Opportunity Commission released a joint statement outlining their commitment to uphold the core American principles of fairness, equality, and justice as emerging automated systems, including those marketed as “artificial intelligence” or “AI,” become more common and increasingly affect civil rights, fair competition, consumer protection, and equal opportunity.
All four regulatory agencies have expressed concern about potentially harmful uses of automated systems and have resolved to vigorously enforce their collective authorities and to monitor the development and use of these systems. Technology marketed as “artificial intelligence,” and as removing bias from decision making, can nonetheless produce outcomes that result in unlawful discrimination. Potential sources of discrimination in automated systems include the following:
- Data and Datasets: Automated system outcomes can be skewed by unrepresentative or imbalanced datasets, datasets that incorporate historical bias, or datasets that contain other types of errors. Automated systems also can correlate data with protected classes, which can lead to discriminatory outcomes.
- Model Opacity and Access: Many automated systems are “black boxes” whose internal workings are not clear to most people and, in some cases, not even to the developer of the tool. This lack of transparency makes it difficult for developers, businesses, and individuals to know whether an automated system is fair.
- Design and Use: Developers do not always understand or account for the contexts in which private or public entities will use their automated systems. Developers may design a system on the basis of flawed assumptions about its users, relevant context, or the underlying practices or procedures it may replace.
In the joint statement, the four agencies reiterated their resolve to monitor the development and use of automated systems and to promote responsible innovation that protects the rights of American consumers. According to CFPB Director Rohit Chopra, “[t]oday’s joint statement makes it clear that the CFPB will work with its partner enforcement agencies to root out discrimination caused by any tool or system that enables unlawful decision making.” “As social media platforms, banks, landlords, employers, and other businesses that choose to rely on artificial intelligence, algorithms and other data tools to automate decision-making and to conduct business, we stand ready to hold accountable those entities that fail to address the discriminatory outcomes that too often result,” said Assistant Attorney General Kristen Clarke of the Justice Department’s Civil Rights Division.