February 5, 2021 3:37PM

Algorithmic Bias Under the Biden Administration

By Matthew Feeney and Rachel Chiu


This is the third and final entry in a series analyzing technology policy issues (the gig economy, online speech, and algorithmic bias) under the Biden administration.

Algorithmic Bias in the Public and Private Sector

Private companies, federal agencies, and law enforcement are increasingly using artificial intelligence (AI) and machine learning to evaluate information. According to the National Security Commission on Artificial Intelligence, AI refers to the “ability of a computer system to solve problems and to perform tasks that would otherwise require human intelligence.” AI-powered systems may be faster and more accurate than humans, but as a result of flawed datasets and design, they can still discriminate and exhibit bias.

AI consists of a series of algorithms: step-by-step instructions for solving a problem. Algorithmic decision-making refers to the process of feeding data into such a system to generate a score, choice, or other output. The result is then used to render a decision, such as a classification, prioritization, or sorting.
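To make this concrete, here is a minimal sketch of algorithmic decision-making in Python. The feature names, weights, and threshold are hypothetical, chosen only to illustrate how input data is reduced to a score and then to a decision:

```python
# Minimal sketch: input data -> score -> decision.
# Feature names, weights, and the threshold are hypothetical.

def credit_score(applicant: dict) -> float:
    """Combine input features into a single score (illustrative weights)."""
    return (0.5 * applicant["payment_history"]
            + 0.3 * applicant["income_stability"]
            + 0.2 * applicant["savings_rate"])

def decide(applicant: dict, threshold: float = 0.6) -> str:
    """Render a classification decision from the score."""
    return "approve" if credit_score(applicant) >= threshold else "deny"

print(decide({"payment_history": 0.9,
              "income_stability": 0.7,
              "savings_rate": 0.4}))  # -> approve (score 0.74)
```

Any bias in the weights, or in the data used to fit them, flows directly into the final decision.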

Although algorithms are inherently methodical processes, a 2019 Microsoft research study demonstrated how they can still discriminate. After being trained on Google News articles, a natural language processing program was tasked with predicting words in analogies. The program produced gender stereotypes to an alarming extent because it learned from flawed data. Although the technology itself contained no prejudice, it mimicked the human bias present in its training dataset.
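The analogy task can be reproduced with off-the-shelf tools. The sketch below uses gensim and pretrained Google News word vectors; the specific completions it returns depend on the embedding and are shown only as an illustration:

```python
# Sketch of the analogy task described above, using word vectors trained
# on Google News (requires gensim; downloads ~1.6 GB on first run).
import gensim.downloader

vectors = gensim.downloader.load("word2vec-google-news-300")

# Analogy "man is to doctor as woman is to ?", answered by vector
# arithmetic: doctor - man + woman.
print(vectors.most_similar(positive=["doctor", "woman"],
                           negative=["man"], topn=3))
# Embeddings trained on news text have been shown to return stereotyped
# completions (e.g., "nurse") for analogies like this one.
```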

Algorithmic decision-making is used for a range of purposes, including hiring, personal finance, and policing. Thus, when algorithmic bias occurs, it can have significant effects on social and economic opportunities.

This section focuses on the impact of algorithmic bias in both the private and public sectors:

  • In the private sector, AI-powered tools assist professionals in sorting and decision-making. Recruiters use automated processes to expedite applicant screening, interviewing, and selection. Algorithms are similarly used in financial services for credit risk assessment and underwriting.
  • In the public sector, police use computerized facial recognition for identification. The technology confirms a person’s identity by detecting a human face in a photo or video and analyzing its physical attributes. Accuracy varies depending on the subject’s race and gender.

In both applications, a complete and representative dataset is necessary to avoid algorithmic biases and inaccuracies. Policymakers are weighing the benefits and risks of algorithmic decision-making, while simultaneously addressing civil rights and privacy considerations.
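One way to act on that requirement is to audit a training set’s composition before deployment. The sketch below is a minimal, hypothetical check: the group labels and reference shares are invented, and a real audit would use task-appropriate reference statistics:

```python
# Minimal sketch: compare a training set's demographic composition
# against a reference population. Groups and shares are hypothetical.
from collections import Counter

training_labels = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
reference_shares = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

counts = Counter(training_labels)
total = sum(counts.values())
for group, expected in reference_shares.items():
    actual = counts[group] / total
    flag = "UNDERREPRESENTED" if actual < 0.8 * expected else "ok"
    print(f"{group}: {actual:.0%} of data vs {expected:.0%} of population ({flag})")
```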

Biden-Harris Administration

President Joe Biden and Vice President Kamala Harris support greater regulation of AI-powered systems. In their view, algorithms can be conduits for racial prejudice and can amplify inequalities. To that end, the Biden administration will focus on eliminating racial disparities perpetuated by technology.

During the campaign, President Biden promised to create a new public credit reporting and scoring division within the Consumer Financial Protection Bureau to “minimize racial disparities.” According to the Biden-Harris campaign website, he plans to address “algorithms used for credit scoring [and their] discriminatory impact... by accepting non-traditional sources of data like rental history and utility bills to establish credit.”

While serving as a U.S. senator, Vice President Harris was an outspoken critic of algorithmic bias. She co-sponsored the Justice in Policing Act and sent letters to several federal agencies about the dangers of facial recognition technology. The letters, sent to the Equal Employment Opportunity Commission, Federal Trade Commission, and the Federal Bureau of Investigation, asked officials to clarify how they were “addressing the potentially discriminatory impacts of facial analysis technologies.”

In a 2019 speech, Vice President Harris cautioned that “there is a real need to be very concerned about [artificial intelligence and machine learning]... how being built into it is racial bias.” She also noted that “unlike the racial bias that all of us can pretty easily detect when you get stopped in a department store or while you’re driving, the bias that is built into technology will not be easy to detect.”

On January 15, then-President-elect Biden named Alondra Nelson deputy director for science and society at the White House Office of Science and Technology Policy. Nelson, a sociologist who has studied the social impact of emerging technologies, has stated that “we have a responsibility to work together to make sure that our science and technology reflects us.”

Current State of Regulation

Federal Trade Commission (FTC)

While artificial intelligence and machine learning pose new challenges for existing regulatory frameworks, automated decision-making has existed for years. The FTC has enforced federal consumer protection laws, such as the Fair Credit Reporting Act (1970) and the Equal Credit Opportunity Act (1974). Both laws regulate automated decision-making systems in the financial services industry.

In recent years, the FTC has issued guidance for businesses that use algorithmic systems, including a 2016 report and a 2020 blog post.

Congressional Proposals

Several bills have been proposed in recent years to ameliorate algorithmic bias. They include the following:

  • Algorithmic Accountability Act (2019): The bill was introduced by Senators Cory Booker (D-NJ) and Ron Wyden (D-OR), along with Representative Yvette Clarke (D-NY). According to Senator Wyden, the bill would have required “companies to study the algorithms they use, identify bias in these systems and fix any discrimination or bias they find.”
  • Consumer Online Privacy Rights Act (2019): The bill, sponsored by Senator Maria Cantwell (D-WA), would have established new requirements for companies that use algorithmic decision-making to process data.
  • Justice in Policing Act (2020): The bill was sponsored by then-Senator Kamala Harris (D-CA), Senator Cory Booker (D-NJ), and Representatives Karen Bass (D-CA) and Jerrold Nadler (D-NY). It would have been the first federal restriction on facial recognition technology.
  • Facial Recognition and Biometric Technology Moratorium Act (2020): The bill was sponsored by Senators Edward Markey (D-MA) and Jeff Merkley (D-OR), along with Representatives Pramila Jayapal (D-WA) and Ayanna Pressley (D-MA). It would have established a five-year moratorium on police use of facial recognition technology, and it is set to be reintroduced this year.

State Proposals

Lawmakers in Illinois, New Jersey, Washington, and California have also proposed bills to regulate algorithmic systems.

In 2017, New York City passed Local Law 49, the first law in the United States to tackle algorithmic bias and discrimination. Local Law 49 established the Automated Decision Systems Task Force to monitor the city’s use of algorithmic decision-making and provide recommendations. The twenty-member task force has faced criticism from legal experts for its inability to fully define “automated decision system.” Members have also blamed city officials for denying them access to critical information needed to make recommendations. New York University professor Julia Stoyanovich told The Verge that she “would not have signed for this task force if [she] knew [it] was just a formal sort of exercise.”

Jurisdictions across the country have banned government use of facial recognition. In Illinois, a 2008 law, the Biometric Information Privacy Act (BIPA), has been used to curtail facial recognition. Illinois residents sued Clearview AI under BIPA for building facial recognition software from billions of social media photos scraped without permission. The company subsequently canceled all contracts in the state and promised to work exclusively with government entities.

Addressing Algorithmic Bias

Policymakers can take steps to help locate and mitigate algorithmic bias. Because the effects of such discrimination vary across sectors, ethical and regulatory responses should be proportionate to the impact.

AI has the potential to positively impact personal finance and employment. At this juncture, companies face competing legal obligations that make bias even harder to detect. Laws such as the Civil Rights Act of 1964 and the Equal Credit Opportunity Act incentivize companies to ignore protected class characteristics such as age, race, and sex, even though this information would improve an algorithm’s accuracy. Such requirements were written with human bias in mind and do not effectively attenuate algorithmic bias. To address discrimination in AI-powered tools, policymakers should reevaluate existing regulations and enable companies to train their algorithms with full and complete information.
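The tension is concrete: standard disparate-impact checks require tabulating outcomes by protected group, which is impossible if the attribute cannot be collected. Below is a minimal sketch of one such check, the four-fifths rule used in U.S. employment law; the outcome counts are hypothetical:

```python
# Sketch of a disparate-impact check (the "four-fifths rule"): a group's
# selection rate below 80% of the highest group's rate is a red flag.
# Outcome counts are hypothetical.
outcomes = {
    "group_a": {"selected": 48, "total": 100},
    "group_b": {"selected": 30, "total": 100},
}

rates = {g: d["selected"] / d["total"] for g, d in outcomes.items()}
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = " <- below 0.80 threshold" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f}{flag}")
```

Running a check like this presupposes that the protected attribute was recorded in the first place, which is exactly what the laws above discourage.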

Algorithmic bias in law enforcement facial recognition tools presents greater challenges. Facial recognition technology is trained on billions of photos and videos, often repurposed and used without consent. The technology is relatively underdeveloped, having emerged within the past few years as the software of choice for police despite major flaws. (Accuracy depends on the quality of the image as well as on the subject.) Last year, three men were wrongfully arrested by police because facial recognition software misidentified them. Given the significant risk posed by facial analysis tools, more transparency and oversight are needed to prevent abuse and civil liberties violations.
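Auditing that kind of disparity is straightforward in principle: measure the matcher’s error rate separately for each demographic group. The sketch below is hypothetical throughout; the match function is a toy stand-in for a real face matcher, and the record layout is invented:

```python
# Sketch: compute a matcher's false-match rate per demographic group.
# Each pair is (image_a, image_b, same_person, group); match() is a
# hypothetical stand-in for a real face recognition system.
from collections import defaultdict

def false_match_rate(pairs, match):
    errors, totals = defaultdict(int), defaultdict(int)
    for a, b, same_person, group in pairs:
        if not same_person:          # only non-mated pairs can false-match
            totals[group] += 1
            if match(a, b):
                errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data: "images" are integers; declare a match when they are close.
toy_pairs = [
    (1, 2, False, "group_a"), (1, 9, False, "group_a"),
    (3, 4, False, "group_b"), (5, 5, False, "group_b"),
]
print(false_match_rate(toy_pairs, match=lambda a, b: abs(a - b) <= 1))
# -> {'group_a': 0.5, 'group_b': 1.0}: error rates can differ by group.
```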

The bias issues associated with facial recognition systems have prompted calls to ban police use of the technology, and in a handful of jurisdictions, lawmakers have implemented such bans. While the potential abuse and misuse of facial recognition raise significant civil liberties concerns, outright bans of the technology are not the best policy.

Rather than ban facial recognition, lawmakers should make police use of the technology contingent on a set of policies that protect civil liberties: prohibitions on real-time capability, accuracy requirements, and restrictions on what data can be queried. No police department in the United States has yet implemented these policies. Although the vast majority of policing in the United States is handled at the state and local level, the federal government can nonetheless condition grants on best practices related to surveillance technology, including facial recognition systems.
