Published: Thursday, May 23, 2024
DENVER – Of seven similar bills introduced this year to prevent AI from discriminating when it makes consequential decisions — such as who gets hired or how much money someone can borrow for a house or medical care — only one passed. Colorado Gov. Jared Polis reluctantly signed the bill into law Friday.
Colorado’s bill, like those that failed in Washington, Connecticut, and elsewhere, faced battles on many fronts: pushback from civil rights groups, legislators wary of wading deep into a technology few understand, and governors worried about being the odd state out and scaring off AI startups.
Polis signed Colorado’s bill “with reservations,” saying he was concerned about regulations stifling AI innovation. The law has a two-year runway before it takes effect, leaving time for amendments.
Polis wrote: “I encourage the legislators to make significant improvements before this takes effect.”
Colorado’s measure, like the six similar bills, is complex. But all share a core requirement: companies would have to assess the risk of discrimination posed by their AI and inform customers when AI has been used to help make a consequential decision about them.
These bills are distinct from the more than 400 AI-related bills debated in statehouses this year. Most of those target one slice of AI, such as deepfakes used in elections or to make pornography.
The seven bills take a more ambitious approach, addressing discrimination across major industries and tackling one of the technology’s most complex and pervasive problems.
“We have no visibility into the algorithms used, whether or not they work, or whether we are discriminated against,” said Rumman Chowdhury, who formerly led Twitter’s AI ethics team.
Although anti-discrimination laws are already on the books, researchers who study AI discrimination say it is an entirely different animal — and one the U.S. has already fallen behind on regulating.
Christine Webber, a civil rights attorney who has handled discrimination class actions against Boeing and Tyson Foods, said computers can make biased decisions at a scale no human could. Webber is now nearing final approval of one of the nation’s first class-action settlements over AI discrimination.
“It is not true that the old system was free of bias,” Webber said. “But any one person can only read so many resumes in a day. You can only make so many biased decisions per day, and the computer can do it rapidly, across large numbers of people.”
There’s a high chance that AI will evaluate your application when you apply for a new job, an apartment, or a home loan — advancing it to the next round, scoring it, or filtering it out. According to the Equal Employment Opportunity Commission, up to 83% of employers use algorithms to assist in hiring.
AI doesn’t inherently know what to look for when evaluating a resume, so it is taught by studying past resumes — and the data used to train the algorithms can bake in bias.
Amazon, for instance, developed a hiring algorithm trained on past resumes, which came largely from male applicants. The tool downgraded resumes that contained the word “women’s” or listed women’s colleges, because those were underrepresented in the historical data it learned from. Amazon ultimately abandoned the project.
Webber’s lawsuit claims that a system for scoring rental applications disproportionately assigned lower scores to Black and Hispanic applicants. Separately, an AI system designed to assess medical care needs was found to have overlooked Black patients.
A few studies and lawsuits have provided a peek under the hood, but most algorithms remain hidden. Pew Research polling shows that Americans are largely unaware these tools are being used, and companies generally are not required to disclose when AI has been used.
“Just pulling back the curtain so we can see who is really doing the assessing and what tool is being used is a big, huge step,” Webber said. “The laws are useless if we don’t have some basic information.”
Colorado’s law and a similar bill still alive in California are both trying to change this. The two measures largely mirror Connecticut’s flagship proposal, which was defeated by the governor’s opposition.
Colorado’s law requires companies that use AI to help make consequential decisions about Americans to assess their systems annually for bias. They must also implement an internal oversight program, disclose discrimination to the state attorney general, and notify customers when AI was used to make a decision about them.
Academics and labor unions worry that if companies are expected to oversee themselves, discrimination won’t be caught until after a system has already caused harm. Companies, for their part, fear that forced transparency in this new, highly competitive field will expose trade secrets and invite litigation.
AI companies also sought, and received, a provision allowing only the attorney general — not private citizens — to bring lawsuits under the law. Enforcement details are left to the attorney general’s office.
Although larger AI companies are more or less on board with the proposals, a small group of Colorado-based AI firms said the requirements may be manageable for behemoth companies but not for budding startups.
“We’re in a new era, one of primordial soup,” said Logan Cerkovnik, founder of Thumper.ai, referring to the state of AI. “Overly restrictive legislation that forces us into definitions and restricts our use of the technology will be detrimental to innovation.”
The firms agreed that it’s critical to address what is formally referred to as “algorithmic discrimination,” but said the bill as written falls short of that goal. They proposed instead strengthening existing anti-discrimination laws.
Chowdhury worries that lawsuits are too expensive and time-consuming to be effective enforcement tools, and that laws should go beyond even what Colorado has proposed. She and other academics have instead proposed an accredited, independent organization that could test AI algorithms for bias.
“You can deal with one person who is biased or discriminatory,” Chowdhury said. “What happens when it is embedded in the institution as a whole?”