Artificial intelligence touches almost every aspect of our lives, from mobile banking and online shopping to social media and real-time traffic maps. But what happens when artificial intelligence is biased? What if it makes mistakes on important decisions, from who gets a job interview or a mortgage to who gets arrested and how much time they ultimately serve for a crime?
“These everyday decisions can greatly affect the trajectories of our lives and, increasingly, they’re being made not by people but by machines,” said UC Davis computer science professor Ian Davidson.
A growing body of research, including Davidson’s, indicates that bias in artificial intelligence can lead to biased outcomes, especially for minority populations and women.
Facial recognition technologies, for example, have come under increasing scrutiny because they’ve been shown to detect white faces better than the faces of people with darker skin. They also do a better job of detecting men’s faces than women’s. Mistakes in these systems have been implicated in a number of false arrests due to mistaken identity.
In fact, concern about bias in facial recognition technologies led to a number of bans on their use. In June 2019, San Francisco was among the first cities in the nation to ban the use of facial recognition technologies by the police and other city departments. The State of California followed suit in January 2020, imposing a three-year moratorium on the use of facial recognition technology in police body cameras.
Has racial profiling gone digital?
A number of technologies take facial recognition a step further, analyzing and interpreting facial attributes and other data to make risk assessments and identify threats or unusual behavior. In the world of data surveillance, this is called “anomaly detection.” Increasingly, these technologies are deployed by law enforcement, airport security, and retail and event security firms.
In a recent study, Davidson and Ph.D. student Hongjing Zhang demonstrated that these types of anomaly detection algorithms are more likely to flag African Americans and darker-skinned males as anomalies.
“Since anomaly detection is often applied to people who are then suspected of unusual behavior, ensuring fairness becomes paramount,” Davidson said. “If one of these algorithms is used for surveillance purposes, it’s much more likely to identify people of color. If a white person walks in, it would not be likely to trigger an anomalous event. If a black person walks in, it would.”
“The machine is not biased. It has no moral compass. It’s just seen more white faces in the data it was trained on before and so it’s learned to associate that with normality.” – Ian Davidson, UC Davis computer science professor
That sounds a lot like computer-aided racial profiling.
“But it’s completely unintentional,” Davidson said. “The machine is not biased. It has no moral compass. It’s just seen more white faces in the data it was trained on before and so it’s learned to associate that with normality.”
Ensuring that AI is fair and free from bias is complex, he explained. His work shows that adding more people of color to the data the machine learns from helps, but it does not eliminate the issue.
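The dynamic Davidson describes, where a model trained mostly on one group learns to treat that group as "normal," can be sketched in a few lines. The example below is a toy illustration, not his actual method: a simple one-dimensional detector is fit on data where one group supplies 95 percent of the examples, and it ends up flagging typical members of the underrepresented group as anomalies. The group names, feature values, and threshold are all invented for the sketch.

```python
import random
import statistics

random.seed(0)

# Toy 1-D "feature" for two groups: group A clusters near 0.0, group B near 3.0.
# The training set mimics an imbalanced data pipeline: 95% group A, 5% group B.
train = [random.gauss(0.0, 1.0) for _ in range(950)] + \
        [random.gauss(3.0, 1.0) for _ in range(50)]

# The detector only learns "what is typical" from the training data.
mu = statistics.fmean(train)
sigma = statistics.stdev(train)

def is_anomaly(x, threshold=2.0):
    """Flag points far (in standard deviations) from the majority-dominated mean."""
    return abs(x - mu) / sigma > threshold

# A typical group-A point passes; a typical group-B point is flagged,
# even though nothing about group B is actually "anomalous".
print(is_anomaly(0.0))  # False: near the learned notion of normal
print(is_anomaly(3.5))  # True: underrepresented group reads as an outlier
```

Note that nothing in the code refers to group membership at all; the disparity emerges purely from who was underrepresented in the training data, which is why adding more representative data helps but does not by itself guarantee fairness.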
Bias reflects an unjust world
Examples abound of technologies that don’t perform accurately for people with darker skin. In February, the FDA warned that the pulse oximeters used to monitor oxygen saturation levels for COVID-19 patients may be less accurate for people with darker skin pigmentation. A study also found that people of color were more likely to be hit by driverless cars because the object detection systems used to recognize pedestrians don’t work as well on people with darker skin.
“Bias reflects a world that is unjust,” said computer science professor Patrice Koehl. He teaches an ethics course that’s required for all UC Davis students majoring in computer science and engineering. Part of the course focuses specifically on bias in AI and other technologies.
“I want students to have an awareness of the problem and to understand why there are biases in our decisions,” Koehl said. “For example, if all your colleagues are white male, you’re unlikely to discuss problems associated with machine recognition of dark skin.”
“The danger with bias comes from the fact that we consider AI as a system that can make decisions.” – Patrice Koehl, UC Davis computer science professor
This lack of diversity in the workforce has been a persistent problem in the technology sector. Nearly 80 percent of employees at Apple, Facebook, Google and Microsoft are male, and there’s been little growth in Black, Latinx and Native representation since 2014, according to Mozilla’s 2020 Internet Health Report.
“The danger with bias comes from the fact that we consider AI as a system that can make decisions,” Koehl said. “You want that decision to be as informed as possible. If the information you provide is wrong or biased, the decision will be wrong.”
When it comes to developing future technologies, Koehl is optimistic that today’s students will do a better job. “The problem associated with AI was created in the last 20 years, partly by software engineers. If those engineers were able to create such a big problem, my hope is that the next generation of engineers will spend just as much time looking at the problems and identifying solutions,” he said.
Unraveling the tangled roots of bias
While there’s growing awareness of bias in artificial intelligence, there’s no simple solution. Bias can be introduced in a number of ways, beyond the software engineer developing a new technology. Artificial intelligence and machine learning algorithms rely on data, which is not always representative of minority populations and women. That’s because, behind the data, the decisions about which data to collect and how to use it are still made by people.
“We cannot address bias and unfairness in AI without addressing the unfairness of the whole data pipeline system,” said Thomas Strohmer, director of UC Davis’ Center for Data Science and Artificial Intelligence Research, or CeDAR.
CeDAR is a hub for research activity focused on using AI for social good, from better healthcare to precision agriculture and combating climate change. Fighting bias and standing up for privacy is a natural part of that mission, Strohmer said.
“Things like racial profiling existed before these tools. AI just enhances an existing bias.” – Thomas Strohmer, director of CeDAR
“Things like racial profiling existed before these tools. AI just enhances an existing bias. If you feed a biased data set into an algorithm, the result will be a biased algorithm.”
Because new technologies are often adopted at scale, Strohmer noted, biases can quickly become widespread, and they’re not always easy to detect. To determine if there’s bias in a data set or an algorithm, you need access to the data and the algorithm.
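Strohmer's point about access is concrete: given an algorithm's decisions and each subject's group, a basic disparity check takes only a few lines. The sketch below computes a demographic-parity gap, one of many possible fairness metrics; the audit records and group labels are made up for illustration.

```python
# Hypothetical audit records: (group, was_flagged_by_algorithm).
# Without access to records like these, this check is impossible to run.
decisions = [
    ("A", False), ("A", False), ("A", True), ("A", False),
    ("B", True), ("B", True), ("B", False), ("B", True),
]

def flag_rate(group):
    """Fraction of a group's cases the algorithm flagged."""
    flags = [flagged for g, flagged in decisions if g == group]
    return sum(flags) / len(flags)

# Demographic-parity gap: difference in flag rates between groups.
gap = flag_rate("B") - flag_rate("A")
print(f"group A flag rate: {flag_rate('A'):.2f}")  # 0.25
print(f"group B flag rate: {flag_rate('B'):.2f}")  # 0.75
print(f"demographic-parity gap: {gap:.2f}")        # 0.50
```

The arithmetic is trivial; the hard part in practice is exactly what Strohmer identifies: getting access to the decisions, the group labels, and the algorithm itself.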
Facial recognition, video analytics, anomaly detection and other kinds of pattern matching are being used in law enforcement, often out of public view.
AI in the shadows
Elizabeth Joh, a professor at the UC Davis School of Law, says this hidden nature of AI is a major concern.
“If there are problems in law enforcement, they are increasingly difficult for people to see,” said Joh, who has written extensively about technology, policing and bias. “Most people understand the utility of a firearm and a badge. If someone experiences excessive force by the police, we intuitively understand that. With technology, we might not even recognize that the problem exists. You might never know unless you become the target of that interaction.”
For this reason, she said accountability is crucial. She points to growing experimentation with AI tools by police departments in towns and cities across the country, often with little consideration of the long-term consequences.
“We need to realize that these tools can quickly get out of hand or can be used in ways that are unexpected and have socially harmful consequences or disparate consequences,” Joh said. “A certain amount of bias has always existed in policing. Hidden technologies can exacerbate the problem immensely.”
Joh added it’s not too late for police departments and other organizations to take a step back and ask the most fundamental question about the use of AI: Should we be using these tools at all?
Harnessing AIās power for good
“Education, training and promoting diversity are key to addressing how technology is perpetuating bias,” said Pamela Reynolds, associate director of the UC Davis DataLab: Data Science and Informatics. “Just as AI is contributing to these persistent societal problems, it can also be empowering for uncovering bias and finding solutions.”
In this regard, the DataLab leads by example. It supports a diverse faculty and affiliates program and hosts events for women and underrepresented individuals in data science.
WHAT DO DATING TECHNOLOGY AND ALZHEIMER’S HAVE IN COMMON?
UC Davis neuropathologist Brittany Dugger, along with researchers at UC San Francisco, has found a way to teach a computer to precisely detect one of the hallmarks of Alzheimer’s disease in human brain tissue, delivering a proof of concept for a machine-learning approach capable of automating a key component of Alzheimer’s research.
“In data science, we are working with large sets of information and these inevitably reflect the structural inequalities of our society,” said Reynolds, who is an experimental ecologist by training. “Without a critical and inclusive approach to data and the AI tools it enables, we run the risk of reproducing past injustices.”
The DataLab helps students and researchers understand the complexity of large data systems and how technologies and computational methods work. This has added value to a number of research projects, including work to improve early diagnosis of Alzheimer’s in women and communities of color being conducted by Brittany Dugger, assistant professor of pathology and laboratory medicine, and her team at UC Davis Health.
Trust will require transparency
The trend toward greater use of artificial intelligence shows no signs of abating, whether it’s to improve healthcare, recommend movies via a streaming service, conduct surveillance, or any of a myriad of other uses.
For that reason, transparency in AI is more important than ever. According to Davidson, that will require fairness, explainability and privacy.
“As machines replace humans and make more decisions, we need to be able to trust them,” he said. “That includes understanding exactly how machine algorithms are processing information and how decisions are being made.”
Media Resources
Catherine Kenny, °µTV News and Media Relations, 530-752-3140, cmkenny@ucdavis.edu