Bias can be introduced into hardware and software algorithms in a variety of ways.
By Steven Astorino, Mark Simmonds, and Jean-François Puget
Editor's Note: This article is excerpted from chapter 3 of Artificial Intelligence: Evolution and Revolution.
Pre-existing Bias
Pre-existing bias in an algorithm is a consequence of underlying social and institutional ideologies. These ideologies may influence or create personal biases in individual designers or programmers. Such prejudices can be explicit and conscious, or implicit and unconscious. Poorly selected input data will also influence the outcomes machines produce. Encoding pre-existing bias into software preserves social and institutional bias, which, without correction, may be replicated in all future uses of that algorithm.
An example of this form of bias is the program written for the 1981 British Nationality Act (BNA), designed to automate the evaluation of new UK citizens. The program reflects the tenets of the law, which state that a man is the father of only his legitimate children, whereas a woman is the mother of all her children, legitimate or not. In attempting to transfer this logic into an algorithmic process, the program inscribed the tenets of the BNA into its algorithm, where they could persist even if the act were eventually repealed.
Technical Bias
Facial recognition software used in conjunction with surveillance cameras has been found to display bias in recognizing faces of different races.
Technical bias emerges from limitations of a program, computational power, the program's design, or other constraints on the system. Such bias can also be a constraint of the design itself: for example, a search engine that shows three results per screen privileges the top three results slightly over the next three, much as an airline price display does.
Another case is software that relies on randomness for fair distributions of results. If the random-number generation mechanism is not truly random, it can introduce bias—for example, by skewing selections toward items at the end or beginning of a list.
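As a minimal illustration of how a flawed randomness mechanism can skew results toward one end of a list, here is a hypothetical Python sketch, not drawn from any system discussed in this article. It compares a naive selection that maps a random byte onto a list with the modulo operator against an unbiased draw; because 256 is not a multiple of the list length, the earliest items are chosen slightly more often.

```python
import random
from collections import Counter

ITEMS = [f"item_{i}" for i in range(7)]   # 7 items: 256 % 7 != 0, so modulo skews
TRIALS = 100_000

def biased_pick(items):
    # Naive approach: map a random byte onto the list with modulo.
    # Because 256 is not a multiple of len(items), low indices are
    # selected slightly more often than high ones.
    byte = random.randrange(256)
    return items[byte % len(items)]

def fair_pick(items):
    # random.choice draws uniformly over the list.
    return random.choice(items)

biased = Counter(biased_pick(ITEMS) for _ in range(TRIALS))
fair = Counter(fair_pick(ITEMS) for _ in range(TRIALS))

for item in ITEMS:
    print(f"{item}: biased={biased[item]:6d}  fair={fair[item]:6d}")
```

Over many trials the early items accumulate a small but systematic surplus, which is exactly the kind of quiet skew described above.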
A decontextualized algorithm uses unrelated information to sort results. For example, a flight-pricing algorithm that sorts results alphabetically would be biased in favor of an airline whose name begins with the letter "A" over one beginning with the letter "U." The opposite may also apply, in which results are evaluated in contexts different from those in which they were collected. Data may be collected without crucial external context: for example, when facial recognition footage from surveillance cameras is evaluated by remote staff in another country or region, or by algorithms with no awareness of what takes place beyond the camera's field of vision. This could create an incomplete understanding of a crime scene, potentially mistaking bystanders for those who committed the crime.
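To make the decontextualized-sorting point concrete, the short sketch below uses invented airlines and prices. Sorting alphabetically rewards the airline whose name starts with "A" even though it is the most expensive option, while sorting by price, the attribute the user actually cares about, does not.

```python
# Hypothetical flight results: (airline, price in USD)
flights = [
    ("Universal Air", 189),
    ("Aardvark Airlines", 425),
    ("Meridian Jet", 240),
]

# Decontextualized ordering: alphabetical by airline name.
by_name = sorted(flights, key=lambda f: f[0])

# Context-aware ordering: by the attribute the user cares about (price).
by_price = sorted(flights, key=lambda f: f[1])

print("Alphabetical:", [f[0] for f in by_name])   # Aardvark first despite costing most
print("By price:    ", [f[0] for f in by_price])  # Universal first, as the cheapest
```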
Lastly, technical bias can arise from attempts to formalize decisions into concrete steps on the assumption that human behavior works the same way. For example, software might weigh data points to determine whether a defendant should accept a plea bargain while ignoring the impact of emotion on a jury. Plagiarism-detection software compares student-written texts to information found online and returns a probability score that the student's work is copied. Because the software compares long strings of text, it is more likely to flag non-native speakers of English than native speakers, as the latter group might be better able to change individual words, break up strings of plagiarized text, or obscure copied passages through synonyms. Because these technical constraints make evasion easier for native speakers, the software is more likely to single out non-native speakers for plagiarism while letting native speakers slip through.
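The plagiarism example rests on matching long strings of text. The sketch below is a simplified stand-in for such software, not any real product's method: it scores a submission by the fraction of its word 5-grams that appear verbatim in a source. A copied passage scores 1.0, while a lightly paraphrased one, with just a few words swapped, scores far lower, which is why fluent writers can evade detection.

```python
def ngrams(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(source, submission, n=5):
    # Fraction of the submission's word n-grams found verbatim in the source.
    src, sub = ngrams(source, n), ngrams(submission, n)
    return len(src & sub) / len(sub) if sub else 0.0

source = ("algorithmic bias describes systematic and repeatable errors "
          "that create unfair outcomes for particular groups of users")

verbatim = ("algorithmic bias describes systematic and repeatable errors "
            "that create unfair outcomes for particular groups of users")

# A fluent writer swaps a few words; most 5-grams no longer match exactly.
paraphrased = ("algorithmic bias describes consistent and repeatable mistakes "
               "that produce unfair outcomes for particular groups of users")

print(overlap_score(source, verbatim))     # 1.0: flagged as copied
print(overlap_score(source, paraphrased))  # much lower: likely to evade detection
```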
Emergent Bias
Emergent bias is the result of the use of, and reliance on, algorithms in new or unanticipated contexts. Algorithms may not have been adjusted to account for new forms of knowledge, such as new drugs or medical breakthroughs, new laws, new business models, or shifting cultural norms. This may result in groups being excluded through technology, without a clear sense of who is responsible for their exclusion. Similarly, problems may emerge when training data (the samples "fed" to a machine, from which it models its conclusions) do not align with the contexts an algorithm encounters in the real world.
Additional emergent biases include:
Correlations
Unpredictable correlations can emerge when large datasets are compared to each other. For example, data collected about web-browsing patterns may align with signals marking sensitive data (such as race or sexual orientation). Selecting users according to those behaviors or browsing patterns can have an effect almost identical to discriminating directly on race or sexual orientation. In other cases, the algorithm might draw conclusions from correlations without being able to understand them. For example, a triage program might give lower priority to asthmatics who had pneumonia than to asthmatics who did not. The algorithm might do this because it simply compares survival rates: asthmatics with pneumonia are actually at the highest risk, and for that very reason hospitals tend to give them the best and most immediate care. Their recorded survival rates therefore look favorable, and an algorithm that lacks this context mistakes aggressive treatment for low underlying risk.
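A hedged sketch of the proxy effect described above: the data below is entirely synthetic, and the "segment" feature stands in for any behavioral signal such as a browsing-pattern cluster. The scoring function never sees the sensitive attribute, yet because the proxy correlates strongly with it, the approval rates for the two groups end up sharply different.

```python
import random

random.seed(0)

# Synthetic population: the sensitive attribute is never shown to the scorer,
# but "segment" (e.g., a browsing-pattern cluster) correlates strongly with it.
people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    segment = group if random.random() < 0.9 else ("B" if group == "A" else "A")
    people.append({"group": group, "segment": segment})

def score(person):
    # The scorer only sees the proxy feature, not the sensitive attribute.
    return 1 if person["segment"] == "A" else 0

def approval_rate(group):
    members = [p for p in people if p["group"] == group]
    return sum(score(p) for p in members) / len(members)

print("Approval rate, group A:", round(approval_rate("A"), 3))  # roughly 0.9
print("Approval rate, group B:", round(approval_rate("B"), 3))  # roughly 0.1
```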
Unanticipated Uses
Emergent bias can occur when an algorithm is used by unanticipated audiences. For example, machines may require that users can read, write, or understand numbers, or relate to an interface using metaphors that they do not understand. These exclusions can become compounded as biased or exclusionary technology is more deeply integrated into society.
Apart from exclusion, unanticipated uses may emerge from end-users relying on the software rather than on their own knowledge. For example, the designers of an immigration system might have access to legal expertise beyond that of the end-users in immigration offices, whose understanding of both the software and immigration law would likely be unsophisticated. The agents administering the questions might rely entirely on the software, which could exclude alternative pathways to citizenship, and continue using it even after new case law and legal interpretations have rendered the algorithm outdated. As a result of designing the algorithm for users assumed to be legally savvy about immigration law, it might indirectly favor applicants who fit the narrow set of legal criteria encoded in the algorithm rather than the broader criteria of a country's immigration law.
Feedback Loops
Emergent bias may also create a feedback loop, or recursion, if data collected for an algorithm results in real-world responses that are fed back into the algorithm. For example, simulations of predictive policing software might suggest an increased police presence in neighborhoods with larger ethnic-minority populations, based on crime data reported by the public. The simulation might show that the public reports crime based on the sight of police cars, regardless of what the police are doing. Because the simulation incorporates police-car sightings into its crime predictions, it could in turn assign an even larger increase in police presence to those neighborhoods. This could lead to race becoming a factor in arrests, and such feedback loops could reinforce and perpetuate racial discrimination in policing.
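The feedback loop is easy to reproduce in a toy simulation. The sketch below is purely illustrative: the two neighborhoods have identical true crime, and the numbers for how patrol presence generates extra reports are invented. The only point is that reallocating patrols toward whichever neighborhood reports more crime lets a small initial imbalance grow until one neighborhood absorbs nearly all patrols.

```python
# Two neighborhoods with identical true crime (100 incidents each per period).
TRUE_CRIME = 100
BASELINE_REPORT_RATE = 0.5   # fraction of true crime reported regardless of patrols
REPORTS_PER_PATROL = 4       # extra reports generated around each visible patrol
TOTAL_PATROLS = 20

patrols = {"north": 11, "south": 9}   # small initial imbalance

for period in range(8):
    reports = {
        n: TRUE_CRIME * BASELINE_REPORT_RATE + REPORTS_PER_PATROL * patrols[n]
        for n in patrols
    }
    # Greedy reallocation: move one patrol from the "quieter" neighborhood
    # to the one with more reported crime.
    hi = max(reports, key=reports.get)
    lo = min(reports, key=reports.get)
    if patrols[lo] > 0:
        patrols[hi] += 1
        patrols[lo] -= 1
    print(period, reports, patrols)
```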
Recommender systems such as those used to recommend online videos or news articles can also create feedback loops. When users click on content that is suggested by algorithms, it influences the next set of suggestions. Over time, this may lead to users entering a “filter bubble” and being unaware of important or useful content.
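A similarly hedged toy model of a recommender feedback loop, with all numbers invented: topics the user clicks are weighted more heavily in future recommendations, so the slate gradually narrows toward a single topic, the filter bubble in miniature.

```python
import random

random.seed(1)

topics = ["politics", "sports", "science", "cooking"]
weights = {t: 1.0 for t in topics}       # initial recommendation weights
user_pref = {"politics": 0.7, "sports": 0.1, "science": 0.1, "cooking": 0.1}

for step in range(200):
    # Recommend a topic in proportion to current weights.
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    # The user clicks with probability given by their (fixed) preference.
    if random.random() < user_pref[shown]:
        weights[shown] += 0.5            # clicks feed back into future rankings

total = sum(weights.values())
print({t: round(weights[t] / total, 2) for t in topics})  # one topic dominates
```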
Impact
What are the ways that these algorithmic biases can affect society?
Commercial Influences
Corporate algorithms could be skewed to invisibly favor financial arrangements or agreements between companies, without the knowledge of users who may assume the algorithm is impartial. For example, an airline might create a flight-finding algorithm. The software might present a range of flights from various airlines to customers but weight factors that boost its own flights, regardless of price or convenience.
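A hypothetical sketch of such a skewed ranking (the airlines, prices, and weights are invented): an undisclosed bonus for the operator's own flights sits inside an otherwise ordinary scoring function, so the ranking looks neutral while quietly favoring the host airline.

```python
OWN_AIRLINE = "HostAir"  # hypothetical airline operating the search site

flights = [
    {"airline": "HostAir",     "price": 320, "duration_hr": 6.5},
    {"airline": "BudgetWings", "price": 210, "duration_hr": 6.0},
    {"airline": "SkyLink",     "price": 250, "duration_hr": 5.5},
]

def score(flight):
    # Lower is better: price and duration, as a customer would expect...
    base = flight["price"] + 20 * flight["duration_hr"]
    # ...plus an invisible discount for the operator's own flights.
    if flight["airline"] == OWN_AIRLINE:
        base -= 200
    return base

for f in sorted(flights, key=score):
    print(f["airline"], f["price"], f["duration_hr"])
# HostAir ranks first despite being the most expensive option.
```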
Advertising-funded search engines might be inherently biased toward the advertisers and away from the needs of the consumers. This bias could be considered an “invisible” manipulation of the user.
Voting Behavior
Undecided voters may be swayed if an algorithm, with or without intent, boosts page listings for a rival candidate. If intentionally manipulated, this could create a digital gerrymandering effect in elections, in which an intermediary selectively presents information to advance its own agenda rather than to serve its users.
Gender Discrimination
Some sites may recommend male variations of women’s names in response to search queries while not making similar recommendations in searches for male names. For example, “Andrea” might bring up a prompt asking if users meant “Andrew,” but queries for “Andrew” might not ask if users meant to find “Andrea.”
Department stores might gather data points to infer when women customers are pregnant, even if those women have not announced it, and then share that information with marketing partners. Because the data has been predicted rather than directly observed or reported, the company may have no legal obligation to protect the privacy of those customers.
Racial and Ethnic Discrimination
Algorithms have been criticized as a method for obscuring racial prejudices in decision-making. Because of how certain races and ethnic groups were treated in the past, data can often contain hidden biases. For example, people of a particular race or ethnicity might receive longer sentences than people of a different race who committed the same crime. This could potentially mean that a system amplifies the original biases in the data.
One example is the use of risk assessments in criminal sentencing and parole hearings. Judges could be presented with an algorithmically generated score intended to reflect the risk that a prisoner will repeat a crime. The nationality of a criminal’s father might be a consideration in those risk assessment scores, potentially creating an unfair bias toward one or more races.
Image-identification algorithms have incorrectly identified some people as gorillas. Image-recognition algorithms in cameras might incorrectly conclude that some ethnic groups are deliberately squinting or making silly faces simply because the algorithms don’t recognize differences in facial features across ethnic minorities. Such examples are the product of bias in biometric datasets. (Biometric data is drawn from aspects of the body, including racial features either observed or inferred, which can then be transferred into data points.) Speech recognition technology can have different accuracies depending on a user’s accent. This may be caused by the lack of training data for speakers who have that accent.
Biometric data about race may also be inferred, rather than observed. For example, names commonly associated with a particular race could be more likely to yield search results implying arrest records, regardless of whether there is any police record of that individual’s name.
Surveillance
Surveillance camera software may be considered inherently political because it requires algorithms to distinguish normal from abnormal behaviors and to determine who belongs in certain locations at certain times. The ability of such algorithms to recognize faces across a racial spectrum might be limited by the racial diversity of images in their training database; if the majority of photos belong to one race or gender, the software is better at recognizing other members of that race or gender.
The software may identify men more frequently than women, older people more frequently than the young, or one race more than another.
Depending on its use, facial recognition software may exhibit different biases when trained on criminal databases than when trained on non-criminal data.
Lack of Transparency
Commercial algorithms are proprietary and may be treated as trade secrets. Treating algorithms as trade secrets protects companies such as search engines, where a transparent algorithm might reveal tactics for manipulating search rankings. This makes it difficult for researchers to conduct interviews or analysis to discover how algorithms function. Critics suggest that such secrecy can also obscure possible unethical methods used in producing or processing algorithmic output.
Complexity
Algorithmic processes are complex, often exceeding the understanding of the people who use them. Large-scale operations may not be understood even by those involved in creating them. The methods and processes of contemporary programs are often obscured by the inability to know every permutation of a code’s input or output.
Black-boxing is a process in which scientific and technical work is made invisible by its own success. When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become. Others have critiqued the black-box metaphor, suggesting that current algorithms are not one black box, but a network of interconnected ones.
Large teams of programmers may operate in relative isolation from one another and be unaware of the cumulative effects of small decisions within connected, elaborate algorithms. Not all code is original; it may be borrowed from other libraries, creating a complicated set of relationships between data processing and data input systems.
Additional complexity occurs through machine learning (ML) and the personalization of algorithms based on user interactions such as clicks, time spent on site, and other metrics. These personal adjustments can confound general attempts to understand algorithms. One unidentified streaming radio service reported that it used five unique music-selection algorithms, chosen for each user based on their behavior. This creates different experiences of the same streaming service for different users, making it harder to understand what these algorithms do. Companies also run frequent A/B tests to fine-tune algorithms based on user response. For example, a search engine might run millions of subtle variations of its service per day, creating different experiences of the service between each use and/or user.
Lack of Data About Sensitive Categories
A significant barrier to tackling bias in practice is that categories, such as the demographics of individuals protected by anti-discrimination law, are often not explicitly considered when collecting and processing data. In some cases, there is little opportunity to collect this data explicitly, such as in device fingerprinting, ubiquitous computing, and the Internet of Things. In other cases, the data controller may not wish to collect such data for reputational reasons, or because it represents a heightened liability and security risk. It may also be the case that, at least in relation to the European Union's General Data Protection Regulation (GDPR), such data falls under the "special category" provisions (Article 9) and therefore comes with more restrictions on potential collection and processing.
Algorithmic bias does not only include protected categories but can also concern characteristics less easily observable or codifiable, such as political viewpoints. In these cases, there is rarely an easily accessible or non-controversial ground truth, and removing the bias from such a system is more difficult.
Furthermore, false and accidental correlations can emerge from a lack of understanding of protected categories—for example, insurance rates based on historical data of car accidents that may overlap, strictly by coincidence, with residential clusters of ethnic minorities.
Methods and Tools
There have been several attempts to create methods and tools that can detect and observe biases within an algorithm. This emergent field focuses on tools that are typically applied to the data used by the program rather than the algorithm’s internal processes. These methods may also analyze a program’s output and its usefulness.
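Most such tools begin with an audit of a program's output. As a minimal, hypothetical sketch (the decisions and the 80 percent threshold convention are illustrative and not tied to any specific toolkit), the code below computes per-group selection rates from logged decisions and flags a disparate-impact ratio that falls below the commonly cited four-fifths rule.

```python
from collections import defaultdict

# Hypothetical audit log: (group, model_decision) pairs, where 1 = favorable outcome.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, favorable = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favorable[group] += outcome

rates = {g: favorable[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())  # disparate-impact ratio

print("Selection rates:", rates)
print("Disparate-impact ratio:", round(ratio, 2))
if ratio < 0.8:  # the "four-fifths rule" often used as a rough screening threshold
    print("Potential adverse impact: investigate the model and its training data.")
```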
Currently, a new IEEE standard is being drafted that aims to specify methodologies to help creators of algorithms eliminate issues of bias and articulate transparency (e.g., to authorities or end-users) about the function and possible effects of their algorithms. The project was approved in February 2017. More information is available at https://standards.ieee.org/project/7003.html.