Types of Algorithmic Bias

Bias can be introduced into the algorithms of hardware and software in a variety of ways.

By Steven Astorino, Mark Simmonds, and Jean-François Puget 

Editor's Note: This article is excerpted from chapter 3 of Artificial Intelligence: Evolution and Revolution.

Pre-existing Bias

Pre-existing bias in an algorithm is a consequence of underlying social and institutional ideologies. Such ideologies may influence or create personal biases within individual designers or programmers. These prejudices can be explicit and conscious or implicit and unconscious, and poorly selected input data will influence the outcomes machines produce. Encoding pre-existing bias into software preserves social and institutional bias and, without correction, can be replicated in all future uses of that algorithm.

An example of this form of bias is the 1981 British Nationality Act (BNA) program, designed to automate the evaluation of new UK citizens. The program reflects the tenets of the law, which state that a man is the father of only his legitimate children, whereas a woman is the mother of all her children, legitimate or not. In transferring that particular logic into an algorithmic process, the program inscribed the logic of the BNA into its algorithm, where it could persist even if the act were eventually repealed.

Technical Bias

Image caption: Facial recognition software used in conjunction with surveillance cameras has been found to display bias in recognizing faces of different races.

Technical bias emerges through the limitations of a program, computational power, a program's design, or other constraints on the system. Such bias can also be a constraint of design: for example, a search engine that shows three results per screen can be understood to privilege the top three results slightly more than the next three, as in an airline price display.

Another case is software that relies on randomness to distribute results fairly. If the random-number generation mechanism is not truly random, it can introduce bias, for example by skewing selections toward items at the end or the beginning of a list.
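
To make this concrete, consider how a flawed reduction of a "random" value skews a selection. The sketch below is a minimal Python illustration; the ten-item list and the single random byte are assumptions for the example. It shows classic modulo bias, which favors items at the beginning of the list:

```python
import collections
import random

def biased_pick(items):
    """Pick an item by reducing a 'random' byte with modulo.

    Because 256 is not a multiple of len(items), the low remainders
    occur one extra time out of every 256 draws, so items near the
    start of the list are chosen slightly more often.
    """
    byte = random.randrange(256)  # stands in for a raw random byte
    return items[byte % len(items)]

items = list("ABCDEFGHIJ")  # 10 items; 256 % 10 == 6
counts = collections.Counter(biased_pick(items) for _ in range(100_000))
print(counts)  # A-F each land ~26/256 of the time; G-J only ~25/256
```

The skew is small per draw, but a system making millions of such selections will systematically favor the head of the list.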

A decontextualized algorithm uses unrelated information to sort results— for example, a flight-pricing algorithm that sorts results by alphabetical order would be biased in favor of an airline name beginning with the letter “A” over an airline beginning with the letter “U.” The opposite may also apply, in which results are evaluated in contexts different from which they are collected. Data may be collected without crucial external context—for example, when facial recognition software is used by surveillance cameras but evaluated by remote staff in another country or region or evaluated by non-human algorithms with no awareness of what takes place beyond the camera’s field of vision. This could create an incomplete understanding of a crime scene, for example, potentially mistaking bystanders for those who committed the crime.
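
The alphabetical-sorting example is easy to demonstrate. In this minimal sketch, where the airline names and prices are invented, the same data yields a very different "top result" depending on whether the sort key reflects what the user actually cares about:

```python
# Hypothetical flight results; names and prices are illustrative.
flights = [
    {"airline": "United Example Air", "price": 250},
    {"airline": "Acme Airways", "price": 410},
    {"airline": "Mid Continental", "price": 310},
]

# Decontextualized ordering: alphabetical by airline name puts the
# most expensive flight first simply because its name starts with "A".
by_name = sorted(flights, key=lambda f: f["airline"])

# Context-aware ordering: sort by the attribute the user cares about.
by_price = sorted(flights, key=lambda f: f["price"])

print([f["airline"] for f in by_name])   # Acme Airways listed first
print([f["airline"] for f in by_price])  # cheapest listed first
```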

Lastly, technical bias can be created by attempting to formalize decisions into concrete steps on the assumption that human behavior works the same way. For example, software might weigh data points to determine whether a defendant should accept a plea bargain while ignoring the impact of emotion on a jury. Plagiarism-detection software compares student-written text to information found online and returns a probability score that the student's work is copied. Because the software compares long strings of text, it is more likely to flag non-native speakers of English than native speakers, as native speakers are better able to change individual words, break up strings of plagiarized text, or obscure copied passages through synonyms. Because the software's technical constraints make it easier for native speakers to evade detection, the result is a scenario in which non-native speakers of English are flagged for plagiarism while native speakers escape notice.
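
Why does string matching penalize those who copy verbatim but not those who paraphrase word by word? A rough sketch of the idea, using a simplified n-gram overlap score rather than any particular product's method, and toy sentences, makes the technical constraint visible:

```python
def ngrams(text, n=5):
    """Return the set of n-word sequences in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(student_text, source_text, n=5):
    """Fraction of the student's n-grams that also appear in the source."""
    student = ngrams(student_text, n)
    if not student:
        return 0.0
    return len(student & ngrams(source_text, n)) / len(student)

source = "the quick brown fox jumps over the lazy dog near the river bank"
verbatim = "the quick brown fox jumps over the lazy dog"  # long copied string
reworded = "the fast brown fox leaps over the lazy dog"   # single words swapped

print(overlap_score(verbatim, source))  # 1.0: every 5-gram matches
print(overlap_score(reworded, source))  # 0.0: each swap breaks its 5-grams
```

Changing even one word in every five destroys all the long matches, which is exactly the evasion that fluent native speakers find easiest.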

Emergent Bias

Emergent bias is the result of using and relying on algorithms in new or unanticipated contexts. Algorithms may not have been adjusted to consider new forms of knowledge, such as new drugs or medical breakthroughs, new laws, new business models, or shifting cultural norms. This may exclude groups through technology without providing a clear outline of who is responsible for their exclusion. Similarly, problems may emerge when training data (the samples "fed" to a machine, by which it models certain conclusions) does not align with the contexts an algorithm encounters in the real world.

Additional emergent biases include:

Correlations

Unpredictable correlations can emerge when large datasets are compared to each other. For example, data collected about web-browsing patterns may align with signals marking sensitive data (such as race or sexual orientation). Selecting according to certain behavior or browsing patterns would then have an effect almost identical to discriminating directly on race or sexual orientation. In other cases, the algorithm might draw conclusions from correlations without being able to understand those correlations. For example, a triage program might give lower priority to asthmatics who had pneumonia than to asthmatics who did not. The algorithm might do this because it simply compares survival rates, and in the historical data, asthmatics with pneumonia survived at high rates. The reason is precisely that they are at the highest risk: hospitals give such patients the best and most immediate care. An algorithm that sees only the outcomes mistakes that intensive care for low underlying risk.
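
The asthma example turns on a confound that is easy to reproduce with made-up numbers. In the sketch below, where all figures are invented, asthmatic pneumonia patients show the better raw survival rate precisely because hospitals treated them most aggressively, so an outcome-only model ranks them as lower risk:

```python
# Fabricated historical records: (has_asthma, survived).
records = (
    [(True, True)] * 95 + [(True, False)] * 5       # asthmatics: 95% survive
    + [(False, True)] * 85 + [(False, False)] * 15  # others: 85% survive
)

def survival_rate(records, asthma):
    outcomes = [survived for has_asthma, survived in records
                if has_asthma == asthma]
    return sum(outcomes) / len(outcomes)

print(survival_rate(records, asthma=True))   # 0.95
print(survival_rate(records, asthma=False))  # 0.85
# A model comparing only these rates "learns" that asthmatics are the
# safer group, unaware that intensive care, not lower underlying risk,
# produced their good outcomes, and so it deprioritizes them in triage.
```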

Unanticipated Uses

Emergent bias can occur when an algorithm is used by unanticipated audiences. For example, machines may require that users can read, write, or understand numbers, or relate to an interface using metaphors that they do not understand. These exclusions can become compounded as biased or exclusionary technology is more deeply integrated into society.

Apart from exclusion, unanticipated uses may emerge from end-users relying on the software rather than on their own knowledge. For example, the designers of an immigration system might have access to legal expertise beyond that of the end-users in immigration offices, whose understanding of both the software and immigration law would likely be unsophisticated. The agents administering the questions might rely entirely on the software, which could exclude alternative pathways to citizenship, and might continue using it even after new case law and legal interpretations have rendered the algorithm outdated. As a result of designing the algorithm for users assumed to be legally savvy about immigration law, the software might indirectly favor applicants who fit the narrow set of legal criteria set by the algorithm rather than the broader criteria of the country's immigration law.

Feedback Loops

Emergent bias may also create a feedback loop, or recursion, if data collected for an algorithm results in real-world responses that are fed back into the algorithm. For example, simulations of predictive policing software might suggest an increased police presence in neighborhoods with larger ethnic-minority populations, based on crime data reported by the public. The simulation might show that the public reports crime based on the sight of police cars, regardless of what the police are doing. The model would then treat police-car sightings as evidence of crime and, in turn, assign an even larger increase of police presence to those neighborhoods. This could lead to race becoming a factor in arrests, and such feedback loops could reinforce and perpetuate racial discrimination in policing.
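
A toy simulation shows how quickly such a loop can run away. Everything below is hypothetical: two neighborhoods with identical true crime, recorded incidents that scale with patrol presence, and a department that concentrates patrols superlinearly on whichever area reports more (a "hot spot" response, modeled with an exponent greater than 1):

```python
TRUE_CRIME = [10.0, 10.0]  # identical underlying crime in both areas
patrols = [0.55, 0.45]     # slightly uneven starting allocation
HOTSPOT_EXPONENT = 1.5     # over-response to the area with more reports

for year in range(6):
    # Recorded incidents reflect police presence, not underlying crime:
    # more officers on the street means more incidents observed and logged.
    reports = [crime * presence
               for crime, presence in zip(TRUE_CRIME, patrols)]
    # Next year's patrols chase the reports.
    weights = [r ** HOTSPOT_EXPONENT for r in reports]
    patrols = [w / sum(weights) for w in weights]
    print(f"year {year}: patrol shares = {[round(p, 3) for p in patrols]}")

# Shares drift from 0.55 toward roughly 0.91 for the first neighborhood,
# even though TRUE_CRIME never differed: the data the model consumes is
# a record of where police looked, not of where crime happened.
```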

Recommender systems, such as those used to suggest online videos or news articles, can also create feedback loops. When users click on content suggested by an algorithm, that choice influences the next set of suggestions. Over time this may lead users into a "filter bubble," unaware of important or useful content.

Impact

In what ways can these algorithmic biases affect society?

Commercial Influences

Corporate algorithms could be skewed to invisibly favor financial arrangements or agreements between companies, without the knowledge of a user who may assume the algorithm is impartial. For example, an airline might create a flight-finding algorithm. The software might present a range of flights from various airlines to customers but weight the ranking factors to boost its own flights, regardless of price or convenience.
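
Here is a sketch of how such a skew can hide inside an apparently neutral ranking function; the airline names and the bonus value are invented for illustration:

```python
OWN_AIRLINE = "OwnAir"  # hypothetical owner of the flight-search site

def display_score(flight):
    """Lower score ranks higher. Price appears to dominate,
    but a hidden bonus quietly favors the house airline."""
    score = flight["price"]
    if flight["airline"] == OWN_AIRLINE:
        score -= 60  # invisible ranking discount; the fare is unchanged
    return score

flights = [
    {"airline": "Rival Air", "price": 300},
    {"airline": "OwnAir", "price": 340},
]
for flight in sorted(flights, key=display_score):
    print(flight["airline"], flight["price"])
# OwnAir (at 340) is listed above Rival Air (at 300); the user sees a
# "best first" ordering with no indication that it was skewed.
```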

Advertising-funded search engines might be inherently biased toward the advertisers and away from the needs of the consumers. This bias could be considered an “invisible” manipulation of the user.

Voting Behavior

Undecided voters may be swayed if an algorithm, with or without intent, boosts page listings for a rival candidate. If intentionally manipulated, the selective presentation of information by an intermediary to serve its own agenda, rather than its users, could create a digital-gerrymandering effect in elections.

Gender Discrimination

Some sites may recommend male variations of women’s names in response to search queries while not making similar recommendations in searches for male names. For example, “Andrea” might bring up a prompt asking if users meant “Andrew,” but queries for “Andrew” might not ask if users meant to find “Andrea.”

Department stores might gather data points to infer when women customers are pregnant, even if those women have not announced it, and then share that information with marketing partners. Because the data has been predicted rather than directly observed or reported, the company may have no legal obligation to protect the privacy of those customers.

Racial and Ethnic Discrimination

Algorithms have been criticized as a method for obscuring racial prejudices in decision-making. Because of how certain races and ethnic groups were treated in the past, data can often contain hidden biases. For example, people of a particular race or ethnicity might receive longer sentences than people of a different race who committed the same crime. This could potentially mean that a system amplifies the original biases in the data.

One example is the use of risk assessments in criminal sentencing and parole hearings. Judges could be presented with an algorithmically generated score intended to reflect the risk that a prisoner will repeat a crime. The nationality of a criminal’s father might be a consideration in those risk assessment scores, potentially creating an unfair bias toward one or more races.

Image-identification algorithms have incorrectly identified some people as gorillas. Image-recognition algorithms in cameras might incorrectly conclude that members of some ethnic groups are deliberately squinting or making silly faces, simply because the algorithms were not trained to recognize differences in facial features across ethnic groups. Such examples are the product of bias in biometric datasets. (Biometric data is drawn from aspects of the body, including racial features either observed or inferred, which can then be transferred into data points.) Speech recognition technology can have different accuracies depending on a user's accent, which may be caused by a lack of training data from speakers with that accent.

Biometric data about race may also be inferred, rather than observed. For example, names commonly associated with a particular race could be more likely to yield search results implying arrest records, regardless of whether there is any police record of that individual’s name.

Surveillance

Surveillance camera software may be considered inherently political because it requires algorithms to distinguish normal from abnormal behaviors and to determine who belongs in certain locations at certain times. The ability of such algorithms to recognize faces across a racial spectrum might be limited by the racial diversity of images in the training database; if the majority of photos belong to one race or gender, the software is better at recognizing members of that race or gender.

The software may identify men more frequently than women, older people more frequently than the young, or one race more than another.
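
Auditing for this kind of bias does not require access to the model's internals; measuring accuracy separately per group on a labeled test set is often enough to expose a gap. A minimal sketch, with fabricated groups and outcomes:

```python
# (group, correctly_recognized) pairs from a labeled evaluation set.
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

stats = {}
for group, correct in results:
    hits, total = stats.get(group, (0, 0))
    stats[group] = (hits + int(correct), total + 1)

for group, (hits, total) in sorted(stats.items()):
    print(f"{group}: {hits / total:.0%} accuracy on {total} samples")
# A gap like 75% vs. 25% is a red flag that the training data
# under-represented one group.
```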

Depending on its use, facial recognition software may also embed bias when it is trained on criminal databases rather than on non-criminal data.

Lack of Transparency

Commercial algorithms are proprietary and may be treated as trade secrets. Treating algorithms as trade secrets protects companies such as search engines, where a transparent algorithm might reveal tactics for manipulating search rankings. This makes it difficult for researchers to conduct interviews or analysis to discover how algorithms function. Critics suggest that such secrecy can also obscure possible unethical methods used in producing or processing algorithmic output.

Complexity

Algorithmic processes are complex, often exceeding the understanding of the people who use them. Large-scale operations may not be understood even by those involved in creating them. The methods and processes of contemporary programs are often obscured by the inability to know every permutation of a code’s input or output.

Black-boxing is a process in which scientific and technical work is made invisible by its own success. When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become. Others have critiqued the black-box metaphor, suggesting that current algorithms are not one black box, but a network of interconnected ones.

Large teams of programmers may operate in relative isolation from one another and be unaware of the cumulative effects of small decisions within connected, elaborate algorithms. Not all code is original; it may be borrowed from other libraries, creating a complicated set of relationships between data processing and data input systems.

Additional complexity occurs through machine learning (ML) and the personalization of algorithms based on user interactions such as clicks, time spent on site, and other metrics. These personal adjustments can confound general attempts to understand an algorithm. One unidentified streaming radio service reported that it chose among five distinct music-selection algorithms for each user, based on that user's behavior. This creates different experiences of the same streaming service for different users, making it harder to understand what the algorithms do. Companies also run frequent A/B tests to fine-tune algorithms based on user response. For example, a search engine might run millions of subtle variations of its service per day, creating different experiences of the service between each use and/or user.
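
The A/B machinery itself is simple; what multiplies complexity is running many such experiments at once. A common mechanism, sketched below with invented experiment names, assigns each user to a variant by hashing the user ID together with the experiment name, so assignments are stable within an experiment but differ across experiments:

```python
import hashlib

def variant(user_id: str, experiment: str, treatment_pct: int = 50) -> str:
    """Deterministically bucket a user into an A/B variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable pseudo-random bucket, 0-99
    return "B" if bucket < treatment_pct else "A"

print(variant("user-42", "ranking-tweak"))   # always the same for user-42
print(variant("user-42", "snippet-style"))   # may differ per experiment
```

With hundreds of concurrent experiments, any two users, and even the same user across different features, can see meaningfully different versions of "the" service.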

Lack of Data About Sensitive Categories

A significant barrier to tackling bias in practice is that sensitive categories, such as the demographics of individuals protected by anti-discrimination law, are often not explicitly considered when data is collected and processed. In some cases, there is little opportunity to collect this data explicitly, such as in device fingerprinting, ubiquitous computing, and the Internet of Things. In other cases, the data controller may not wish to collect such data for reputational reasons, or because it represents a heightened liability and security risk. It may also be the case that, at least in relation to the European Union's General Data Protection Regulation (GDPR), such data falls under the "special category" provisions (Article 9) and therefore comes with more restrictions on potential collection and processing.

Algorithmic bias does not only include protected categories but can also concern characteristics less easily observable or codifiable, such as political viewpoints. In these cases, there is rarely an easily accessible or non-controversial ground truth, and removing the bias from such a system is more difficult.

Furthermore, false and accidental correlations can emerge from a lack of understanding of protected categories—for example, insurance rates based on historical data of car accidents that may overlap, strictly by coincidence, with residential clusters of ethnic minorities.

Methods and Tools

There have been several attempts to create methods and tools that can detect and observe biases within an algorithm. This emergent field focuses on tools that are typically applied to the data used by the program rather than the algorithm’s internal processes. These methods may also analyze a program’s output and its usefulness.
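
One widely used output-level check is the disparate impact ratio, associated with the informal "80 percent rule": compare how often each group receives the favorable outcome. A minimal sketch with fabricated counts:

```python
# Favorable outcomes per group, out of the total decided (fabricated data).
favorable = {"group_a": 60, "group_b": 25}
total = {"group_a": 100, "group_b": 100}

rates = {group: favorable[group] / total[group] for group in total}
ratio = min(rates.values()) / max(rates.values())

print(f"selection rates: {rates}")
print(f"disparate impact ratio: {ratio:.2f}")
# 0.42 here; a ratio below roughly 0.8 is a conventional warning sign
# that the program's output treats one group much less favorably.
```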

Currently, a new IEEE standard is being drafted that aims to specify methodologies to help creators of algorithms eliminate issues of bias and articulate transparency (e.g., to authorities or end-users) about the function and possible effects of their algorithms. The project was approved in February 2017. More information is available at https://standards.ieee.org/project/7003.html.

Mark Simmonds

Mark Simmonds is a Program Director in IBM Data and AI communications. He writes extensively on machine learning and data science and has received a number of author recognition awards. He previously worked as an IT architect, leading complex infrastructure design projects. He is a member of the British Computer Society and holds a Bachelor's Degree in Computer Science.


MC Press books written by Mark Simmonds available now on the MC Press Bookstore.

Artificial Intelligence: Evolution and Revolution
Get started on your AI journey with insights for a path to success.
List Price $19.95

Now On Sale
