Artificial intelligence (AI) algorithms show tremendous potential in guiding decisions; therefore, many enterprises have implemented AI techniques. AI investments reached approximately US$68 billion in 2020.1 The problem is that there are many unknowns, and 65 percent of enterprises cannot explain how their AI tools work.2 Decision makers may place too much trust in the unknown components inside what is referred to as the AI magical black box and, therefore, may unintentionally expose their enterprises to ethical, social and legal threats.
Despite a common belief that this new technology is objective and neutral, AI-based algorithms often merely repeat past practices and patterns. AI can simply "automate the status quo."3 In fact, AI-based systems sometimes make problematic, discriminatory or biased decisions4 because they often replicate the problematic, discriminatory or biased processes that were in place before AI was introduced. With the widespread use of AI, this technology affects most of humanity; therefore, it may be time to take a systemwide view to ensure that this technology can be used to make ethical, unbiased decisions.
Systems Theory
Many disciplines have encouraged a systems theory approach to studying phenomena.5 This approach states that inputs are introduced into a system and processes convert these inputs into outputs (figure 1).
An algorithm is defined as "a standard procedure that involves a number of steps, which, if followed correctly, can be relied upon to lead to the solution of a particular kind of problem."6 AI algorithms use a defined model to transform inputs into outputs. British statistician George E. P. Box is credited with saying "All models are wrong, but some are useful."7 The point is that the world is complex, and it is impossible to build a model that considers all possible variables leading to desirable outcomes. However, a model can help decision makers by providing general guidance on what may happen in the future based on what has happened in the past.
AI algorithms either use predefined models or create their own models to make predictions. This is the basis for categorizing AI algorithms as either symbolic or statistical.8 Symbolic algorithms use a set of rules to transform data into a predicted outcome. The rules define the model, and the user can easily understand the system by reviewing the model. From a systems theory perspective, the inputs and processes are clearly defined. For example, symbolic algorithms are used to develop credit scores, where inputs are defined a priori and processes (calculations) are performed using associated weights and formulas to arrive at an output. This output is then used to decide whether to extend credit.
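To make this concrete, the following sketch (in Python) shows what a symbolic, rule-based scoring algorithm looks like from a systems theory perspective. The input variables, weights and approval threshold are purely illustrative assumptions and do not represent FICO's actual model.

```python
# Minimal sketch of a symbolic (rule-based) scoring algorithm.
# The input variables, weights and threshold are hypothetical, not FICO's.

def credit_score(payment_history: float, utilization: float, history_years: float) -> float:
    """Combine predefined inputs with fixed weights into a score (0-100)."""
    weights = {"payment_history": 0.5, "utilization": 0.3, "history_years": 0.2}
    return (
        weights["payment_history"] * payment_history
        + weights["utilization"] * (100 - utilization)      # lower utilization is better
        + weights["history_years"] * min(history_years * 10, 100)
    )

def credit_decision(score: float, threshold: float = 65.0) -> str:
    """The output feeds a transparent, reviewable decision rule."""
    return "extend credit" if score >= threshold else "decline"

print(credit_decision(credit_score(payment_history=92, utilization=30, history_years=7)))
```

Because the rules and weights are written out explicitly, a reviewer can trace any output back to its inputs, which is the defining property of the symbolic approach.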
Statistical algorithms, in contrast, often allow the computer to select the most important inputs to develop a new model (process). These models are usually more sophisticated than those developed by symbolic algorithms but still predict outcomes based on inputs. One issue associated with the use of statistical algorithms is that the user may not know which inputs have been chosen or which processes have been used to convert inputs into outputs, and, therefore, may be unable to understand the model they are utilizing to make decisions.
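A short sketch can illustrate the difference. Assuming a typical machine learning library such as scikit-learn and placeholder data, the statistical algorithm below decides for itself how heavily to rely on each input; inspecting its feature importances is one way to begin prying that choice open.

```python
# Minimal sketch of a statistical algorithm selecting its own inputs.
# Assumes scikit-learn is available; the data here are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "zip_code", "age", "tenure", "num_accounts"]
X = rng.normal(size=(500, len(feature_names)))                          # placeholder inputs
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)   # placeholder outcome

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The model, not the analyst, decides how heavily each input is used;
# reporting importances makes that choice visible to decision makers.
for name, importance in sorted(zip(feature_names, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name:12s} {importance:.3f}")
```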
Systems theory shows that the output of these models depends heavily on the inputs (data) provided and the types of algorithms (processes) chosen. Subsequent use of the AI model’s outputs to make decisions can easily lead to bias, resulting in systematic and unknowing discrimination against individuals or groups.9
Computer System Bias
The systems theory discussion of AI algorithms can be extended by considering how biases affect outcomes. Bias has been defined as any basis for choosing one generalization over another, other than strict consistency with the observed training instances.10 In other words, the algorithm favors certain outcomes over others, regardless of what the data alone would support. Bias typically enters either through the collection of data, when a nonrepresentative sample skews the outcome, or through the algorithms themselves, when they systematically lean toward one outcome. Decisions based on an AI algorithm’s outputs are often of great interest to the general public because they can have a major impact on people’s lives, such as whether a person is selected for a job interview or approved for a home loan. In IT, three categories of bias can be distinguished: preexisting, technical and emergent.
Preexisting Bias
Sometimes, a bias established in society (preexisting) is transferred into software. This can happen either explicitly, such as when a discriminatory attitude is deliberately built into the algorithm, or implicitly, such as when a profiling algorithm is trained with the help of historical data that reflect bias. Preexisting bias is often introduced at the input stage of the system. One example of preexisting bias is the use of the classic Fair Isaac Corporation (FICO) algorithm to calculate a creditworthiness score. In this case, cultural biases associated with the definition of traditional credit can result in discrimination because some cultures place more emphasis on positive payment.11
Technical Bias
Technical bias is often the result of computer limitations associated with hardware, software or peripherals. Technical specifications can affect a system’s processes, leading to certain groups of people being treated differently from others. This may occur when standards do not allow certain characteristics to be recorded, or it can be the result of technical limitations related to the software or programming of algorithms. These defects in the algorithm are often seen in the processing stage of the system. An example of technical bias is the inability to reproduce an answer using an AI algorithm. This might occur when the data being used to train (or develop) the model are altered slightly, resulting in a different model being produced. Being unable to reproduce a model with the same or similar data leads to doubt about the model’s overall validity. An example of this type of bias occurred at Amazon, when women’s résumés were excluded based on the system’s selection of words in applicants’ résumés. Amazon tried to remove the bias but eventually had to abandon the AI algorithm because it could not identify the underlying technical logic that resulted in discriminatory outputs.12
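One simple diagnostic for this kind of instability, sketched below with scikit-learn and placeholder data, is to retrain the same algorithm on slightly perturbed data and measure how often the two resulting models agree; low agreement is a warning sign that the model may be difficult to validate.

```python
# Minimal sketch of a reproducibility check: retrain on slightly perturbed
# data and compare predictions. Assumes scikit-learn; data are placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] - X[:, 2] > 0).astype(int)

model_a = DecisionTreeClassifier(random_state=0).fit(X, y)

# Perturb the training data with small noise and retrain the same algorithm.
X_perturbed = X + rng.normal(scale=0.01, size=X.shape)
model_b = DecisionTreeClassifier(random_state=0).fit(X_perturbed, y)

# Large disagreement on the same evaluation set signals an unstable model.
X_eval = rng.normal(size=(1000, 5))
agreement = (model_a.predict(X_eval) == model_b.predict(X_eval)).mean()
print(f"Prediction agreement between the two runs: {agreement:.1%}")
```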
Emergent Bias
Emergent bias happens due to the incorrect interpretation of an output or when software is used for unintended purposes. This type of bias can arise over time, such as when values or processes change, but the technology does not adapt13 or when decision makers apply decision criteria based on the incorrect output of the algorithm. Emergent bias normally occurs when the output is being converted into a decision.
An example of emergent bias is the use of Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), which is used by some US court systems to determine the likelihood of a defendant’s recidivism. The algorithms are trained primarily with historical crime statistics, which capture statistical correlations rather than causal relationships. As a result, bias related to ethnicity and financial means often results in minorities receiving an unfavorable prognosis from the model.14 A study of more than 7,000 people arrested in the US State of Florida evaluated the accuracy of COMPAS in predicting recidivism. It found that 44.9 percent of the African American defendants labeled high risk did not reoffend. Making decisions using this model’s outcomes would, therefore, negatively affect nearly 45 percent of African American defendants for no valid reason. Conversely, only 23.5 percent of white defendants labeled as high risk in the study did not reoffend.15 Knowingly using biased outcomes to make decisions introduces emergent bias that may lead to negative outcomes.
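The underlying check is straightforward to express in code. The sketch below, using pandas and a handful of invented records (not the ProPublica data), computes the share of defendants labeled high risk who did not reoffend, broken down by group.

```python
# Minimal sketch of the check behind the COMPAS finding: among defendants
# labeled high risk, what share did not reoffend, broken down by group?
# The records below are invented placeholders, not the ProPublica data.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "A", "B"],
    "high_risk":  [1,   1,   0,   1,   1,   0,   1,   1],
    "reoffended": [0,   1,   0,   1,   1,   0,   0,   1],
})

labeled_high = df[df["high_risk"] == 1]
false_positive_share = (
    labeled_high.groupby("group")["reoffended"]
    .apply(lambda s: (s == 0).mean())   # share of high-risk labels that were wrong
)
print(false_positive_share)
```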
These forms of bias can occur in different combinations,16 and they point to the mystery hidden inside the AI black box (figure 2). The AI system can result in outcomes that are unethical or discriminatory. Therefore, it is critical that decision makers understand what happens inside the box—at each step, and with each type of bias—before using AI outputs to make decisions.
Analyzing the AI Black Box and Removing Bias
Once managers understand how bias can find its way into the AI black box, their goal is to identify and remove the biases. As shown in figure 3, there are five steps to avoid bias and discrimination.
Step 1: Inputs
The goal of AI algorithms is to develop outputs (predictions) based on inputs. Therefore, it is possible that AI algorithms will detect relationships that could be discriminatory based on any given input variable. Including inputs such as race, ethnicity, gender and age authorizes the system to make recommendations based on these inputs, which is, by definition, discrimination. And making decisions based on these factors is also discrimination. This is not an admonishment to never use these inputs; rather, it is a warning that discrimination could be claimed. For example, automobile insurance models predict accident rates based on age and gender. Most people would probably agree that this is fair, so including age and gender would be appropriate inputs in an accident risk model. But enterprises should be transparent about their inputs to ensure that the public is aware of what factors are considered in the decision-making process. Conducting correlation studies of the variables before introducing them into the system may provide an early indication of potential discrimination problems.
Preexisting bias can also be introduced with inputs. This can be done by choosing to include potentially discriminating inputs or by choosing other inputs that may act as proxies for gender, age and ethnicity, such as choosing personal economic variables as inputs due to socioeconomic relationships. Decision makers must know which inputs are being used by the AI algorithms.
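A simple input-screening step can support this. The sketch below (using pandas, with hypothetical column names and an illustrative 0.3 correlation threshold) flags candidate inputs that correlate strongly with protected attributes so a human can review likely proxies before they enter the model.

```python
# Minimal sketch of an input screening step: correlate each candidate input
# with protected attributes before it enters the model, to flag likely proxies.
# Column names, values and the 0.3 threshold are illustrative assumptions.
import pandas as pd

candidates = pd.DataFrame({          # hypothetical candidate inputs
    "zip_income":   [42, 55, 61, 38, 72, 49],
    "tenure_years": [1, 4, 10, 2, 7, 3],
})
protected = pd.DataFrame({           # hypothetical protected attributes
    "age":          [23, 35, 58, 29, 61, 33],
    "gender_code":  [0, 1, 1, 0, 1, 0],
})

for col in candidates:
    for attr in protected:
        r = candidates[col].corr(protected[attr])
        if abs(r) > 0.3:             # flag for human review, not automatic exclusion
            print(f"Review: {col} correlates with {attr} (r = {r:.2f})")
```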
Step 2: Processes
The algorithms used to convert inputs to outputs must be understood at the highest possible level of detail. This is easy for symbolic algorithms but much more difficult for some statistical AI techniques. Decision makers should request that the inputs used in developing and training statistical AI models be identified to determine which variables are being used. Another consideration is to ascertain why a particular AI technique was chosen and to determine how much additional predictive power an unexplainable model offers over an explainable one. Different AI algorithms should be tested to determine the accuracy of each one, and further investigation into inputs is warranted when unexplainable models significantly outperform explainable models.
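The comparison suggested above can be operationalized with a few lines of code. Assuming scikit-learn and placeholder data, the sketch below measures how much accuracy an opaque model actually adds over an explainable one; a small gain may not justify losing the ability to explain decisions.

```python
# Minimal sketch comparing an explainable model with an opaque one.
# Assumes scikit-learn; the data are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

explainable = LogisticRegression(max_iter=1000)
opaque = GradientBoostingClassifier(random_state=0)

acc_explainable = cross_val_score(explainable, X, y, cv=5).mean()
acc_opaque = cross_val_score(opaque, X, y, cv=5).mean()

# Report the incremental gain so decision makers can weigh it against explainability.
print(f"explainable: {acc_explainable:.3f}  opaque: {acc_opaque:.3f}  "
      f"gain: {acc_opaque - acc_explainable:+.3f}")
```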
Investigating multiple models can also help identify technical bias. For example, gone are the days when weather forecasters relied on just one model; it is now common to use spaghetti models, which overlay the predictions of many models rather than depending on a single forecast.
After an algorithm has been developed, user acceptance testing (UAT) is an important step to ensure that the algorithm is truly doing what it was designed to do. UAT should "examine all aspects of an algorithm and the code itself."17 For nondiscriminatory AI applications, it is important to consider justice as a target value and to provide people disadvantaged by an AI-based decision with the ability to enforce their rights.18
Step 3: Outputs
This is the last step before a decision is made. Correlation and moderation studies of the outputs and the most common inputs associated with discrimination (e.g., gender, race, ethnicity) should be conducted. The results should be reported to decision makers to ensure that they are aware of how inputs are related to outputs.
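One such output review, sketched below with pandas, invented data and the commonly cited four-fifths rule of thumb as an illustrative threshold, compares the model's positive-output rate across groups before any decisions are made.

```python
# Minimal sketch of an output review: compare the model's positive-output rate
# across groups before decisions are made. The data are invented, and the 0.8
# ratio (the "four-fifths rule") is used only as an illustrative threshold.
import pandas as pd

outputs = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [1,   1,   0,   1,   0,   1,   1,   1],
})

rates = outputs.groupby("gender")["approved"].mean()
ratio = rates.min() / rates.max()
print(rates)
print(f"Selection-rate ratio: {ratio:.2f}"
      + ("  <- investigate before using these outputs" if ratio < 0.8 else ""))
```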
Step 4: Decisions
The AI system is reconnected to the outside environment during this step. Decision makers must understand that decisions made using AI have consequences for the people or systems associated with those decisions. It is very important at this point to analyze whether discrimination can result from the decisions being made. This is especially important when the decision is directly connected to the output without any human intervention. However, it is important to note that human interaction does not eliminate the possibility of emergent bias, as human beings are inherently biased.19
Step 5: Outcomes
At this point, the decisions have interacted with the environment and the effects are known—but only if the enterprise follows up and measures the actual outcomes. Too often, enterprises do not monitor what has happened and continue to use the same inputs and processes, unknowingly allowing biases to shape the outputs that influence the decision-making process.
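Outcome monitoring does not have to be elaborate. The sketch below uses hypothetical figures and an illustrative five-percentage-point alert band to compare each period's realized outcomes against the rate observed when the model was approved.

```python
# Minimal sketch of ongoing outcome monitoring: compare realized outcomes for
# each period against the rate observed when the model was approved.
# All figures and the 0.05 alert band are illustrative assumptions.
baseline_positive_rate = 0.62        # rate measured at model approval (hypothetical)

monthly_outcomes = {                 # hypothetical realized outcomes per month
    "2024-01": 0.61,
    "2024-02": 0.58,
    "2024-03": 0.52,
}

for month, rate in monthly_outcomes.items():
    drift = rate - baseline_positive_rate
    flag = "  <- review inputs and processes" if abs(drift) > 0.05 else ""
    print(f"{month}: positive-outcome rate {rate:.2f} (drift {drift:+.2f}){flag}")
```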
The Final Check: Ethics
Understanding inputs, processes, outputs, decisions and outcomes may ensure that bias has been removed from an AI algorithm and potential discrimination has been identified. However, enterprises can follow all these steps and still have ethical issues associated with the decision-making process. The life cycle model for developing ethical AI is a great approach for identifying and removing ethical AI issues.20 What decision makers may still be missing is a framework for deciding what is unethical.
There are many frameworks that address the ethical aspects of algorithms. Some of these frameworks are specialized, covering a particular area such as healthcare.21 Two frameworks—the PAPA model22 and the AI4People model23—have gained widespread application. They present an interesting contrast: The former was developed in 1986 and the latter in 2018, in a completely different world of technology.
PAPA Model
The PAPA model identifies four key issues to preserve human dignity in the information age:
- Privacy—The amount of private information one wants to share with others and the amount of information that is intended to stay private
- Accuracy—Who is responsible for the correctness of information
- Property—Who owns different types of information and the associated infrastructure
- Accessibility—How information is made available to different people and under what circumstances access is given24
AI4People Model
The AI4People model is much more recent. It was developed by analyzing six existing ethical frameworks and identifying 47 principles for making ethical decisions.25 Five core principles emerged:
- Beneficence—The promotion of well-being and the preservation of the planet
- Nonmaleficence—The avoidance of personal privacy violations and the limitations of AI capabilities, including not only humans’ intent to misuse AI but also the sometimes unpredictable behavior of machines
- Autonomy—The balanced decision-making power of humans and AI
- Justice—The preservation of solidarity and prevention of discrimination
- Explicability—Enabling of the other principles by making them understandable, transparent and accountable26
Figure 4 compares these models and presents ethical AI algorithm considerations for decision makers.
These two ethical models show that enterprises should consider other outcome variables that can determine the ethical implications of their decisions. This should be done not only by the software developers, but also by the stakeholders involved in the algorithm.27 Decision makers should evaluate each of the areas listed in figure 4 and determine which additional outcome variables they should collect and track to ensure that their AI algorithms are not negatively impacting ethical interests. The limited views of one decision maker may be inadequate for this task. One option is to assemble a diverse board to evaluate these areas and make recommendations about which outcomes to track. This may help expose initially unidentified concerns that can be incorporated into the AI algorithms’ redesign to ensure that ethical issues are addressed.28
Conclusion
The use of AI algorithms can contribute to human self-realization and result in increased effectiveness and efficiency. Therefore, this technology is already being applied in numerous industries. However, there are risk factors and ethical concerns regarding the collection and processing of personal data and compliance with societal norms and values. The prerequisites for an ethical AI algorithm are unbiased data, explainable processes, unbiased interpretation of the algorithm’s output and monitoring of the outcomes for ethical, legal and societal effects.
AI algorithms can improve organizational performance by better predicting future outcomes. However, decision makers are not absolved from understanding the inputs, processes, outputs and outcomes of the decisions made by AI. Taking a systems theory approach may help enterprises ensure that the legal, ethical and social aspects of AI algorithms are examined. Outcomes based on AI algorithms must not be assumed to be rigid and finite because society and technology are constantly changing. Similarly, ethical principles may change too. When the environment for an algorithm changes, the algorithm must be adapted, even if it requires going beyond the original specifications.29
Endnotes
1 Zoldi, S.; "It’s 2021. Do You Know What Your AI Is Doing?" FICO Blog, 25 May 2021, http://www.fico.com/blogs/its-2021-do-you-know-what-your-ai-doing
2 Ibid.
3 O’Neil, C.; "The Era of Blind Faith in Big Data Must End," TED Talks, 2017, http://www.ted.com/talks/cathy_o_neil_the_era_of_blind_faith_in_big_data_must_end
4 Zielinski, L. et al.; "Atlas of Automation—Automated Decision-Making and Participation in Germany," AlgorithmWatch, 2019, http://atlas.algorithmwatch.org/
5 Johnson, J. A.; F. E. Kast; J. E. Rosenzweig; "Systems Theory and Management," Management Science, vol. 10, iss. 2, January 1964, p. 367–384, http://www.jstor.org/stable/2627306
6 Haylock, D.; F. Thangata; Key Concepts in Teaching Primary Mathematics, Sage, United Kingdom, 2007
7 Horruitiner, C. D.; "All Models Are Wrong," Medium, 13 January 2019, http://medium.com/the-philosophers-stone/all-models-are-wrong-4c407bc1705
8 Pearce, G.; "Focal Points for Auditable and Explainable AI," ISACA® Journal, vol. 4, 2022, http://www.isaca.org/archives
9 Friedman, B.; H. Nissenbaum; "Bias in Computer Systems," ACM Transactions on Information Systems, vol. 14, iss. 3, July 1996, p. 330–347, http://doi.org/10.1145/230538.230561
10 Mitchell, T. M.; The Need for Biases in Learning Generalizations, Rutgers University, New Brunswick, New Jersey, USA, 1980, http://www.cs.cmu.edu/~tom/pubs/NeedForBias_1980.pdf
11 Scarpino, J.; "Evaluating Ethical Challenges in AI and ML," ISACA Journal, vol. 4, 2022, http://www.isaca.org/archives
12 Ibid.
13 Op cit Friedman and Nissenbaum
14 Angwin, J.; J. Larson; S. Mattu; L. Kirchner; "Machine Bias," ProPublica, 23 May 2016, http://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
15 Ibid.
16 Beck, S. et al.; "Künstliche Intelligenz und Diskriminierung," 7 March 2019, http://www.plattform-lernende-systeme.de/publikationen-details/kuenstliche-intelligenz-und-diskriminierung-herausforderungen-und-loesungsansaetze.html
17 Baxter, C.; "Algorithms and Audit Basics," ISACA Journal, vol. 6, 2021, http://www.isaca.org/archives
18 Op cit Beck
19 Op cit Scarpino
20 Ibid.
21 Xafis, V.; G. O. Schaefer; M. K. Labude et al.; "An Ethics Framework for Big Data in Health and Research," ABR, vol. 11, 1 October 2019, p. 227–254, http://doi.org/10.1007/s41649-019-00099-x
22 Mason, R.; "Four Ethical Issues of the Information Age," MIS Quarterly, vol. 10, 1986, p. 5–12
23 Floridi, L.; J. Cowls; M. Beltrametti et al.; "AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations," Minds and Machines, vol. 28, December 2018, p. 689–707, http://doi.org/10.1007/s11023-018-9482-5
24 Op cit Mason
25 Op cit Floridi et al.
26 Barton, M. C.; J. Pöppelbuß; "Prinzipien für die ethische Nutzung künstlicher Intelligenz," HMD, vol. 59, 2022, p. 468–481, http://doi.org/10.1365/s40702-022-00850-3
27 Hauber, H.; "Data Ethics Frameworks," Information – Wissenschaft & Praxis, vol. 72, iss. 5-6, 2021, p. 291–298, http://doi.org/10.1515/iwp-2021-2178
28 van Bruxvoort, X.; M. van Keulen; "Framework for Assessing Ethical Aspects of Algorithms and Their Encompassing Socio-Technical System," Applied Sciences, vol. 11, 2021, p. 11187, http://doi.org/10.3390/app112311187
29 Pearce, G.; M. Kotopski; "Algorithms and the Enterprise Governance of AI," ISACA Journal, vol. 4, 2021, http://www.isaca.org/archives
SIMONA AICH
Is an undergraduate student at Ludwigshafen University of Business and Society (Ludwigshafen, Germany) in a cooperative study program with SAP SE, a market leader for enterprise resource planning (ERP) software. She has completed internships in various departments, including consulting and development. Her research interests include the impact of algorithms, a topic she plans to pursue in her master’s degree studies.
GERALD F. BURCH | PH.D.
Is an assistant professor at the University of West Florida (Pensacola, Florida, USA). He teaches courses in information systems and business analytics at both the graduate and undergraduate levels. His research has been published in the ISACA® Journal. He can be reached at gburch@uwf.edu.