
AI and Ethics: A Balancing Act


Rapid technological advancements, from AI and IoT to blockchain and self-driving cars, have undoubtedly been game-changing. Such advancements promise consumers higher efficiency and substantially lower costs. Today, AI is used across virtually every industry, and business spending on AI is expected to hit $50 billion by the end of 2022 and a whopping $110 billion by the end of 2024. While the utility and efficacy of AI are indisputable, there are growing concerns that such technologies may cause more societal harm than economic good. Most companies and organizations now use AI to improve efficiency and to make determinations related to employment, health, creditworthiness, and even criminal justice. However, one of the biggest ethical impediments to the successful implementation of AI is that AI software and algorithms are often encoded with structural biases. Can AI exist without biases? What can companies and organizations do to reduce bias in their AI? Fairness, transparency and justice call for mitigating the bias that arises from the use of AI. AI and ethics is a balancing act!


What is bias?

Bias is a strong inclination of the mind or a preconceived opinion about something or someone. It is an inherent part of human nature: we each have our own preferences, likes and dislikes, and points of view. Bias is not restricted to gender or race; it also extends to socio-economic background, educational background and the like. It is no surprise, then, that these preferences, or biases, find their way into data.


Bias can creep into data and algorithms in numerous ways. AI systems learn to make decisions from data, and that data is the exhaust of human activity: it is generated and labeled by people who are both implicitly and explicitly biased. Implicit bias is unconscious bias, the prejudices, stereotypes and beliefs that reflect historical and social inequities and operate outside one's awareness and control. Explicit bias, by contrast, is bias held at a conscious level; it sits within an individual's awareness and can be reflected upon and monitored more easily. AI trained by human beings will mirror such human behavior, producing biased data and biased outputs. Another pertinent source of bias is flawed data collection and sampling, in which the groups from whom data is collected are not diverse and end up over- or under-represented in the training data. For instance, Joy Buolamwini at MIT, working with Timnit Gebru, a leading AI ethics researcher, found that facial analysis technologies had higher error rates for minorities, and particularly for minority women, because of unrepresentative training data.
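To make the sampling problem concrete, here is a minimal sketch (in Python, with entirely made-up group names and numbers) of the kind of check a team could run on its own evaluation data: compare how well each demographic group is represented and how the model's error rate differs across groups. Nothing here reflects any real product's data.

```python
from collections import Counter

# Illustrative, made-up evaluation records: (demographic_group, model_was_correct).
eval_results = (
    [("lighter-skinned men", True)] * 950 + [("lighter-skinned men", False)] * 50 +
    [("darker-skinned women", True)] * 130 + [("darker-skinned women", False)] * 70
)

# 1. How well is each group represented in the evaluation data?
counts = Counter(group for group, _ in eval_results)
total = sum(counts.values())
for group, n in counts.items():
    print(f"{group}: {n} samples ({n / total:.1%} of data)")

# 2. What is the error rate for each group?
errors = Counter(group for group, correct in eval_results if not correct)
for group, n in counts.items():
    print(f"{group}: error rate {errors[group] / n:.1%}")
```

When one group supplies only a small slice of the data, a gap in error rates like the one printed above is a strong hint that the training set, not just the model, needs attention.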


What are some instances of biased data?

Machine learning, the technique behind most modern AI, has advanced rapidly and is now embedded in countless forms of technology. As a result, biased AI systems are widespread.


For instance, Amazon built a hiring algorithm that favored applicants whose resumes used words like “captured” or “executed”, terms more commonly found on men's resumes. When Amazon investigated the algorithm, it found that it automatically penalized applications and resumes containing words like “women's”. Amazon ultimately discarded the algorithm and did not use it to evaluate candidates for recruitment.


Another example of a biased AI system is the COMPAS algorithm. COMPAS, created by Northpointe, stands for Correctional Offender Management Profiling for Alternative Sanctions and is used by courts in several U.S. states to predict which defendants are most likely to re-offend. Research into the algorithm, notably ProPublica's 2016 analysis, showed that black defendants were unfairly judged to be far more likely to re-offend than white defendants.


PredPol, a predictive policing company now known as Geolitica, sells software used by police departments in several U.S. states to predict where crimes will occur. It attempts to predict property crimes using predictive analytics applied to crime data collected by the police, such as arrest counts and the number of police calls in an area, and it is pitched in part as a way to reduce human bias in policing. Unfortunately, research revealed that PredPol itself was biased: it repeatedly directed police officers to neighborhoods inhabited by larger numbers of racial minorities, regardless of how much crime actually happened there.


In another instance, Google Photos labeled a photograph of two black people as gorillas. Google said it was appalled and promised to fix the algorithm. However, in the several years since, Google's main remedy has been to remove gorillas and other primates from its image-recognition model's vocabulary so that it will not label any photo that way.


Bias not only hurts those who are discriminated against; it harms society as a whole by preventing equal participation. Bias in AI produces distorted, inaccurate results and reduces AI's potential for business and society. Business owners and organizational leaders therefore have a duty and responsibility to encourage progress on the research and standards that will reduce bias in AI. Tech giants like Google and Microsoft periodically make statements expressing their commitment to making AI fair and unbiased, but such statements tend to elide a more fundamental reality: mitigating bias is genuinely difficult, and several structural barriers stand in the way.



Barriers To Bias Mitigation

The first step to mitigating bias is to acknowledge that algorithmic bias exists and creates risk. Yet a recent report published by Baker McKenzie suggests that executives do not consider algorithmic bias a high risk to their organizations.


Further, the report suggests that organizations are grappling with how to deal with this bias: while they are becoming aware that bias in AI exists, they still fail to implement the oversight programs, processes, and tools needed to reduce algorithmic bias and its risks.


At a basic organizational level, companies struggle with how to set up internal boards to monitor AI bias, and with how to engage third-party developers and vendors of algorithms:


1. Internal teams often lack diversity and inclusion: Tech companies remain primarily male, white and affluent. Even today, software development is a male-dominated industry (only about one-quarter of computer scientists in the United States are women), and minority racial groups, including blacks and Hispanics, are underrepresented in tech work as well. This dynamic is often reflected in the corporate governance artifacts and policies that guide algorithmic decisions within companies: who the “best” employees are (which determines who becomes leadership), hiring practices (which determine how homogeneous the company is), and whether data will be interrogated for gender, racial or socioeconomic biases. Such gender, racial and socio-economic imbalance within internal teams further embeds both implicit and explicit biases into the AI tools those teams develop.


2. Insufficient knowledge of social science at the development stage: The teams responsible for developing artificial intelligence within an organization are composed mainly of engineers, data scientists, and computer scientists. With their STEM backgrounds, these individuals are scientifically and mathematically inclined but often lack the knowledge or skills to deal with social problems like bias. Yet social science knowledge is pertinent when attempting to overcome biased algorithms and grapple with fairness considerations. While larger companies have made headway here by involving social scientists at the development stage, emerging companies and start-ups remain hesitant to spend resources on integrating multidisciplinary knowledge.


3. Uncertainty of data output and persistence of black box models: At the time of model construction, the presence of bias is difficult to ascertain, and developers may not truly understand the impact of their data choices until much later in the process. For example, in the case of the Amazon hiring algorithm discussed above, it was difficult for Amazon to retroactively correct its model once it discovered that the algorithm was biased against one gender; unable to fix the issue at that late stage, Amazon ended up scrapping the AI altogether. Moreover, the inner workings of “black box” models and their intricate algorithms are opaque, and developers may not be able to explain or predict their output. Given this lack of clarity and the complexity of the neural networks involved, there is an urgent need to develop more explainable “white box” models so that organizations building AI systems can be held accountable for biases.
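As a rough illustration of the “white box” idea, the sketch below trains a simple, interpretable model on synthetic data and reads its learned weights directly; the feature names (a skill score and a zip-code proxy) and the data are invented for illustration, not drawn from any real system. With an opaque model, the equivalent inspection is much harder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Invented, illustrative features: "zip_code_group" stands in for a proxy variable
# correlated with a protected attribute; "skill_score" is a legitimate signal.
skill_score = rng.normal(0, 1, n)
zip_code_group = rng.integers(0, 2, n)            # 0 or 1
# Historical labels that partly encode the proxy, not just skill:
hired = (skill_score + 1.5 * zip_code_group + rng.normal(0, 1, n)) > 1.0

X = np.column_stack([skill_score, zip_code_group])
model = LogisticRegression().fit(X, hired)

# In a white-box model the learned weights can be read off directly.
for name, coef in zip(["skill_score", "zip_code_group"], model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")
# A large weight on the proxy feature is a red flag worth investigating.
```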


4. Lack of supervision in an organization and lack of accountability within internal teams: Most organizations do not have a dedicated officer in charge of ensuring the smooth functioning of AI or of overseeing models in general; the Baker McKenzie report reveals that 64% of organizations do not have a dedicated Chief Artificial Intelligence Officer (CAIO). Moreover, at the development stage, internal teams work on different components of an algorithm or AI system, so there is a serious lack of internal accountability for biases in the finished AI. Further, when working with third-party vendors, organizations may not be able to ascertain whether the vendor's algorithm is unbiased, a problem exacerbated by opaque black box models that are difficult for organizations to understand.


5. Lack of regulations and actionable guidance: To recognize, rectify and avoid such biases in the future, there is an urgent need for a strict regulatory framework governing artificial intelligence. While guidelines and parameters are being proposed by a variety of sources, they tend to be vague and unclear.



What can organizations do to reduce biases?

The first step to mitigating biases in AI systems is for organizations to set up a dedicated internal ethics board to create policies, training and governance procedures that ensure ongoing mitigation of bias, and to review new and existing algorithms for consistency against those policies. To carry out these responsibilities, internal ethics boards can take the following steps.


1) Appoint a Chief Artificial Intelligence Officer (CAIO): The most effective way to mitigate AI bias is to recognize its presence. Internal ethics boards should include a CAIO who is well-versed in the latest developments in this fast-moving field of research, is in charge of educating others on the team, and is skilled at identifying and recognizing biases at the development stage.


2) Design for Inclusion: Internal boards must make an effort to include members from diverse backgrounds in terms of gender, race, socio-economic background, education and the like, and companies must invest in education, research, development and policy to ensure that the internal teams developing their AI are similarly diverse. A diverse team is better equipped to review and spot bias in AI and to engage the communities it affects. Further, collaboration among engineers, data scientists, ethicists, social scientists and lawyers gathers different viewpoints, and incorporating different preferences and life experiences at the initial development stage helps mitigate bias. Companies looking to deepen their dedication to diversity and inclusion in AI can look to organizations like AI4ALL, a non-profit dedicated to increasing diversity and inclusion in AI education, research, development, and policy.


3) Internal practices to mitigate biases, such as training systems with better data: Once an organization has recognized that biases exist in its AI, its internal ethics board must ensure responsible processes are deployed to help mitigate them. Internal development teams should consider using technical tools and operational practices such as internal “red teams” to keep a check on potential biases that may creep into AI systems; IBM, for example, has released AI Fairness 360, a framework that pulls together common technical tools useful for detecting and mitigating bias. Building AI systems free of bias cannot be achieved only by having more diverse design teams, however; developers also have to train the programs to behave inclusively. Most data sets used to train AI systems contain historical artifacts of bias (for instance, the term “woman” is more strongly associated with “nurse” than with “doctor”), so developers need to identify such associations and remove them before they are perpetuated and reinforced. Although AI systems learn by finding patterns in data, they need human guidance to ensure that the software does not draw incorrect conclusions. Organizations must therefore use their internal teams to curate and prepare training data in ways that promote diversity and inclusion and avoid bias. Microsoft, for example, has set up a Fairness, Accountability, Transparency, and Ethics in AI team, which is responsible for uncovering biases in the data used by the company's AI systems.
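One of the best-known preprocessing techniques collected in toolkits like AI Fairness 360 is reweighing: assign each (group, outcome) combination a weight so that, after weighting, group membership and the favorable outcome are statistically independent in the training data. The sketch below shows the core calculation with made-up numbers; it is a simplified illustration of the idea, not the toolkit's actual API.

```python
from collections import Counter

# Invented training records: (group, favorable_outcome)
train = ([("men", 1)] * 60 + [("men", 0)] * 40 +
         [("women", 1)] * 30 + [("women", 0)] * 70)

n = len(train)
group_counts = Counter(g for g, _ in train)
label_counts = Counter(y for _, y in train)
pair_counts = Counter(train)

# Weight each example so group and outcome look independent after weighting:
# w(group, outcome) = P(group) * P(outcome) / P(group, outcome)
weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (pair_counts[(g, y)] / n)
    for (g, y) in pair_counts
}
for pair, weight in sorted(weights.items()):
    print(pair, round(weight, 3))
# Under-represented combinations (here, women with the favorable outcome)
# get weights above 1, so a downstream learner pays more attention to them.
```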


4) Testing before deploying, at the development stage: It is pertinent that organizations ask additional questions and test their algorithms very early in the development stage, before deployment. In our conversation with Christina Montgomery, Chief Privacy Officer at IBM, she noted that historically the legal team's most impactful catch point came at the very end of the AI development process, right before production, and that pushing the vetting process to that late stage is a bad idea because dollars and time have already been spent.


5) Periodic audits: Further, the internal board must carry out periodic audits of the organization's AI systems to detect any potential bias, carefully testing products for faulty data, bias, safety risks, performance gaps and other problems and working towards their mitigation.
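As a sketch of what one recurring audit check might look like in code, the function below computes each group's false positive rate from a period's predictions, the kind of per-group performance gap a board would want to track over time. The group names, numbers and the choice of metric are illustrative assumptions, not a prescribed standard.

```python
def per_group_false_positive_rates(records):
    """records: iterable of (group, y_true, y_pred) with boolean labels.

    Returns each group's false positive rate; large gaps between groups are
    the kind of performance gap a periodic audit should surface.
    """
    false_positives, negatives = {}, {}
    for group, y_true, y_pred in records:
        if not y_true:                     # only actual negatives matter here
            negatives[group] = negatives.get(group, 0) + 1
            if y_pred:
                false_positives[group] = false_positives.get(group, 0) + 1
    return {g: false_positives.get(g, 0) / n for g, n in negatives.items()}


# Invented predictions from one audit period.
period = ([("group_a", False, True)] * 5 + [("group_a", False, False)] * 95 +
          [("group_b", False, True)] * 20 + [("group_b", False, False)] * 80)
print(per_group_false_positive_rates(period))   # {'group_a': 0.05, 'group_b': 0.2}
```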


6) Develop policies and programs based on responses to AI: Another pertinent step for businesses seeking to mitigate AI bias is to have policies and internal programs in place that respond to how their AI performs in practice. Vetting programs for algorithms should be developed continuously, not only at the development stage but also later, after careful analysis of the responses such algorithms receive; vetting programs that take this feedback into account are more effective at stopping bias from creeping back into systems at a later stage.



Examples of Organizations Mitigating Bias and Designing For Inclusion


LivePerson, a company that develops online messaging, marketing, and analytics products, places its customer service staff (a profession that is 65% female in the United States) alongside its predominantly male coders to ensure collaboration during the development process and to incorporate broader and more varied perspectives.


Microsoft’s Design services group created a program and framework called “Inclusive Design” for assembling design teams that are more considerate of the needs and sensitivities of varied kinds of customers, including those with physical disabilities.


IBM has made efforts to train a skin-cancer computer vision tool on a dataset covering a variety of skin types in order to make diagnoses more reliable. IBM is using computer vision technology with AI to prevent the misdiagnosis of skin cancer, which has disproportionately affected dark-skinned people. While light-skinned Americans are more susceptible to developing skin cancer than African Americans (who are 22 times less likely to develop it), survival rates for African Americans are markedly lower (77% vs. 91% for Caucasians). Currently, this tool is reported to be as accurate as a specialist at recognizing melanoma across a visual dataset.


DefinedCrowd offers an all-in-one platform that leverages machine learning technology and human intelligence to deliver high-quality training data for AI systems. Its services include mitigating bias in AI voice technologies by improving and expanding the language capabilities of voice assistants and chatbots; it helps robots and machines speak more naturally, understand humans better, and see and understand motion better, with a focus on improving human-computer interaction. Several large companies, such as Sony, have adopted the platform to keep a check on bias arising from voice technology.


Further, in governance and justice, the Clooney Foundation and Microsoft's AI for Good initiative launched a program in 2019 called TrialWatch. The program monitors trials around the world to bring injustices to light and to rally support for defendants whose rights have been violated in the criminal justice process. It recruits and trains people to monitor trials; these individuals use an AI-enabled app to capture audio, photos and questionnaires, and TrialWatch then works with human rights experts to assess the fairness of the trials and shares reports and dossiers with other stakeholders.


The University of Washington runs the Diverse Voices project, whose goal is to incorporate input from multiple, diverse stakeholders into technology development so that it better represents the needs of non-mainstream populations.



Regulations Are Coming, Start Mitigating Bias Now


New York City Artificial Intelligence Law on Employment Practices

New York City is one of the first jurisdictions to pass a law aimed at reducing bias in automated employment decisions; it takes effect on January 1, 2023. The law prohibits employers from using an automated tool to screen an employee or candidate for an employment decision unless the tool has been the subject of a bias audit conducted no more than one year before its use. The law also requires that a summary of the results of that bias audit be made publicly available on the employer's website, and that employers give employees and candidates notice explaining what factors go into the AI's decision.
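Bias audits of this kind typically report selection rates by demographic category together with an impact ratio (each category's selection rate divided by the highest category's rate). The sketch below shows that arithmetic with invented numbers; the exact categories and calculations an auditor must use are set by the law's implementing rules, so treat this only as an illustration.

```python
# Invented screening outcomes by demographic category.
screened = {"category_a": 400, "category_b": 250, "category_c": 150}
advanced = {"category_a": 200, "category_b": 100, "category_c": 45}

selection_rates = {c: advanced[c] / screened[c] for c in screened}
top_rate = max(selection_rates.values())
impact_ratios = {c: rate / top_rate for c, rate in selection_rates.items()}

# A published audit summary would report figures along these lines.
for c in screened:
    print(f"{c}: selection rate {selection_rates[c]:.2f}, "
          f"impact ratio {impact_ratios[c]:.2f}")
```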


Algorithmic Accountability Act

U.S. Senator Ron Wyden, D-Ore., together with Senator Cory Booker, D-N.J., and Representative Yvette Clarke, D-N.Y., recently introduced a landmark bill to bring new transparency and oversight to the software, algorithms and other automated systems used to make critical decisions about people's lives. If passed by Congress, the Algorithmic Accountability Act would require companies to take accountability for the automated systems they use and sell: they would have to conduct impact assessments for bias in their automated systems, create new transparency about when and how such systems are used, and empower consumers to make informed choices about the automation of critical decisions.


Stop Discrimination by Algorithms Act

Around the same time, Washington, D.C. Attorney General Karl Racine introduced a first-in-the-nation bill to ban “algorithmic discrimination”. If passed, the Stop Discrimination by Algorithms Act would make it illegal for companies to use algorithms that discriminate against marginalized groups when making decisions about key areas of life such as education, jobs, healthcare, access to credit, loans, insurance and housing. The legislation would build on D.C.'s existing Human Rights Act, which prohibits discrimination on the basis of characteristics such as race, gender, national origin and sexual orientation, and extend it into the technological realm.

Organizations must keep in mind that, despite the varied measures taken to mitigate biases, AI systems can still be exposed to pathways for bias to enter; for instance, they may rely on datasets that are accurate but representative of an unjust society. It is therefore critical that organizations set up internal ethics boards to monitor bias in AI on a regular basis and to ensure corporate governance of such AI systems. It can be challenging to know where and how social prejudices have been built into a technology, so these internal ethics boards must pay close attention to this risk and assess the various ways in which the organization could mitigate such biases.


Find Out More About Algorithmic Bias

There are several resources for learning more about AI, ethics and bias, such as the Alan Turing Institute's Fairness, Transparency, Privacy group; the Partnership on AI; Google AI; IBM's AI Fairness 360; IBM's AI Explainability 360; the AI Education Project; AI for Anyone; the AI Incident Database; the Alethea Framework; The Algorithmic Justice League; All Tech Is Human; EAIGG: Ethical AI Governance Group; EqualAI; the Ethical AI Database (EAIDB); EthicalOS; EthicsGrade; the Responsible AI self-assessment (by KOSA AI); and Responsible Innovation Labs.

