The Top 3 Hindrances to Developing Responsible AI



Artificial intelligence is the future. AI is revolutionizing virtually every industry in the world and, by extension, affecting every human being. Why is it, then, that 87% of AI projects never make it into production? Why is it that people don’t trust AI, even when they believe it’s essential? The gap between the AI buzzwords and their actual implementation is glaring. The reason for this gap is that the AI being produced isn’t workable in the real world, partly because it’s too biased.


As an IP and technology attorney from India currently pursuing a Master of Laws (LLM) at the University of California, Berkeley, I found these findings astounding. In my time contributing to the development of the California Consumer Privacy Act of 2018 and its amendments, I came across many intersections between privacy law and AI, including the ethics surrounding the procurement of personal data to train AI systems.


Through my upcoming project with AI Truth, I’ve spoken to the industry’s movers and shakers about this problem: data scientists, engineers, and experts from major NASDAQ-listed companies in the United States. I’ve quizzed them about how they approach ethical AI, what internal protocols they adopt, and what their greatest challenges are.


Here are the top three gaps in the industry that I have observed so far:


1. Error 404: Acknowledgement not found


The primary gap is the lack of acknowledgement, at the highest levels of a company, of the need for responsible AI. Many founders and CEOs do not grasp the importance of developing ethical AI; the focus is usually on meeting deadlines under pressure, which crowds out a thorough ethical analysis of their algorithms.


Even if company heads do acknowledge the importance of ethical AI in principle, in practice they do not have any systems in place to train everyone in the company in a meaningful way. This is highly problematic, as most AI frameworks are built by junior engineers, who must be trained to look at the development of AI through an ethical lens. Further, ethics must be threaded through the entire lifecycle of AI development, instead of being a mere checklist to be ticked at the end.




2. Bias by design, not just data


To weed out bias from AI, it’s important that the data being fed to the AI is unbiased. There have been many scandalous mishaps involving biased AI recently. In 2021, Facebook apologized after its facial recognition technology labelled Black men as “primates”. Stanford researchers found that AI systems which generate text had a blatant anti-Muslim bias, disproportionately associating Muslims with violence. Researchers also found that seatbelts and airbags in cars are designed for a “standard physique”, i.e. the measurements of an average man, with life-threatening results for women drivers. These incidents were attributed to biased training data: largely Caucasian, English-language, and male-centric datasets respectively. These systems were trained on data which inherently reflects the biases and systemic discrimination in our society. Due to the backlash against these incidents, the industry is recognizing the need for unbiased data and working towards this goal.
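
For a concrete (and deliberately simplified) picture of what that work can involve, the short Python sketch below tallies how a labelled training set is distributed across a demographic attribute before any model is trained. The column name and groups are hypothetical, not drawn from any of the systems above.

    # A deliberately simplified audit of dataset composition (column names and
    # groups are hypothetical): before training, check how the labelled data is
    # distributed across a demographic attribute.
    from collections import Counter

    def composition(rows, attribute="group"):
        """Return each group's share of the dataset."""
        counts = Counter(row[attribute] for row in rows)
        total = sum(counts.values())
        return {group: count / total for group, count in counts.items()}

    training_rows = [
        {"group": "group_a", "label": 1},
        {"group": "group_a", "label": 0},
        {"group": "group_a", "label": 1},
        {"group": "group_b", "label": 0},
    ]
    print(composition(training_rows))  # {'group_a': 0.75, 'group_b': 0.25}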


However, many companies mistakenly believe that if the data used to train the AI is unbiased, the AI itself will be unbiased. This is not always the case. AI can be biased by design itself: it can be created in a way that does not account for the lived experiences and access barriers of all its users. For example, the City of Boston launched an app called StreetBump in 2011, which drew on accelerometer and GPS data to help detect potholes on roads and report them to the city. This was a well-intentioned idea in principle. However, the city quickly realized that smartphone penetration was much lower in low-income neighborhoods, which meant that potholes in these areas were less likely to be reported and, consequently, less likely to be fixed.
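
To make the effect concrete, here is a small hypothetical simulation in Python. The neighborhood names, penetration rates, and pothole counts are invented for illustration and are not Boston’s actual figures; the point is that even when every report is handled identically, the neighborhood with fewer app users ends up with far fewer of its potholes reported.

    # Hypothetical illustration (not the actual StreetBump code or data).
    import random

    random.seed(0)

    # neighborhood: (actual potholes, assumed share of passing drivers with the app)
    NEIGHBORHOODS = {"higher_income": (100, 0.60), "lower_income": (100, 0.15)}

    def simulate_reported(potholes, penetration, passes=3):
        """A pothole gets reported only if at least one of `passes` drivers
        going over it happens to be running the reporting app."""
        reported = 0
        for _ in range(potholes):
            if any(random.random() < penetration for _ in range(passes)):
                reported += 1
        return reported

    for name, (potholes, penetration) in NEIGHBORHOODS.items():
        print(name, simulate_reported(potholes, penetration), "of", potholes, "potholes reported")

    # Typical run: the higher-income neighborhood sees roughly 90+ of its 100
    # potholes reported, the lower-income one roughly 40, despite identical roads.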


To fix this problem, intersectionality of professional backgrounds and identities within companies is essential. This will help design more inclusive systems, as everyone brings their own perspective to the table and can catch bias in existing systems earlier in the development process. However, most companies face difficulties in ensuring diversity within their teams, often due to a lack of available qualified candidates from diverse backgrounds.


3. Be better every day


Professionals in the industry often speak of ethical AI as an end goal to be achieved against a fixed set of parameters. Once the AI is certified as unbiased or ethical before it goes into production, it is treated as kosher. Chapter closed. However, over and above building ethical AI, developers must also ensure that the AI remains unbiased during every phase of its lifecycle by monitoring it continuously for fairness. In this context, ethical AI must be viewed the way cybersecurity is today: as a continuous, ongoing process of finding and fixing vulnerabilities and strengthening systems, rather than an end goal.
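
What such continuous monitoring can look like is sketched below in Python. This is a generic illustration, not any particular company’s tooling: the group labels, the 10% tolerance, and the notion of an “approval” are all assumptions, and a real deployment would choose its own fairness metric and threshold.

    # A generic sketch of continuous fairness monitoring: recompute a simple
    # approval-rate gap between groups on each batch of production predictions
    # and raise a flag when it drifts past an assumed tolerance.
    from collections import defaultdict

    PARITY_TOLERANCE = 0.10  # assumed value; the real threshold is a policy decision

    def approval_rates(records):
        """records: iterable of (group, prediction) pairs, prediction in {0, 1}."""
        totals, approvals = defaultdict(int), defaultdict(int)
        for group, prediction in records:
            totals[group] += 1
            approvals[group] += prediction
        return {group: approvals[group] / totals[group] for group in totals}

    def fairness_alert(records):
        rates = approval_rates(records)
        gap = max(rates.values()) - min(rates.values())
        return gap > PARITY_TOLERANCE, gap

    # Hypothetical batch of logged decisions, tagged with a demographic group.
    batch = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 0), ("group_b", 0), ("group_b", 1)]
    alert, gap = fairness_alert(batch)
    print(f"approval-rate gap = {gap:.2f}, alert = {alert}")  # gap = 0.33, alert = True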


To find out more about AI Truth's Corporate AI Ethics Best Practices research, please visit: https://www.aitruth.org/corporate-ai-ethics-best-practices


