AI Regulation – Why We Can’t Wait

The last two decades have shown us the unfettered potential of technology. We have gone from a world of yellow pages and dial-up internet to one where everything from our names to our food preferences is out in the open. Siri, Alexa, and Bixby are our closest friends, and Instagram knows our vacation plans, often before we do. We all hear about minor glitches and funny stories – Alexa sending a couple’s argument to their friends, or an AI-operated camera mistaking a referee’s bald head for the soccer ball. These instances seem like amusing “AI gone wrong” moments, but after listening to Cortnie Abercrombie and Bradford Newman, I have started to fear them.

Along with these two experts, a few LL.M. students from UC Berkeley have come together to analyze the problems AI presents and how to fix them at the source. Through this research, I have started seeing AI in its truest form: a human organization – one with issues of transparency, bias, accountability, security, and privacy. We unknowingly feed this organization with data from our day-to-day lives and businesses. This biased data, drawn from a small segment of the world’s population, is then analyzed and used as if that limited pool represented all of mankind. Suddenly, I saw how a “glitch” in a self-driving car could kill, and how a facial recognition system’s biased data set could lead to the arrest of an innocent man.

AI is an end-to-end human product: both its developers and its users are human. What it lacks is the “human element”. We can assess our environment based on past experience and instinct, which allows us to make split-second decisions to slow a car or stop a drone. So, if we have a plethora of well-established laws for humans, why are we so slow to regulate an end-to-end human product that causes harm but, unlike humans, cannot tell right from wrong? What is the delay? Can we afford it?


Often, by the time we see the need for legislative change, the problem seems insurmountable. Take environmental and sexual harassment regulations as examples: environmental legislation came only after the common man experienced measurable, visible environmental degradation, and strict sexual harassment regulations came only once the workplace was no longer a safe space for women. Somehow, structural change must wait until the effects of a problem are grave enough to be felt by the general public. With artificial intelligence, we cannot afford to wait. Technology evolves daily, and with each advance, companies get better at hiding their mistakes. Tech companies have mastered making us dependent on their systems and programs – to the point where we filter out the bad and accept ONLY the good. We do not use AI assuming it may be wrong; on the contrary, we assume it is foolproof. We have seen several instances of biased facial recognition and privacy violations, for which authorities depend on corporate self-regulation. Is that enough? The answer is a resounding “NO”.


From speaking to experts and studying how AI is built, it is clear that 90% of the work – and the fault – lies in the data. Data is created every second of every day, and tech companies have learned to capitalize on it in a two-sided market. They see their profit margins as inversely proportional to restrictions on their data collection. Instead of weighing the long-term financial losses of lawsuits, companies see short-term financial gain in selling data sets. Some states are becoming more cognizant of this problem and are introducing regulation to this effect. However, these states are a small fraction of the country, and regulation alone cannot cover every issue AI presents. Through our research, we are examining the need for regulatory bodies and boards to research and review AI, and the restrictions required therein. Our lives have become increasingly dependent on devices and programs that have infamously tracked our locations without our knowledge, recorded our conversations and intimate moments, confused athletes with mugshots in Massachusetts, and conversed with the public through Microsoft’s offensive Tay chatbot.


This is not to say technology is bad, just as industries and factories are not bad. But left unregulated, it has an unchecked capacity to violate our privacy, cause severe injury, and lead to grave human rights violations. So the next time you laugh when a photo app fails to recognize your friend’s face, think of the biased data that causes it – and how that same data could eventually lead to your friend being mistaken for a wanted criminal.


For more information about AI Truth's Corporate AI Ethics Research visit: https://www.aitruth.org/corporate-ai-ethics-best-practices
