AI & Ethics

By: Bing Cao

Image by Gerd Altmann from Pixabay

“We will be able to see everything about a person, everywhere they go, and it will be stored and used in ways that we cannot even imagine.”
– The Megatron Transformer, developed by Nvidia

The word “robot” originated in Karel Čapek’s classic 1920 play R.U.R. (Rossum’s Universal Robots). Ever since, the play’s central theme has captured the human imagination: the fear that the robots we create will take over the world.

The Cambridge Dictionary defines “artificial” as an adjective describing what is “made by people, often as a copy of something natural”, and “intelligence” as a noun referring to “the ability to learn, understand, and make judgements or have opinions that are based on reason”. Artificial Intelligence (AI), a term coined by Stanford Professor John McCarthy in 1955, was defined by him as “the science and engineering of making intelligent machines”. Simply put, humans made AI to learn, understand, make decisions, and interact, in the same way that humans are intelligent.

It is fascinating how our perception of AI has evolved. In a few decades, we went from “AI is impossible” (1972) [1] to “AI will solve all problems” (1999) [2] to “AI will kill us all” (2014) [3]. Today, some scientists believe that AI will soon break through from a tool to a self-improving higher intelligence (the Singularity) with a mind of its own (conscious AI).

Here is an incomplete list of examples of AI: 

  1. Face filters (Snapchat, Instagram, TikTok); 
  2. Voice assistants (Siri, Alexa, Google Home); 
  3. Recommendation engines (Netflix, Google, Amazon); 
  4. Game playing algorithms (Deep Blue, IBM Watson, and AlphaGo);  
  5. Interactive robots (Sophia, who addressed the UN, obtained Saudi citizenship, and sang with Jimmy Fallon); 
  6. Language models (GPT-3 by OpenAI, Megatron Transformer by Nvidia).

AI is becoming ubiquitous, influencing nearly every aspect of life. Around the world, governments, companies, and institutions are rapidly developing intelligent robots and algorithms that can recognize faces, give speeches, automate warehouses, paint, compose music and poetry, and much more. However, with every technological advancement comes the risk of unintended consequences. You have probably seen the following topics: privacy vs. surveillance, intrinsic bias in data and predictive analytics, manipulation of public opinion and addiction, robot rights, autonomous vehicles and autonomous weapons, … the list goes on. 

How do we prepare our future leaders to harness the power of AI while managing its risks? There will not be one magic ethics framework that fits all. To inspire and empower students to apply AI to improve the lives of their communities, ethics must be integrated from the very beginning, throughout the entire design, build, and implementation process. We need to educate our children to go under the hood, understand the data, and ask the ethical questions: Who will it benefit? How does the algorithm work? What may be the sacrifices and tradeoffs in justice, equity, democracy, and privacy? Humans are problem solvers; if we build AI as a copy of ourselves, we should give it a moral compass.


  1. Dreyfus, Hubert L., 1972, What Computers Can’t Do: A Critique of Artificial Reason; second edition published as What Computers Still Can’t Do, Cambridge, MA: MIT Press, 1992.
  2. Kurzweil, Ray, 1999, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, London: Penguin.
  3. Bostrom, Nick and Eliezer Yudkowsky, 2014, “The Ethics of Artificial Intelligence”, in The Cambridge Handbook of Artificial Intelligence, Keith Frankish and William M. Ramsey (eds.), Cambridge: Cambridge University Press.