Why We Should Be Learning About AI
Kent Place School Teachers’ Perspectives on Artificial Intelligence
To introduce our new theme of Ethics and Artificial Intelligence (AI), we asked several Kent Place community members to share their thoughts on AI and why it is important to think critically about the role AI plays in our world.
Our contributors this week are:
Elena Iannuzzi, Upper School Math teacher
Mark Semioli, Middle School History teacher
Dr. Evelyn Hanna, STEM, Computer Science, Engineering & Mathematics Chair
What is your background in AI? What interests you most about AI?
Elena Iannuzzi (EI): Just like most people, I interact with AI on a daily basis! Outside of my everyday interactions with technology and AI, I have experience working with some AI programming languages. I am mostly interested in the mathematical and algorithmic structures that are used to develop AI. I think neural networks are fascinating and the level to which they have advanced in the past few years is astounding.
Mark Semioli (MS): I actually have very little practical experience with AI; as a young student, as a professional in a field unrelated to STEM, and now as a teacher, I have always sat more in the fuzzy space of the humanities. More recently, however, I have tried to expose myself to the newest trends and technologies as they come up in our lives and, perhaps more significantly, in the lives of my own children and my students, who I know will inhabit an entirely different world than the one I grew up in, and even the one I know now. I want to be able to speak their language and interact with them on a relatively equal plane, and that motivates me to stay current rather than fall back on the “back in the day, we did it this way, and it was so much better” mantra that parents default to when they discuss emergent technologies. Finally, I am particularly interested in how the STEM world, and more specifically AI, connects with the more personal elements of our daily lives; it is my belief that the nexus points where these emerging areas of innovative thought and design cross over into users’ own lives are where real positive effects can happen and real progress can take place.
Evelyn Hanna (EH): As a STEM educator, my background in AI consists of providing opportunities for students to explore the role of engineering and mathematics in developing AI.
What are some of the ways – expected or unexpected – that AI impacts everyday life? Why do you think it’s important for people to be aware of the roles AI has in our world?
EI: In our daily lives, AI impacts almost all of our interactions with technology. Most expectedly, it powers self-driving cars and virtual assistants like Siri and Alexa. It also determines the ads we see on our social media accounts, as well as the posts we see from the people we follow, and it even helps our phones identify familiar faces in the pictures we take. It determines the results we see when we search on Google, the songs, movies, or TV shows that streaming services recommend to us, and even the location we might drive to next. Perhaps more unexpectedly, location services on our phones “learn” whom we spend our time in close proximity to and tailor our online experience based on the habits of our closest friends and family. Awareness of the roles that AI plays in our world is crucial if we are to interact with technology mindfully.
MS: One way AI is affecting all our lives is its central role in developing, building, and implementing the ranking algorithms behind search engines, which have made our lives so darn surgical and efficient; they literally buy you back time and reduce the effort you need to exert on everyday tasks, so that you can put all that energy into other aspects of your life: your family, your job, your hobbies, your passions. On the flip side (as the kids say), these algorithms have a real spooky quality that makes it seem like someone (or something) is inside your head, reading and then rereading your mind, or at the very least dragging your brain through their code in order to make you think and do what it wants you to think and do. The intrusive nature of these algorithms is something I find uncomfortable, and I wonder whether our own private lives, and those of the people we care about most (family members, friends, and colleagues), will ever get back to experiences that are both more authentic and volitional. Yet I acknowledge that it is important to understand AI, since it will dramatically change our daily lives (it already has!) in ways we will not be able to decipher or adjust to until the negative effects and ethical concerns can be identified, vetted, and relayed to the layperson. Until then, the euphoria (and the money and capital) behind AI development seems like a freight train plowing forward without much to stop it. I only hope we can take some time to understand all aspects of the journey.
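The ranking algorithms mentioned above can be illustrated with a deliberately tiny sketch. This hypothetical example scores each page by how often the query’s words appear in it; real search engines combine hundreds of signals, so every name and number here is invented for illustration only.

```python
# Hypothetical toy "search engine": rank pages by how many times the
# query's words appear in each page (simple term-frequency scoring).
# Real engines weigh many more signals; this is only an illustration.

def score(query, page_text):
    words = page_text.lower().split()
    return sum(words.count(term) for term in query.lower().split())

# Invented example pages
pages = {
    "recipes": "easy pasta recipes quick pasta dinner ideas",
    "history": "the history of pasta in italian cooking",
    "travel":  "top travel destinations in italy",
}

def rank(query, pages):
    # Highest-scoring pages come first in the results list
    return sorted(pages, key=lambda name: score(query, pages[name]), reverse=True)

results = rank("pasta recipes", pages)
# The page that mentions the query words most often ranks first.
```

Even this toy version shows why results feel “tailored”: whatever signal the algorithm counts, it silently decides what you see first.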
EH: As with most innovations, it is difficult to predict all the unintended uses and consequences of advancements in technology. Because AI has become such a part of our everyday lives, we might not even realize that a particular action leans on AI: the customized ads that populate our web browsers, for example, or the use of Waze to find the most efficient route to a destination. I think it’s important for people to be aware of the roles of AI because it impacts everything from something as innocent as finding the quickest way home to something as troubling as manipulating human behavior.
What do you think is the most pressing ethical issue related to AI?
EI: One of the most pressing ethical issues I’m thinking about is the implicit bias that gets coded into the training of neural networks and other deep-learning algorithms. While we may think that the computer is “learning” our behavior and the behavior of others independent of human biases, we have to remember that humans develop the framework for these algorithms and essentially tell the computers how to learn. More specifically, people build the framework of the AI machinery and then feed the program massive data sets to “train” the neural network how to respond to certain actions and inputs before putting it into public use. Training pre-existing biases into neural networks could end up perpetuating those same biases in areas such as healthcare, housing markets, and lending services, to name a few. I worry about the ethical consequences of people believing that technology, and AI in particular, is free from human bias or error.
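A minimal sketch of how that training pipeline can absorb bias, using entirely invented data. In this hypothetical example, historical loan decisions required applicants from one group to clear a higher income bar; a plain logistic-regression model trained on that history then treats two otherwise-identical applicants differently. All names and numbers are assumptions for illustration, not a real system.

```python
import math
import random

random.seed(0)

# Invented "historical" loan decisions: both groups have the same income
# distribution, but group B historically needed a higher income to be approved.
def make_history(n=2000):
    data = []
    for _ in range(n):
        group = random.choice([0, 1])        # 0 = group A, 1 = group B
        income = random.random()             # normalized income, same distribution
        bar = 0.4 if group == 0 else 0.6     # the biased historical rule
        data.append(((group, income), 1 if income > bar else 0))
    return data

# Plain logistic regression trained by stochastic gradient descent.
def train(data, lr=0.5, epochs=200):
    w_group = w_income = bias = 0.0
    for _ in range(epochs):
        for (group, income), label in data:
            p = 1 / (1 + math.exp(-(w_group * group + w_income * income + bias)))
            err = p - label
            w_group -= lr * err * group
            w_income -= lr * err * income
            bias -= lr * err
    return w_group, w_income, bias

def approve_prob(model, group, income):
    w_group, w_income, bias = model
    return 1 / (1 + math.exp(-(w_group * group + w_income * income + bias)))

model = train(make_history())
# Two applicants identical except for group membership:
p_a = approve_prob(model, group=0, income=0.5)
p_b = approve_prob(model, group=1, income=0.5)
# The model has "learned" the historical bias: p_a comes out high, p_b low,
# even though nothing about the applicants differs except the group label.
```

The point of the sketch is that no one wrote “discriminate” anywhere in the code; the bias arrives entirely through the training data, which is exactly why it is so easy to mistake the model’s output for an objective judgment.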
MS: Bias is a real ethical concern related to AI. Machine learning systems can, intentionally or inadvertently, end up both repeating and rapidly disseminating existing biases. After all, AI remains, at base level, a product of human endeavor, and we now know better than ever that we humans are by our very nature biased actors on this Earth, no matter how noble our intentions. The unconscious biases we carry with us, many of which are still to be uncovered by researchers and laypeople alike, will only serve to reinforce existing patterns of bias, prejudice, and discrimination that ultimately hinder the growth of individuals and communities. I think this will be a fresh area of focus for ethicists in the years to come.
EH: There are many pressing ethical issues related to AI. In my classes, I have students focus on coding bias and the implications of biased AI on systemic injustices.
What is essential for K-12 educators to know, teach, or make students aware of in terms of AI and their future?
EI: Over the past 20 or so years, computer technology has expanded at a rapid pace that doesn’t seem to be slowing down anytime soon, and AI technology is no different. Because of this, we can only speculate about what AI will look like and be capable of by the time many of our younger students graduate. More than anything, as students interact with technology more and more in their daily lives, they need to be aware of how AI might affect their decision-making. If biases are built into these neural networks, students need to be aware of that potential so they are able to recognize and counteract it. Additionally, as students develop into consumers, they need to be aware of how different applications of AI push them toward certain purchasing decisions. Overall, students need to be taught to interact with AI technology with a mindful and critical eye.
MS: As with any other revolutionary innovation (it seems we have one every five years now), it is important to take a balanced approach to the thought, implementation, and use of AI technologies. If past practice can predict future action, individuals adopt and consume innovative products and ideas extremely quickly, sometimes without thinking through all the consequences and effects that come with early adoption. While it is important to keep an open mind about the risks that come with any worthwhile innovation, that is not easy in the fast-paced, what’s-the-next-big-thing world we inhabit; the institutional and social pressures to stay current and at the forefront of this revolution have never been more intense, and young people are the most vulnerable.
So we, as educators, need to be the individuals in the lives of young people who demand reflection, call for objectivity, and expose students to multiple lenses on AI, allowing for practical and balanced application in this changing world. Too much of a good thing can always seem to be a “bad thing” in one way or another, but it is essential that we as educators allow for reasoned debate and argument about AI and ultimately lead students to think more deeply and broadly about it. So while we need to celebrate the innovators, the STEM inventors, and the other designers and thinkers at the forefront of revolutionizing our societies (they have seemingly erased the line of what is possible, which is a tough message to contextualize for young people), it is also essential to recognize that there are lessons from the past and present that can help mold how we approach this new future together.
EH: My goal is to cultivate many accessible ways for students to unpack the complexity of AI. As an educator, I believe it is imperative for students to become active innovators instead of passive consumers of technology. When it comes to AI, whether or not a student pursues a career in STEM, my goal is to have them feel confident in their ability to understand the impact of computer science (including AI) in their lives and to be able to make informed decisions based on their understanding of any advancement in technology.