Considerations for ChatGPT

By Cassie M. ’24

As you probably know by now, ChatGPT is a chatbot built by OpenAI that can write essays, songs, poetry, and plays based on a given prompt; chat conversationally while keeping track of earlier parts of the conversation; write lines of code; and solve math problems. ChatGPT is a type of generative AI (the G in GPT stands for generative), meaning that it responds to written prompts from the user with human-like text. ChatGPT is built on GPT-3.5, a massive AI neural network with approximately 175 billion parameters. GPT-3.5 was trained on an enormous amount of digital text from the internet; from all of this text, it learned a large language model (LLM), which is like a mathematical map of human language. This LLM allows ChatGPT to interpret user inputs and generate conversational replies.
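The core idea behind that "mathematical map" is predicting which words tend to follow which, based on patterns in the training text. GPT-3.5 does this with a neural network at enormous scale, but a toy sketch can show the principle. The tiny corpus and the greedy generation strategy below are my own illustration, not how OpenAI's model actually works.

```python
from collections import defaultdict, Counter

# A tiny "training corpus" -- real models train on billions of words.
corpus = "the cat sat on the mat . the cat sat on the rug .".split()

# Count how often each word follows each other word (a bigram table).
# This is a crude stand-in for the statistical patterns an LLM learns.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(start, length):
    """Extend `start` by repeatedly choosing the most common next word."""
    words = [start]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the", 4))  # continues the prompt word by word
```

Even this toy version captures the key point: the model has no understanding of cats or mats; it simply continues a prompt with statistically likely text, which is why LLM output sounds fluent yet can still be wrong.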

Regarding how ChatGPT will affect students, I’ve been thinking about other tools we use to assist our learning, like calculators, spellcheck, and Grammarly, and how ChatGPT compares to them. For example, when we are younger, we learn how to do addition, subtraction, multiplication, and division by ourselves. As we get older, we use a calculator to handle these mundane tasks so we can move on to higher-level thinking. By contrast, ChatGPT can generate entire essays and can therefore take away the critical thinking components of writing. This makes me wonder whether there are ways for ChatGPT to support student learning while still allowing students to perform the critical thinking components of an assignment, as a calculator does.

I’ve also been thinking about how ChatGPT will affect other companies and how this may affect our lives in the future. In early February, Microsoft released a new version of its search engine, Bing, that contains a chatbot similar to ChatGPT. This release seems rushed and, given its timing, was likely meant to help Microsoft compete with ChatGPT. As Microsoft’s release shows, competition in the field of chatbots may push companies to release bots too early, which could result in bots that produce more disinformation and carry more inherent biases than bots that spend longer in development.