On Generative AI
The Degenerative, part 2
The dangers that are particular to Generative AI include:
Inaccuracies and hallucinations: generative models are a marvel at churning out coherent, fluent, human-like language. Hidden in all that glibness are factual errors, half-truths, fabricated references and pure fiction – referred to as “hallucinations”1,2. At the bottom of the ChatGPT interface, underlining all conversations, is the notice that ‘ChatGPT may produce inaccurate information about people, places, or facts’1. The accuracy of ChatGPT can be around 60% or worse, depending on the topic2,3.
To make things worse, ChatGPT has a tendency to present claims as truths, without evidence or qualification. When asked specifically for references, it can conjure sources that do not exist or that do not support the claims made in the text2,4. Yet many users tend to use it like an “internet search engine, reference librarian, or even Wikipedia”5. When teachers or students use it to get information on a topic of which they have no prior knowledge, they run the risk of learning the wrong thing or presenting false knowledge to others1,5.
The success of today’s LLMs lies in the sheer number of parameters and the amount of training data, which they use to model how words are stitched together in human communication. Teachers and students should always keep in mind that the text generated by conversational models is not grounded in any understanding of that text by those models, or even in a notion of reality1. While they can manipulate linguistic form with varying degrees of success, they do not have access to the meaning behind this form6. “Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered… Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round”7.
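The point that fluency need not imply understanding can be seen even in a toy next-word model. The sketch below is only an illustrative bigram sampler, not how production LLMs are built (those use transformer networks with billions of parameters), but it shows the underlying principle: each word is chosen purely from co-occurrence statistics in the training text, so the model will happily continue “the earth is” with either “flat” or “round”.

```python
from collections import defaultdict
import random

# Tiny "training corpus" containing contradictory statements.
corpus = "the earth is flat . the earth is round . the earth orbits the sun .".split()

# Record, for every word, which words were observed to follow it.
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

def generate(start, n_words, rng):
    """Sample a continuation by repeatedly picking a word seen after the current one."""
    words = [start]
    for _ in range(n_words):
        words.append(rng.choice(bigrams.get(words[-1], ["."])))
    return " ".join(words)

# Different random seeds yield different, equally "confident" continuations.
print(generate("the", 6, random.Random(0)))
print(generate("the", 6, random.Random(1)))
```

Every output is locally fluent – each adjacent word pair occurred somewhere in the training text – yet the model holds no representation of which statements are true, only of which words tend to follow which.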
Shifting or worsening power and control: generative AI depends on huge amounts of data, computing power and advanced computing methods. Only a handful of companies, countries and languages have access to all of these. Yet, as more people adopt these technologies, much of humanity is pressed to toe their line, alienated and stripped of its expressive power1.
While the creators keep the power, they outsource the responsibility. The onus of sanitising the output of ChatGPT, for example, was put on Kenyan workers “who had to sift through violent and disturbing content like sexual abuse, hate speech, and violence”4.
Copyright and intellectual property infringement: much of the technological know-how behind generative systems is guarded behind corporate walls. Yet the data is taken from the general public1. Is it acceptable to take pictures that were made public on some platform and use them without the knowledge or consent of the subject? What if someone’s face is used for racist propaganda, for example8? Is making content private the only way to keep it out of Gen AI’s reach?
Beyond public data, language models can take content from behind paywalls and summarise it for the user. Image models have been known to produce pictures whose pieces clearly showed watermarks. There is also the issue of Creative Commons licences, under which an author makes their work open to the public but must be cited every time it is used – which models may or may not do.
For teachers, this raises moral, ethical and legal issues. If they take content generated by models, are they free to use it and publish it as they wish? Who is liable if it is copyrighted or licensed under Creative Commons9? How is the user even to know they are using other people’s property1? Unfortunately, there are no clear guidelines on the topic. Until a directive comes about, we have to wait, watch and tread with care.
Long-term effects of using Gen AI in education: For all the ways generative AI could be used in education, it is not clear what the long-term effects of such use would be:
- Since the act of writing also structures thinking, how would writing to outlines supplied by Gen AI affect students1?
- Would it affect scope of thinking, critical thinking, creativity and problem-solving skills1?
- Will it make students over-reliant on it, given the effortlessness with which information and solutions can be accessed1,9,10?
- Would students still be motivated to investigate the world and come to their own conclusions10?
- Would it suck us into a world view that is disconnected from the reality around us?
- How many skills would we lose for every step towards mastery in prompting techniques?
Concentrating on higher-order skills and leaving the grunt work to AI might sound like a good idea, but repeated practice of certain foundational, lower-order skills is indispensable, because the perseverance – and even frustration – that comes with such practice is often needed for acquiring higher-order skills1,8. It is also necessary to decrease learners’ reliance on technology for basic calculations, as such reliance undermines human agency and learners’ confidence to face the world alone.
Some countermeasures to guard against potential long-term harms could be:
- Using language models as a starting point only, to generate possibilities and explore different perspectives, rather than as a one-stop solution for all needs10;
- Verifying the output of the models with direct experiments or alternative sources;
- Always putting the teacher in the loop10;
- Promoting social learning and increasing exposure to creative human output1;
- Actively seeking out other educational resources and off-the-screen activities10;
- Trying to find other explanations, other modes of thinking and other approaches.
It is always good to watch out for the tendency to assign false equivalences between humans and machines and even concede superiority to Gen AI. For example, it is often stated that humans cannot crunch as much data as AI. Is crunching of gigabytes and gigabytes of data even necessary for humans, given our skills in pattern identification, extrapolation and creativity? Because AI can analyse the content of 100 books in a moment, does it necessarily follow that a student won’t enjoy or benefit from one of those books? Is doing something faster necessarily even a good thing and a measure that we want to adopt8?
We have to bear in mind that children are not taught for the world and the technologies that exist today. They are prepared for, or given the skills to prepare themselves for, a world that will come about in 10–15 years8. The way ChatGPT revolutionised so much in just one year makes more of a case for education beyond ChatGPT than for education for ChatGPT. Students need to be able to think for themselves, be resilient enough to adapt to change, and grow with the new challenges that life throws at them.
The ultimate goal of education cannot be to produce efficient operators of intelligent machines or worker ants for the production line, but to help form free-thinking, creative, resilient and fully rounded citizens. There are critical questions to be mulled over, and long-term effects to be screened, before deciding how best to adopt a technology to achieve this goal. This important task cannot be relegated to AI, generative or not.
1 Holmes, W., Miao, F., Guidance for generative AI in education and research, UNESCO, Paris, 2023.
2 Tlili, A., Shehata, B., Adarkwah, M.A. et al, What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education, Smart Learning Environments, 10, 15, 2023.
3 Lewkowycz, A., Andreassen, A., Dohan, D. et al, Solving Quantitative Reasoning Problems with Language Models, Google Research, 2022.
4 Cooper, G., Examining Science Education in ChatGPT: An Exploratory Study of Generative Artificial Intelligence, Journal of Science Education and Technology, 32, 444–452, 2023.
5 Trust, T., Whalen, J., & Mouza, C., Editorial: ChatGPT: Challenges, opportunities, and implications for teacher education, Contemporary Issues in Technology and Teacher Education, 23(1), 2023.
6 Bender, E.M., et al, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). Association for Computing Machinery, New York, 610–623, 2021.
7 Chomsky, N., Roberts, I., Watumull, J., Noam Chomsky: The False Promise of ChatGPT, The New York Times, 2023.
8 Vartiainen, H., Tedre, M., Using artificial intelligence in craft education: crafting with text-to-image generative models, Digital Creativity, 34:1, 1-21, 2023.
9 Becker, B., et al, Programming Is Hard – Or at Least It Used to Be: Educational Opportunities and Challenges of AI Code Generation, Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (SIGCSE 2023), Association for Computing Machinery, New York, 500–506, 2023.
10 Kasneci, E., et al, ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education, EdArXiv, 2023.