37 Extrapolation in Higher Education

Generative AI tools can make educated guesses about a user’s learning style, interaction patterns, and academic results, even when users try to withhold that information. While such inferences can be valuable for educators, they raise several concerns.

The key concern is that for each piece of information an educator directly provides to an AI tool, which is designed to make inferences, the tool can deduce multiple additional pieces of information. These deductions become more accurate as more data is supplied, and with each exchange the AI builds a more refined profile of the user, as illustrated in the sketch below.
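To make this compounding effect concrete, the following is a minimal, hypothetical sketch in Python of how a profile might grow across interactions. The function names, signals, and inference rules here are invented for illustration and do not describe any real AI tool’s behaviour or API.

```python
# Hypothetical illustration: each supplied fact yields several guessed ones,
# and the accumulated profile grows with every exchange.
from collections import defaultdict

def infer_attributes(provided: dict) -> dict:
    """Toy inference step: derive extra attributes from what was shared."""
    inferred = {}
    if "assignment_topic" in provided:
        inferred["likely_course"] = f"course related to {provided['assignment_topic']}"
    if "question_phrasing" in provided:
        # Hesitant wording might be (rightly or wrongly) read as low confidence.
        hesitant = any(cue in provided["question_phrasing"].lower()
                       for cue in ("not sure", "confused", "again"))
        inferred["estimated_confidence"] = "low" if hesitant else "high"
    if "time_of_day" in provided:
        inferred["study_pattern"] = "late-night" if provided["time_of_day"] >= 22 else "daytime"
    return inferred

profile = defaultdict(dict)

def record_exchange(user_id: str, provided: dict) -> None:
    """Merge both the supplied data and the guesses into a persistent profile."""
    profile[user_id].update(provided)                      # one fact given...
    profile[user_id].update(infer_attributes(provided))    # ...several facts deduced

# Each interaction adds more to the profile than the user explicitly shared.
record_exchange("student_42", {"assignment_topic": "statistics", "time_of_day": 23})
record_exchange("student_42", {"question_phrasing": "I'm not sure, could you explain p-values again?"})
print(dict(profile["student_42"]))
```

Even in this toy version, two short exchanges produce a profile containing a guessed course, an estimated confidence level, and a study pattern, none of which the student stated directly.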

The AI’s ability to make predictions beyond what it is explicitly given can unintentionally expose private information. This is especially problematic in education, where protecting student data is crucial. For example, an AI might conclude that a student has a learning difficulty or is dealing with mental health challenges simply by analyzing how the student uses the system, even if this was never explicitly stated. While this predictive capability could enhance personalized learning, it also risks invading privacy. As a result, strict guardrails around data handling and use are necessary.
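One form such a guardrail could take is a filter that prevents sensitive inferred categories from ever being stored or acted on. The sketch below is a hypothetical illustration only; the category list and function names are assumptions, not part of any specific tool, policy, or regulation.

```python
# Hypothetical guardrail: block sensitive inferred categories before storage.
SENSITIVE_CATEGORIES = {"learning_difficulty", "mental_health", "disability_status"}

def apply_guardrail(inferred: dict) -> dict:
    """Drop inferred attributes that fall into sensitive categories."""
    allowed = {k: v for k, v in inferred.items() if k not in SENSITIVE_CATEGORIES}
    dropped = set(inferred) - set(allowed)
    if dropped:
        # In a real system this would be logged for audit, not silently ignored.
        print(f"Guardrail blocked sensitive inferences: {sorted(dropped)}")
    return allowed

# A system might derive these from usage patterns alone, without the student saying so.
raw_inferences = {
    "study_pattern": "late-night",
    "learning_difficulty": "suspected",  # never explicitly stated by the student
}
safe_profile_update = apply_guardrail(raw_inferences)
print(safe_profile_update)  # only the non-sensitive attribute survives
```

A filter like this addresses storage and use, but it does not stop the inference from being made in the first place, which is why broader policies on data handling remain necessary.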


This work was generated in collaboration with ChatGPT, an AI language model developed by OpenAI.

This work is licensed under CC BY 4.0
