Personalising Learning

26 The Flip Side of ALS: Some Paradigms to Take Note Of

Despite the promised potential of adaptive learning systems (ALS), many questions remain unanswered. There is not yet enough research, or documentation of classroom practice, to address these issues:

  • Recommendation systems are used to suggest movies to Netflix users; they help consumers home in on the right choice of, say, audio speakers on Amazon. But can they actually improve learning outcomes for each student in the classroom1?
  • Does constantly focussing on performance and individualisation affect a student’s psychological well-being2?
  • Individualisation demands a lot of discipline and self-regulation from students: they have to start working by themselves and keep working until they finish all assigned activities. Are all students able to do this without help2?
  • How do we balance individualisation with social learning opportunities3?
  • How do we go from using an ALS as a support for a single topic to using these systems systematically, across topics and subjects2? What about the curriculum change such an incorporation of adaptivity would require3?
  • What about the required infrastructure? What needs to be done about data and privacy, as well as bias and reinforced stereotypes3?

When developing ALS, certain principles are applied, either explicitly or implicitly. These are not always without consequences.

A paradigm of ALS: old is gold

What do machine learning systems do when they predict or recommend something? They use the student’s past experiences, preferences and performance to choose what to recommend; they look to the past in order to predict the future. Thus, these systems are always biased towards the past4. Machine learning works best in a static and stable world, where the past looks like the future5. ALS, being based on machine learning models, do more or less the same thing, but with the addition of pedagogical considerations.
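This backward-looking bias can be seen even in a minimal sketch of a past-based recommender. Everything below is hypothetical (the data, the tags, the function name); real ALS are far more elaborate, but the limitation is the same: the system can only ever suggest activities that resemble what is already in the student’s history.

```python
# Hypothetical sketch of a past-based recommender: it ranks tags by
# the student's past scores and suggests an unseen activity with the
# best-scoring tag. An activity on a topic absent from the history
# ("algebra" below) can never be recommended.

def recommend_next(history, catalogue):
    """`history` maps topic tags to past scores; `catalogue` is a
    list of activities, each with an id and topic tags."""
    best_tag = max(history, key=history.get)          # strongest past topic
    candidates = [a for a in catalogue
                  if best_tag in a["tags"] and a["id"] not in history]
    return candidates[0] if candidates else None

history = {"fractions": 0.9, "geometry": 0.4}
catalogue = [
    {"id": "act-1", "tags": ["geometry"]},
    {"id": "act-2", "tags": ["fractions"]},
    {"id": "act-3", "tags": ["algebra"]},   # never surfaces: no history
]
print(recommend_next(history, catalogue)["id"])   # -> act-2
```

However the scoring is refined, nothing outside the recorded past can enter the recommendation.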

As a consequence, these systems cannot account for disruptions to normality such as the COVID pandemic, health issues and other problems. They can also struggle to take account of age, growth, mastery of new competencies and the personal evolution of young humans.

Is student behaviour even predictable? How many times can we repeat a formula that worked well in the past before it becomes boring and repetitive, and impedes progress6? Even if such a prediction were possible, is it prudent to expose students only to what they like and are comfortable with? How much novelty is overwhelming and counterproductive6?

It is difficult to decide how similar recommended activities should be, how many new types of activities should be introduced in one session and when would it be productive to push a student to face challenges and explore new interests. The answers do not lie in the students’ pasts alone.

A paradigm of ALS: the explicit reflects the implicit

Even where the past can be used reliably to predict the future, the past itself can be difficult to capture accurately. How can YouTube know that a user liked a video? It is easy when they clicked the Like button or subscribed to the channel after watching. But such explicit behaviour is rare. Recommendation systems regularly have to resort to implicit signals that may or may not reflect the truth4. For example, YouTube uses the time a user spent watching a video as an implicit signal that they liked it and would like to watch similar content. But just because a video played on someone’s computer till the end hardly means the person liked it, or even watched it7.
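A toy sketch makes the gap between signal and truth concrete. The threshold and field names here are invented for illustration, but the logic mirrors what the text describes: watch time is the only evidence, so an autoplayed video counts as a “like”.

```python
# Hypothetical sketch: inferring "liked" from watch time alone.
# The 80% threshold and the session fields are assumptions.

def implicitly_liked(session, threshold=0.8):
    """Treat watching at least `threshold` of a video as an implicit like."""
    return session["seconds_watched"] / session["video_length"] >= threshold

# A video that autoplayed to the end while the user was away from
# the screen is indistinguishable from one they genuinely enjoyed:
away = {"seconds_watched": 300, "video_length": 300}
print(implicitly_liked(away))   # True, though nobody was watching
```

The system records a preference that may never have existed; everything downstream then optimises for it.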

What about how feedback is recorded in an adaptive learning system? To gauge, for example, whether a student was attentive during an activity, the system might record the number of digital resources they clicked on, and when and for how long they accessed them. But these measures cannot accurately reflect their level of attention1.

For example, a student who is clear about what to do for an activity might consult a few resources and zero in on the critical points quickly. One who is less clear might open and spend time on all the listed resources without learning much1. The first student could then be wrongly flagged for lack of motivation and made to do additional work.

Moreover, the machine learning models can only note that two things happened – a student clicked on a resource and the same student scored high on the associated exercise. They cannot conclude that the student scored high because they consulted the resource – they can infer correlation, but not causation5.
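A small invented log shows what the model actually has to work with. The conditional probability below is computable from the data; the causal story (“the resource caused the score”) is not in the data at all.

```python
# Hypothetical interaction log: each entry records only that two
# events co-occurred for a student.

events = [
    {"student": "A", "clicked_resource": True,  "high_score": True},
    {"student": "B", "clicked_resource": True,  "high_score": True},
    {"student": "C", "clicked_resource": False, "high_score": False},
]

# Correlation is computable from the log...
both = sum(e["clicked_resource"] and e["high_score"] for e in events)
clicked = sum(e["clicked_resource"] for e in events)
print(f"P(high score | clicked) = {both / clicked:.2f}")   # 1.00

# ...but the log cannot say whether clicking caused the high score,
# or whether stronger students simply both click and score well.
```

Perfect correlation in a log like this is equally consistent with either causal story, which is exactly why the inference step belongs to a human.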

The unfair expectation of some ALS is that the teacher will dive in and remedy such errors. In other systems, the teacher does not even have the option to do so.

A paradigm of ALS: everything can be replaced by this one question

Recommendation systems cannot handle multiple goals. The aim of an ALS is often put forth in the form of a single question: the surrogate question. What rating did a user give a movie? How long did they watch a video? What is the student’s score in a quiz? How well did they satisfy the criteria the machine uses to measure attentiveness? The systems are then trained to attain these goals and tested on whether they were achieved. Their performance is constantly adjusted to maximise their score with respect to these goals.

If scoring well on the quiz is the goal, certain content is recommended in a certain way; exam performance is the surrogate problem being solved. If the goal is simply to make students click on many resources, recommendations will be tailored to push them to do just that; here, making resources adequately attractive is the problem.
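The dependence on the chosen question can be sketched in a few lines. The activities and predicted values below are invented; the point is that the same catalogue, ranked under two different surrogate objectives, yields two different recommendations.

```python
# Hypothetical sketch: one catalogue, two surrogate questions.
# The "winning" activity depends entirely on the objective the
# designer chose to optimise.

activities = [
    {"id": "drill", "predicted_quiz_gain": 0.9, "predicted_clicks": 2},
    {"id": "game",  "predicted_quiz_gain": 0.3, "predicted_clicks": 9},
]

by_quiz   = max(activities, key=lambda a: a["predicted_quiz_gain"])
by_clicks = max(activities, key=lambda a: a["predicted_clicks"])

print(by_quiz["id"], by_clicks["id"])   # drill game
```

Neither ranking is “objective”: each simply answers the question it was built to answer.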

The choice of the surrogate question has an outsized influence on how the ALS works. What is more, contrary to the promotion of ALS as objective systems, there is more art than science in selecting the surrogate problem for recommendations4.

All tech is not hi-tech

As we have seen, many decisions go into the making of an ALS – what data is measured, how this data is used to gauge feedback and other information, what goals are optimised, and what algorithms are used to optimise them. Often it is programmers, data scientists, and finance and marketing experts who make these decisions. The input of teachers and pedagogical experts is rare and often comes only after the design process2. Products are not field-tested before adoption in schools, and their proclaimed effectiveness is often based on testimonials and anecdotes instead of scientific research2.

As a result, what a school needs and is familiar with has little impact on what software companies build. Cost, availability and infrastructure can have a major say in what schools can buy. It is important to bear this in mind when deciding if or how to use a particular product. Perhaps it is better not to think of them all as adaptive learning systems or AI, but as individual systems with wildly different objectives, designs and capabilities.

ALS can be used for personalising feedback, scaffolding and practice. They can find gaps in learning and remedy them, within the limits of their programming and design. They cannot detect ‘teachable moments’, or sense when it is right to capitalise on the mood of the class to introduce a new idea or example. These capabilities, which make learning magical and help a lesson endure in the student’s mind, remain solely the forte of the teacher.


1 Bulger M., Personalised Learning: The Conversations We’re Not Having, Data & Society Working Paper, 2016.

2 Groff, J., Personalized Learning: The state of the field and future directions, Center for curriculum redesign, 2017.

3 Holmes, W., Anastopoulou, S., Schaumburg, H., Mavrikis, M., Technology-enhanced Personalised Learning: Untangling the Evidence, Stuttgart: Robert Bosch Stiftung, 2018.

4 Covington, P., Adams, J., Sargin, E., Deep Neural Networks for YouTube Recommendations, Proceedings of the 10th ACM Conference on Recommender Systems, ACM, New York, 2016.

5 Barocas, S., Hardt, M., Narayanan, A., Fairness and Machine Learning: Limitations and Opportunities, MIT Press, 2023.

6 Konstan, J., Terveen, L., Human-centered recommender systems: Origins, advances, challenges, and opportunities, AI Magazine, 42(3), 31-42, 2021.

7 Davidson, J., Liebald, B., Liu, J., Nandy, P., Van Vleet, T., The YouTube Video Recommendation System, Proceedings of the 4th ACM Conference on Recommender Systems, Barcelona, 2010.

Licence

AI for Teachers: an Open Textbook Copyright © 2024 by Colin de la Higuera and Jotsna Iyer is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.