Ethics and Analytics

Here are some examples of ethically questionable practices, drawn from actual cases taking place today, each with a parallel in education:

  • A patient is required to see a healthcare robot instead of a human (Bresnick, 2018); students have been taught by robot tutors (Eicher, Polepeddi, & Goel, 2018) without being told that the tutors are robots.
  • Google reveals ‘Project Nightingale’ after being accused of secretly gathering personal health records (Griggs, 2019); Google also offers a ‘Classroom’ application.
  • Analytics data is being used to adjust health insurance rates (Davenport & Harris, 2007); it is no stretch to imagine learning analytics data being used for this purpose.
  • A company experiments with using news feeds and other data to alter the emotional states of users (Kramer, Guillory & Hancock, 2014); we can foresee similar experiments aimed at keeping classes in order.
  • A cafe in Delhi uses facial recognition software to bill its customers (Sullivan & Suri, 2019); a school district in New York has started doing the same “for security” (Klein, 2020).
  • A physician refuses to apply a certain life-saving technology because of his religion (Kemp, 2013); educators may refuse to use learning analytics for similar reasons.

All of these are cases where advanced computing applications and learning analytics (herein called ‘analytics’ for brevity) are being used. How do we address these practices? Are they ethically acceptable in education? What would constitute ‘ethically acceptable’? On what basis should we decide, one way or another? That is one topic of this essay.

Additionally, the use of analytics and the infrastructure that supports them may be pushing society in a direction that educators, among others, find uncomfortable. Analytics offer powerful new tools for teaching, but these tools may be misused by those with unethical intent. As Sacha Baron Cohen argued recently, the platforms created by Facebook, Google, Twitter, and other companies constitute “the greatest propaganda machine in history” (Baron Cohen, 2019). Where do we draw limits – if any – on the social impact of analytics, how they are used, and who can use them? That is another topic of this essay.

Technology today is either already able, or will soon be able, to perform many of the functions currently performed by humans. The examples offered above are all cases where such intervention becomes potentially risky or unwanted. There may be as many reasons for this as there are examples. And for many such cases, there may be as strong an argument in favour of the intervention as against.

This essay does not propose to resolve such issues. Indeed, it remains an open possibility that some of these issues may have no resolution. Rather, it is an exploration of how we as a society ought to address such issues. What is the scope of the problem facing us? What are the historically relevant approaches to social dilemmas, and why do they seem unequal to the task before us today? What counts as evidence for our discourse, and what would an appropriate resolution look like?

License

Ethics, Analytics and the Duty of Care Copyright © by National Research Council Canada is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
