There are several challenges that are specific to higher education. Three questions worth considering are:
- How do we redefine academic integrity?
- How do we protect our intellectual property?
- How do we ensure equitable access to gen A.I. tools?
Redefining Academic Integrity
The growing popularity of A.I. has brought with it increased attention and conversation around academic integrity in higher education. However, violations of academic integrity happened before ChatGPT, whether students copied from each other, bought a paper or assignment (aka "contract cheating"), or copied verbatim from the Internet without attribution. The introduction of gen A.I. tools brings with it opportunities to initiate conversations with students and bolster classroom messaging around academic integrity, as well as to redesign assessments so that they are authentic, align with course learning outcomes, and emphasize process over product.
As generative artificial intelligence continues to proliferate, the line between "original" and "plagiarized" grows blurrier. Because gen A.I. models are not people, simplistic definitions of plagiarism (i.e., "copying others' words") do not capture the complexity of using A.I.-powered tools. Sarah Elaine Eaton offers a "postplagiarism" paradigm that recognizes the hybridity of human writing and the possibility of enhancing human creativity [1]. Though her postplagiarism tenets may lean too heavily toward technological optimism, they suggest a tempered and open orientation toward A.I. tools, allowing everyone to find their footing in an ever-shifting environment.
Protecting Our Intellectual Property
Generative A.I. introduces complex challenges in the realm of intellectual property and copyright—both in terms of the use of copyrighted materials in training datasets and the security risks associated with inputting your own, your students’, or another person’s intellectual property into generative A.I. models.
When users enter unpublished research, confidential data, or original ideas into these systems—say, as part of a prompt around summarization, revision, or feedback—they may inadvertently expose sensitive information to third parties or compromise their own copyright. The terms of service for many A.I. tools may grant the service providers broad rights to use and analyze user inputs, potentially jeopardizing intellectual property rights.
This situation necessitates careful consideration and clear institutional guidelines to protect the integrity of scholarly work and maintain the privacy of sensitive academic information in an increasingly generative A.I.-integrated educational landscape.
Ensuring Equitable Access
Cost poses a barrier to accessing generative A.I. tools for many students. While many tools are currently available for free, some, like ChatGPT, offer paid tiers with significant improvements in functionality and performance for subscribers. Students who can afford upgraded models may be disproportionately advantaged in assignments that incorporate the use of generative A.I.
When designing activities or assessments that incorporate student use of gen A.I., encourage the use of free versions. For example, Microsoft's Copilot (formerly Bing Chat) can be used in "protected" mode (private, more secure) and draws on GPT-4, the same model that powers the paid versions of ChatGPT. Designing assessments around these free versions will ease access for all students, even as inequities persist in internet availability, cost, and speed.
That said, if students are learning online from other countries, some gen A.I. tools may be restricted due to government regulation or censorship. Attending to this possibility may mean allowing some students to opt out of assignments that use generative A.I., or providing alternatives for their engagement.