Statement on Our Generative A.I. Use
Generative A.I. tools were used in the creation of this book, primarily for brainstorming ideas, providing definitions, and, as mentioned previously in our Accessibility Statement, producing images. In line with the Committee on Publication Ethics position statement on the use of generative A.I. in manuscript preparation, we’ve listed a few detailed examples of how generative A.I. was used.
Example 1.
Prompt to Claude 3.5 Sonnet: “We were hoping to build a very simple glossary of generative a.i. terms for university faculty members new to these technologies. We’d like to include LLM, hallucination, at least, but maybe we can brainstorm some others together? We’d like to cap it at 25 or so terms.”
Claude replied with 25 terms, of which we kept chatbot, fine-tuning, Generative A.I., GPT (Generative Pre-trained Transformer), Natural Language Processing (NLP), prompt, and training data. After further prompting, Claude generated one-sentence definitions of these terms, which we verified, revised, and made more robust.
Example 2.
Prompt to Canva’s Magic Media: “university student sitting at a computer prompting generative A.I.”
Magic Media produced a number of images, some of which are shown below.
Example 3.
Prompt to Claude 3.5 Sonnet: “What are the top risks and limitations to using generative AI?”
Claude replied with a list of ten items, nine of which we already had on a list. The tenth item, “the risk of over-relying on A.I.-generated content without critical evaluation,” inspired us to include a reflective exercise in our Limitations and Risks chapter. The list concluded with the sentence, “These risks and limitations highlight the need for careful consideration and responsible implementation of generative AI technologies.” We followed up with this prompt:
Prompt to Claude 3.5 Sonnet: “Can you say more about ‘These risks and limitations highlight the need for careful consideration and responsible implementation of generative AI technologies’?”
Claude replied with “a more comprehensive explanation” whereby it grouped the previous list of risks and limitations into broader categories, for example, “Transparency and accountability” and “Legal and regulatory considerations.” We skimmed the output but didn’t do anything with it, nor did we continue the interaction with the A.I.
Example 4.
Prompt to Claude 3.5 Sonnet: “We’d like to explore the idea of faculty using gen A.I. to design assessments. If I gave you a learning outcome, could you design three similar assessments—one that is vulnerable to having students use A.I. to complete it, one that might mitigate A.I. use, and one that requires students to use A.I. for part of the assignment?”
Claude used the learning outcome to “analyze the impact of different leadership styles on employee motivation and productivity in diverse organizational settings” and developed three assignments:
- a standard essay assignment with a general prompt (i.e., “Write a 1500-word essay analyzing the impact of different leadership styles on employee motivation and productivity…”)
- an assignment involving the interview of a classmate, personal reflection, and an in-class presentation (i.e., “Interview a classmate about their personal experience with different leadership styles…”)
- an assignment that requires students to use A.I. and critically evaluate its output (i.e., “Use a generative AI tool to produce a 500-word analysis…”)
These three assignments, although revised from the original Claude output, are used as examples in the Examples of Assessment Designs chapter of this book, both to show how instructors could use generative A.I. for lesson planning and assessment design and to show how assignments can be adjusted to account for individual instructors’ comfort levels and preferences around student gen A.I. use.
Glossary of Generative A.I. Terms
Chatbot: An A.I. application designed to interact with users through text-based conversations. Modern chatbots, powered by generative A.I., can engage in more natural, context-aware dialogues than their rule-based predecessors.
Fine-tuning: The process of further training a pre-trained A.I. model on a specific dataset or for a particular task. Fine-tuning allows generative A.I. models to be customized for specialized applications while still benefiting from their broad pre-training.
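The idea of continuing training on specialized data can be illustrated with a toy model. This is a minimal sketch only: the class name TinyBigramModel and both training corpora are invented for illustration, and a simple word-pair counter stands in for the neural network that real fine-tuning would adjust.

```python
from collections import defaultdict, Counter

class TinyBigramModel:
    """Toy stand-in for a language model: predicts the next word
    from bigram (word-pair) counts. Real fine-tuning adjusts
    neural-network weights, but the principle -- continue training
    a general model on specialized data -- is the same."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, text):
        words = text.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.counts[current][nxt] += 1

    def predict(self, word):
        following = self.counts[word.lower()]
        return following.most_common(1)[0][0] if following else None

# "Pre-training" on a broad, general-purpose corpus (invented here)
model = TinyBigramModel()
model.train("the cat sat on the mat and the cat ran home")
print(model.predict("the"))   # -> cat

# "Fine-tuning": further training on domain-specific text
model.train("the syllabus covers the syllabus goals and the syllabus rubric")
print(model.predict("the"))   # -> syllabus
```

After the extra pass over domain text, the model's predictions shift toward the specialized vocabulary while the general counts are retained, mirroring how fine-tuned models keep their broad pre-training.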
Generative A.I.: Artificial intelligence systems capable of creating new content, such as text, images, or audio, based on patterns learned from existing data. The term encompasses various technologies that can produce original outputs rather than merely analyzing or categorizing existing information.
GPT (Generative Pre-trained Transformer): A specific type of language-model architecture that has been pre-trained on a large corpus of text. GPT models, such as those developed by OpenAI, have demonstrated impressive capabilities across a variety of language tasks and form the basis for many popular generative A.I. applications.
Natural Language Processing (NLP): A branch of A.I. focused on enabling computers to understand, interpret, and generate human language. NLP techniques are fundamental to many generative A.I. applications, especially those dealing with text and speech.
Prompt: An input or instruction given to a generative A.I. system to guide its output. Prompts can range from simple questions to complex descriptions or requirements, and the quality of the prompt often significantly influences the quality and relevance of the A.I.’s response.
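Because a prompt is ultimately just structured text, it can be assembled programmatically. The helper below is a hypothetical sketch: the function name build_prompt and its parameters are invented for illustration and are not part of any real A.I. library.

```python
def build_prompt(task, audience=None, constraints=None):
    """Assemble a prompt string from a task, an optional audience,
    and optional constraints. Illustrative only -- invented for
    this sketch, not a real A.I. library function."""
    parts = [task]
    if audience:
        parts.append(f"Audience: {audience}.")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints) + ".")
    return " ".join(parts)

# Roughly mirrors the glossary request shown in Example 1 above
prompt = build_prompt(
    "Build a simple glossary of generative A.I. terms.",
    audience="university faculty new to these technologies",
    constraints=["cap it at 25 terms", "keep definitions to one sentence"],
)
print(prompt)
```

Spelling out the task, audience, and constraints this way reflects why more specific prompts, like the one in Example 1, tend to produce more relevant responses than a bare question would.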
Training data: The vast collection of information used to teach an A.I. model. For generative A.I., this often includes large datasets of text, images, or other media. The quality, diversity, and volume of training data significantly affect the model’s performance and potential biases.