PUBLISHING PRACTICES

13 Coda: Artificial Intelligence

Kathleen DeLaurenti


“I’ve also gone from feeling like I was the only person using some of these tools to being in a sea of people using them. Right now, when we think of AI, the general public is using natural language processing, and it’s generative adversarial networks and their offspring. Artificial Intelligence is theoretical and, in my opinion, does not actually exist. What we’re dealing with is really big statistics—really, really big—but beyond that, it’s not artificial—it’s drawing from human activity, and it’s not intelligent.

I’m not very interested in those things anymore. Because they’ve taken out all the unpredictability, and without unpredictability, it’s no fun as an artist. And so now, that leaves me to go back to the drawing board with building artificial intelligence tools that say: What about the types of artificial intelligence we’ve left behind and the pursuit of mimicry? What about the actual materials themselves and their intelligences in the communities they’re from, such as Stones, which is part of my research? I think there are still a lot of questions to be answered. So I am currently very bored by AI, but hopefully it’ll make some new problems for me to be concerned with.”

Suzanne Kite


In 2022, OpenAI made its ChatGPT tool widely available to the public for free for the first time. Since then, news outlets and college campuses have been buzzing with conversations about how artificial intelligence tools built on large language models (LLMs), such as ChatGPT and Google’s Gemini, will fundamentally change the ways we create and access knowledge.

AI tools are built on computer processing systems called neural networks. Loosely modeled on the node-like structure of the human brain, neural networks analyze patterns in data and produce results based on your prompts and inputs. In Chapter 4: Preparing to Search, we note that computers don’t speak English, Mandarin, or Spanish. But computer scientists have been working for some time to build better tools that translate the language humans use into the ones and zeros that power computer programs. ChatGPT and other AI tools are among the first publicly available attempts to make it easier for computers to interpret the way you speak and respond in a way a person understands.
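The node-like processing described above can be sketched as a single artificial “neuron,” the basic building block that neural networks chain together by the millions. The inputs, weights, and bias below are invented purely for illustration; real language models learn their weights from training data rather than having them chosen by hand.

```python
import math

def neuron(inputs, weights, bias):
    # Each input signal is multiplied by a weight and summed,
    # like signals arriving at a node in the network.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # An "activation" squashes the result into the range 0..1,
    # turning the raw sum into a bounded output signal.
    return 1 / (1 + math.exp(-total))

# Hypothetical example: three input signals with hand-picked weights.
output = neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=0.1)
print(output)  # a value between 0 and 1 (roughly 0.713)
```

Stacking many layers of such units, and adjusting their weights automatically during training, is what lets these systems find statistical patterns in enormous data sets, including text and music.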

Responses to using these generative tools for research and music have been mixed. Some artists find benefits in expanding their practice, while others are concerned about computers getting more commissions than artists.[1] Librarians, students, and faculty have also been concerned about hallucinations in LLM tools, where a tool produces plausible-sounding but false output to fill gaps in its training data or reproduces bias present in that data.[2] For example, students who have tried to take shortcuts on assignments have asked ChatGPT to provide a bibliography on a research topic, and sometimes ChatGPT supplies lists of articles that don’t exist. As recently as February 2024, the authors and their students were surprised to find that citations provided by ChatGPT pointed to nonexistent articles. Perhaps more troubling, these citations seem like they could be real: they pair known scholars with journals in which those scholars publish and with plausible article titles. The articles, and even books, simply don’t exist.

Google has announced plans to implement additional LLM technology in its primary search results. While the goal is to provide more context and more specific results, there are reasons to be wary of these tools. Some Google users have already noticed that the snippet results at the top of a results page, which try to provide direct answers, can return different answers to the same question asked by different people. This technology is new, and it is important to be a little skeptical of changes you notice in tools you have used for a long time.

In music, AI is being used for everything from having your favorite artist perform someone else’s songs[3] to creating and mastering original music. These AI-based tools have learned faster than a human ever could by listening to—or, more accurately, training on—massive data sets of music. An astounding 43 years after John Lennon’s death, a new song by The Beatles, “Now and Then,” was made possible by AI-trained noise-reduction and feature-extraction models.[4]

These uses seem like promising applications of AI tools. Some artists, including Troye Sivan, John Legend, and Charlie Puth, have even collaborated with Google on Dream Track, which lets other creators use their voices to make new music and post it on Google platforms.[5] About his decision to participate, Legend notes, “As an artist, I am happy to have a seat at the table, and I look forward to seeing what the creators dream up during this period.”[6]

It can be fun to play with this technology and imagine what might be possible. Yet many advocate for balance and caution as powerful new technologies are introduced. In their book Key Changes, Howie Singer and Bill Rosenblatt outline the ways that technology has influenced music over the last 125 years. The transitions from radio to CDs to streaming have affected not only the way people consume music but also how music is made. The exponential growth of the streaming industry and the formula used to calculate royalties have led musicians to make songs that are, on average, 30 seconds shorter.[7]

Considering both present and anticipated technological advances, we encourage musician-scholars to reflect on these questions:

  • How will these new technologies influence music?
  • What kinds of bias might we see introduced by these tools into the system?
  • Will every song start to sound the same?
  • How do we continue to find ways to pursue mission-driven creative research and sustainable artistic careers in a world where algorithms are deciding what is popular and who gets funded?

It is an important time to play, explore, investigate, and participate in conversations where you can voice concerns. Researchers including Timnit Gebru, Safiya Noble, and Joy Buolamwini have been strong advocates for thoughtful development that ensures new technologies do not perpetuate institutional bias.[8] For instance, Adobe sold AI-generated images of the Israel-Hamas war.[9] When asked, “What is an African country starting with the letter ‘K’?”, Google recently responded, “None.” The over 50 million people living in Kenya would disagree.[10] While mistakes are expected as a new technology becomes widespread, Google’s decision not to adjust its algorithm to address search results that prioritize factually incorrect responses is concerning.

It is easy to feel overwhelmed by—and then disengage from—the complexities and potential risks of generative AI. Despite these challenges, understanding the fundamentals of information systems empowers us to make ethical and informed decisions. This knowledge is crucial for efficient research, copyright, and publishing in the creative field. We look forward to future updates to this open textbook that reflect the evolving impact of these advances in our systems and provide new approaches to creating artistic work.

Notes


  1. “Spring 2023 AI Listening Sessions | U.S. Copyright Office,” accessed November 27, 2023, https://www.copyright.gov/ai/listening-sessions.html.
  2. “What Are AI Hallucinations? | IBM,” accessed November 27, 2023, https://www.ibm.com/topics/ai-hallucinations.
  3. “AI Kim Seokjin / Jin (BTS) Voice Generator | Voicify AI Cover Generator,” Voicify AI, accessed November 29, 2023, https://www.voicify.ai/custom-kim-seokjin--jin-bts.
  4. “Listen to the Last New Beatles’ Song with John, Paul, George, Ringo and AI Tech,” AP News, November 2, 2023, https://apnews.com/article/beatles-last-song-now-then-release-fbce70071b4624f0d90bd18347f20fc6.
  5. “Transforming the Future of Music Creation,” Google DeepMind, November 16, 2023, https://deepmind.google/discover/blog/transforming-the-future-of-music-creation/.
  6. Chloe Veltman, “Google’s Latest AI Music Tool Creates Tracks Using Famous Singers’ Voice Clones,” NPR, November 17, 2023, sec. Music, https://www.npr.org/2023/11/17/1213551049/googles-latest-ai-music-tool-creates-tracks-using-famous-singers-voice-clones.
  7. Howie Singer and Bill Rosenblatt, Key Changes: The 10 Times Technology Transformed the Music Industry (New York, NY: Oxford University Press, 2023).
  8. Lorena O’Neil, “These Women Tried to Warn Us About AI,” Rolling Stone (blog), August 12, 2023, https://www.rollingstone.com/culture/culture-features/women-warnings-ai-danger-risk-before-chatgpt-1234804367/.
  9. Kylie Kirschner, “Adobe Is Selling AI-Generated Images of the Israel-Hamas War, and Some Websites Are Using Them without Marking That They’re Fake,” Business Insider, accessed November 29, 2023, https://www.businessinsider.com/adobe-stock-ai-generated-images-israel-hamas-war-2023-11.
  10. “Google’s Relationship With Facts Is Getting Wobblier - The Atlantic,” archive.is, November 22, 2023, https://archive.is/jI933.

About the author

Kathleen DeLaurenti is the Director of the Arthur Friedheim Library at the Peabody Institute of The Johns Hopkins University. She holds an MLIS from the University of Washington and a BFA in vocal performance from Carnegie Mellon University.


License

Coda: Artificial Intelligence Copyright © 2024 by Kathleen DeLaurenti is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.