Questions to Consider

Concerns and Questions

Copyright, and intellectual property law in general, is only one lens through which to think about AI. It's still important to grapple with legitimate concerns about this technology and to consider what responsible development and use should look like.
  • What impact will these tools have on artists' and creators' jobs and compensation?
  • How can we ensure that AI trained on the commons also contributes back to the commons, supporting all types of creators?
  • What about the use of these tools to generate harmful misinformation, to exploit people's privacy (e.g., their biometric data), or to perpetuate biases?
  • How can we maintain human oversight and accountability so that these tools work well for society?

These are just some of the tricky issues that will need to be worked out to ensure people can harness AI tools in ways that support creativity and the public interest. Along with other policy and legal approaches to governing AI, it's important to look to community-driven solutions that support responsible development and use. Already, Stability AI will let artists opt out of its training data set, as well as opt in to provide more information about their works.

While views on this precise approach vary, indexing of the web has functioned well using a similar sort of opt-out approach, set through global technical standards and norms rather than law. Creators of some generative AI tools are also using licenses that constrain how the tools can be deployed, which carries its own trade-offs.
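For a concrete sense of how this kind of opt-out already works for web indexing: crawlers consult a site's robots.txt file (the Robots Exclusion Protocol) before fetching pages. The minimal sketch below, using Python's standard urllib.robotparser, illustrates that check; the crawler name "ExampleAIBot" and the URLs are hypothetical, and equivalent opt-out signals for AI training data are still an emerging area rather than a settled standard.

    from urllib.robotparser import RobotFileParser

    # Fetch and parse the site's robots.txt (Robots Exclusion Protocol).
    parser = RobotFileParser()
    parser.set_url("https://example.org/robots.txt")
    parser.read()

    # A well-behaved crawler skips any path the site has disallowed for it.
    page = "https://example.org/artwork/123"
    if parser.can_fetch("ExampleAIBot", page):
        print("Crawling is permitted for this page.")
    else:
        print("The site has opted this page out for our crawler.")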


“Better Sharing for Generative AI.” Creative Commons, 6 Feb. 2023, https://creativecommons.org/2023/02/06/better-sharing-for-generative-ai/. CC BY 4.0.
