As the year comes to a close, we’re spotlighting Creative Commons’ public policy work, recapping what we’ve done and looking ahead to the new year. In this edition, we turn to our work on artificial intelligence (AI).
Recently, you might have seen the news headline “Art Made With Artificial Intelligence Wins at State Fair,” or even played with one of the many tools that enable people to generate content with AI. These are just some of the ways AI is becoming woven into our daily lives. AI is a broad, evolving concept that covers a number of technologies (such as machine learning and neural networks) that enable computers to analyze data and make assessments about it at a scale that was never possible before.
At Creative Commons, we’ve focused on how the development of AI intersects with the commons. Generating content with AI is particularly relevant to CC’s mission, our better sharing strategy, and our participation in the Movement for a Better Internet. AI systems are often trained by analyzing copyrighted works — for instance, the coronavirus was detected in its early stages via large-scale analysis of news articles, and similar sorts of analysis are key to vaccine research. What’s more, AI is used in the systems people rely on to find, access and share information — consider, for example, the recommendation systems that surface content across social media.

Across all the complexity of new AI systems and their uses, we’re guided by two questions:

- How does the proliferation of AI connect to better sharing: sharing that is inclusive, just and equitable — where everyone has wide opportunity to access content, to contribute their own creativity, and to receive recognition and rewards for their contributions?
- How does the proliferation of AI connect to a better internet: a public interest vision for an internet that benefits us all?
CC’s Approach to AI & Public Policy
Creative Commons’ advocacy on AI centers around our core beliefs about building the commons and around addressing copyright barriers to better sharing.
Originality and human authorship must remain essential to the granting of copyright or other related exclusive rights over creative works, and AI-generated content by itself does not meet those standards. Copyright exists to incentivize human creativity and, where the hallmarks of creative choices are absent, protection should not arise. Granting exclusive rights over works produced by AI would impede copyright’s core purposes and impoverish the commons without any recognizable public benefit in the form of bolstered creativity. Thus, AI-generated output should be presumed to be in the public domain, insofar as the content was generated autonomously, without human creative choices.
Copyright shouldn’t provide a broad monopoly over complementary technologies, and the use of lawfully accessed creative works to train AI should not be constrained by copyright. The use of creative works to train AI, such as through text and data mining (TDM), does not compete with the original markets for those works and, in fact, may enhance them by increasing demand. Meanwhile, restricting such non-consumptive and non-expressive uses is not necessary to incentivize creation of the underlying works and thus would not create public benefit.
To take a holistic approach to supporting better sharing, we have also expanded the focus of our advocacy to consider other key issues.
The capacity to develop and use AI to support better sharing should be widely distributed, rather than concentrated among a narrow few: Better sharing depends on a dynamic environment, where people are not locked into a narrow set of services for sharing knowledge and creativity. While it is crucial to ensure commercial viability and incentives for investment in helpful services, it is also crucial to ensure that developments in AI do not come at the expense of robust competition, consumer protection, and the public interest more broadly.
The risks of AI should be addressed through tailored means, without wide-sweeping, generic restrictions on general purpose technology and sharing: AI does not always generate unqualified good or harm. Harmful uses are best addressed through a contextual approach, based on clear principles that can adapt to new developments in how AI is designed and implemented for different use cases based on their risks to society. Such a tailored approach can and should avoid overly broad restrictions on general purpose technology, in particular the sharing and use of open source tools.
Online platforms’ management of content sharing should be coupled with transparency and accountability measures: Large social media and user-generated content platforms use automated tools, increasingly driven by AI, in order to manage the vast scale of information produced and shared on their services. CC has long opposed mandatory, automated filtering for copyright infringement, because of the ways filtering can negatively impact free speech, as well as the sharing of culture and knowledge. When it comes to voluntary measures taken to manage content on sharing platforms, we believe it is important, at a minimum, to ensure people are well-informed about platforms’ practices, and to hold platforms accountable to their promises.
CC’s Engagement on AI & Public Policy in 2022
- Engaging on AI regulation in the EU and UK: Since 2021, the European Union has been considering a new AI Act, which would regulate certain uses of AI. In the EU, Creative Commons has proactively worked with policymakers and other key stakeholders, creating a constructive dialogue to inform both the content of the text and the context of the debate. We have focused on ensuring that the intersection of copyright and AI is well understood. Moreover, we have advocated for the benefits of supporting open data, open source, and interoperability as means to foster a healthy marketplace for, and robust competition in, the development and use of AI. Similarly, we responded to the UK’s consultation on its white paper, “Establishing a pro-innovation approach to regulating AI.”
- Supporting transparency around content moderation and opposing filtering: The EU passed landmark legislation, the Digital Services Act, regarding how online platforms moderate user-generated content. We raised concerns about proposals to mandate filters, which fortunately did not make it into the final text, and we supported the enhanced accountability and transparency measures that were included. Meanwhile, in the USA, Creative Commons joined a coalition of groups concerned about legislation that would have mandated copyright filters.
- Policy paper on copyright, including AI: In our new policy paper titled “Towards Better Sharing of Cultural Heritage — An Agenda for Copyright Reform,” we advanced proposals to support cultural heritage institutions in making use of their collections for AI-training purposes, and in making AI-generated content available. In addition, CC’s Copyright Platform Working Group is working to update previous work on the intersection of copyright and AI.
- Webinars: AI Inputs, Outputs and the Public Commons: We also sought to bring more visibility to issues around AI and copyright by hosting a series of webinars with experts: one focused on how open works and better sharing intersect with AI inputs — works used in training and supplying AI — and another focused on how they intersect with AI outputs — works generated by AI that are, could be, or should be part of the open commons.
Next year, we plan to build on this work in a number of ways. First, as AI tools for generating content have become more widely available, questions about how copyright should apply are multiplying. We will continue to engage in this debate to support better sharing. Second, as the EU’s institutions negotiate the final text of the AI Act, we will continue to actively engage and comment on the draft texts. We’ll also engage in similar debates in the UK, at the World Intellectual Property Organization, and elsewhere as opportunities arise.