Since 2021, the European Union has been considering a new AI Act to regulate certain uses of AI. In particular, the Act would ban certain uses, such as broad-based real-time biometric identification for law enforcement in public places, and would require certain precautions before deployment of uses deemed ‘high-risk,’ such as the use of AI in access to education, employment, financial credit, or other essential services.
Creative Commons has proactively worked with policymakers and other key stakeholders, creating a constructive dialogue to inform both the content of the text and the context of the debate. We agree with the objectives of the Act: ensuring AI systems placed on the Union market are used in a way that respects fundamental rights and Union values; providing legal certainty to facilitate investment and innovation in AI; and facilitating the development of a single market for lawful, transparent, and trustworthy AI applications to prevent market fragmentation.
While the proposal covers a broad range of topics, we have focused on two areas pertinent to our strategy of better sharing.
First, we have advocated for the benefits of supporting open data, open source, and interoperability as means to support a healthy marketplace for and robust competition in the development and use of AI.
Harmful uses of AI are best addressed through a contextual approach, based on clear principles that can adapt to new developments in how AI is designed and implemented for different use cases based on their risks to society. Such a tailored approach can and should avoid overbroad restrictions on general purpose artificial intelligence (GPAI) and the sharing and use of open source tools.
The European Commission’s original draft proposal did not directly address GPAI, and we agree that this is an appropriate approach. Requirements on GPAI creators are unnecessary because the follow-on developers of high-risk systems will already be covered by this Act. To the extent GPAI creators wish to serve that market, they already have incentives to cooperate with high-risk users to ensure broad compliance.
We also recognize that the EU legislators are currently considering ways to address responsibilities related to GPAI, including open source tools. If that moves forward, we recommend adding language to ensure that GPAI regulations are tailored and proportionate. These regulations should not constrain open source tools, and should focus on ensuring cooperation between GPAI creators and users with whom they have an ongoing relationship.
We believe this is important to get right in part because it’s vital that the capacity to develop and use AI not be concentrated in the hands of a small number of large commercial operators. Lowering barriers to the development and use of AI — such as by supporting the availability of open data and open source tools, including as part of GPAI — can spur innovation in services and lead to major social benefits. Empowering people to share their data among services and enabling them to move between services (i.e., through data portability and interoperability) can also play a role in facilitating innovation and inhibiting user lock-in. While it is crucial to ensure commercial viability and incentives for investment, it is also crucial to ensure that this does not come at the expense of robust competition, consumer protection, and the public interest more broadly.
In addition to these topics, we’ve also taken this opportunity to ensure the intersection of copyright and AI is well understood. While copyright’s relationship to AI is not central to this proposal, it is important that policymakers understand that appropriate limits on copyright are also essential to serving the public interest. As we’ve explored in the past, AI-generated content should not be copyrightable, and training AI on copyrighted works should not be limited.