Sharing in the Age of AI

Le Voyage dans la Lune, by Georges Méliès. Public Domain.

AI is here, and it has already had a profound impact on how we access, share, and interpret knowledge from the commons. We continue to believe that current advances in AI pose a serious threat to the future of the commons.

Guided by our mission, we take a community-informed lens when evaluating the risks and opportunities of AI for protecting and strengthening the commons. At this time, we do not believe that a blanket anti-AI or pro-AI organizational stance serves the commons. Ultimately, we want human knowledge to be used to develop AI in the public interest; that is how AI becomes a tool that improves human flourishing.

However, we do not believe in ceding ground to the current market-driven AI approaches that are causing real harm to people and the planet. We believe that those who contribute to the commons should set the norms and drive the adoption of safeguards so that AI uses human knowledge in service of the public good. Any involvement CC has with AI is undertaken with the goal of a commons that is safe, just, and sustainable.

CC’s Stance on AI

  1. Ideas and facts belong to everyone

Ideas and facts are not copyrightable. Machine-assisted analysis is a legitimate and necessary extension of human inquiry.

  2. Copyright is not equipped for the age of large-scale AI

Current copyright frameworks fail to address the ethical, social, and economic challenges posed by large AI models. New governance structures must center community rights, consent, and collective benefit rather than corporate control.

  3. Community-led AI governance is a prerequisite

AI governance should be community-led. The power to set rules must rest with the people and communities who create, curate, and maintain the data that fuels AI.

  4. Proprietary AI can cause harm

We acknowledge that large-scale, proprietary AI systems can contribute to cultural and economic imbalances: they risk diminishing the role of human creativity, affecting livelihoods, extracting human content, and flooding digital spaces with lower-quality, non-representative content.

  5. Redefining openness: open as equitable

A new, intersectional definition of “open” is needed, one that recognizes the power imbalances between corporate entities and the commons. True openness is not free-for-all extraction, but equitable participation in shared knowledge.

  6. Prioritizing small-scale, value-aligned AI

AI development should prioritize smaller, targeted, and values-driven models that address real-world problems, such as advancing public health, education, or climate research, without the extractive footprint of “scale at all costs.”

Current AI Policy Positions

Investment in AI must create societal benefit

We oppose uses of AI tools that harm people. Public and philanthropic resources should prioritize AI that advances collective well-being, not large, extractive models or military and surveillance technologies. We do not support the use of open data to train AI models developed by companies that fail to mitigate their environmental impact. We stand in international solidarity and recognize how big data systems exploit precarious labor, particularly in the Global South. Fair labor and data justice are foundational to ethical AI. We oppose large technology companies that treat open data as a resource for profit, dominance, and control, rather than as a shared public good.

Restoring community agency

We advocate for communities to regain agency and autonomy over how their works are used in AI systems. Decision-making power must rest with communities who create and maintain data. Community-led governance ensures that AI serves public interests and shared values.

Increasing reciprocity and responsibility in the commons

Those who benefit most from the commons must contribute back in ways that sustain and strengthen it. We call on all actors in the AI value chain to uphold attribution, consent, transparency, and documentation as essential tools for accountability and trust.

AI training can be lawful under and consistent with copyright

Studying and analyzing materials to derive facts, ideas, and other uncopyrightable elements, or to make other non-infringing uses, should be permitted, even where that analysis implicates an act of reproduction as an intermediate step. In some countries, AI training is already generally lawful under copyright. In considering legal and technical approaches to the challenges and opportunities of sharing in the age of AI, we must think beyond the limits of copyright.

Against binary opt-in/opt-out

A binary opt-in or opt-out system of contributing content to AI models is not nuanced enough to represent the spectrum of choice a creator may wish to exercise. Instead, it is important to develop more flexible choices that take into consideration the values and social norms embedded in sharing content on the web.