Sharing in the Age of AI

Still from Le Voyage dans la Lune (color), by Georges Méliès, Public Domain.

AI is here and it has already had a profound impact on how we access, share, and interpret knowledge. Like any technology, AI poses both risks and opportunities.

We believe that current approaches to AI pose a threat to the future of the commons. Profit-driven AI development is causing real harm to people and the planet. These systems are being developed at a speed and scale that leaves contributors without a meaningful voice.

As a result, human knowledge is being locked up in ways that damage scientific research, knowledge preservation efforts, and other activities that benefit the public interest. 

We see another way. Creative Commons believes in an ecosystem where the humans who make and share knowledge play a meaningful role in shaping how that knowledge is reused. Our goal is a commons that is reciprocal and sustainable. 

CC’s Stance on AI 

  1. The dominant, profit-driven approaches to AI are causing harm

The dominant large-scale, proprietary AI systems contribute to cultural and economic imbalances: they risk diminishing the role of human creativity, undermining livelihoods, extracting human-created content, and flooding digital spaces with low-quality, non-representative, biased content. Outside of CC’s immediate scope of work, we also share concerns with our fellow civil society groups around military and surveillance uses of AI.

  2. Copyright is not equipped for the age of large-scale AI

Current copyright frameworks alone fail to address the ethical, social, and economic challenges posed by large AI models. New governance structures must center community rights, consent, and collective benefit rather than corporate control.

  3. Community-led AI governance is a prerequisite

AI governance should be community-led. The power to set rules must rest with the people and communities who create, curate, and maintain the data that fuels AI. AI relies on the commons, not the other way around. The sum of human knowledge is not merely an abstract asset to fuel AI.

  4. We need to redefine openness

A new, intersectional definition of “open” is needed, one that recognizes the power imbalances between corporate entities and the commons. True openness is not free-for-all extraction but rather equitable participation in shared knowledge. 

  5. Prioritizing small-scale, values-aligned AI

AI development should prioritize smaller, targeted, and values-driven models that address real-world problems, such as advancing public health, education, or climate research, without the extractive footprint of “scale at all costs.”

Current AI Policy Positions

Systems for opting out of contributing content to AI models must balance individual agency with public interest needs

A binary opt-in or opt-out system for contributing content to AI models risks blocking scientific research and other public interest reuses that are critical to the public good. Systems for expressing preferences around AI use must be nuanced enough to account for the values and social norms embedded in the commons, and they must allow for public interest exceptions within their scope.

Investment in AI must create societal benefit

We oppose the use of AI tools that harm people. Public and philanthropic resources should prioritize AI that advances collective well-being, not large, extractive models or military/surveillance technologies. We do not support the use of open data to train AI models developed by companies that fail to mitigate their environmental impact. We stand in international solidarity and recognize how big data systems exploit precarious labor, particularly in the Global South. Fair labor and data justice are foundational to ethical AI. We oppose large technology companies that treat open data as a resource for profit, dominance, and control, rather than as a shared public good.

Restoring community agency

We advocate for communities to regain agency and autonomy over how their works are used in AI systems. Decision-making power must rest with communities who create and maintain the human knowledge that is used by AI as data. Community-led governance ensures that AI serves public interests and shared values.

Increasing reciprocity and responsibility in the commons

Those who benefit most from the commons must contribute back in ways that sustain and strengthen it. We call on all actors in the AI value chain to uphold attribution, consent, transparency, and documentation as essential tools for accountability and trust.
