CC signals builds on Creative Commons’ mission to keep the internet open and equitable. In an era where AI systems depend on vast amounts of human-created content, CC signals help ensure that creators’ voices and values shape how their work is used.
CC signals is a framework that helps creators and content custodians express how they want their works to be used in AI development. The goal of CC signals is to uphold reciprocity, recognition, and sustainability in the way human creativity fuels machine learning.
Safeguarding the Commons
AI systems depend on vast amounts of human-created content, often collected without the awareness or involvement of those who made it. Most AI developers are not giving back to the shared ecosystem from which they benefit. This dynamic has concentrated power in a few hands and undermined trust in the social contract of the commons.
In response, CC signals offers a flexible, values-driven framework for communicating expectations around AI use of content or data.
Our goal is to restore autonomy to creators while preserving open access to knowledge. True openness means participation with fairness and respect, not unchecked extraction.
Why CC Signals
When large tech companies treat data as a private resource for profit and control, they weaken the commons and endanger the human right to access knowledge.
Neither copyright nor the CC licenses alone can fix this. Protecting the commons in the AI era requires new legal, technical, and social approaches that:
- Protect the public interest as technology reshapes culture and knowledge
- Ensure sharing thrives beyond copyright frameworks
- Challenge the monopolization of data by concentrations of power
- Empower a global movement to sustain the commons as a cornerstone of democracy
Just as our licenses offer choices between all rights reserved and no rights reserved, our hope is that CC signals will offer creators a spectrum of preferences for when their work is used in AI, beyond a binary opt-in vs. opt-out.
About CC Signals
CC signals is an evolving, values-driven framework, currently being tested through a series of pilot implementations. The framework could be applied in any situation where traditional data or content-as-data is processed by machines. Each implementation explores legal, technical, and normative paths toward compliance and responsible AI use, without expanding copyright.
The Suite of CC Signals
This project draws inspiration from fundamental concepts often referenced in the AI debate (consent, compensation, and credit), but with a particular angle: our approach is driven by the goal of increasing and sustaining public access to knowledge.
Each signal states the conditions under which content may be used for machine reuse: the criteria AI developers must meet in order to use the content for AI development. All of the criteria are designed to promote reciprocity in ways that are both meaningful and practical given the scale of machine reuse. Our initial proposal includes the following signal elements:
- Credit: You must give appropriate credit based on the method, means, and context of your use.
- Direct Contribution: You must provide monetary or in-kind support to the Declaring Party for their development and maintenance of the assets, based on a good faith valuation taking into account your use of the assets and your financial means.
- Ecosystem Contribution: You must provide monetary or in-kind support back to the ecosystem from which you are benefiting, based on a good faith valuation taking into account your use of the assets and your financial means.
- Open: The AI system used must be open. For example, the AI system must satisfy the Model Openness Framework (MOF) Class II, MOF Class I, or the Open Source AI Definition (OSAID).
🗒️Note: Credit is included in each signal because we believe it is a fundamental form of reciprocity, one that benefits the broader knowledge cycle. In this proposal, the other signals are mutually exclusive. The list of signals is intentionally limited so that data stewards and their data-holding communities can align in calling for their adoption by AI developers. This will ultimately build networks for collective action, requiring reciprocity within the AI ecosystem.
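To make the idea of machine-checkable conditions concrete, here is a minimal sketch of how a declared signal might be evaluated in code. This is purely illustrative: CC signals has not yet defined a machine-readable format, and every field name and identifier below is an assumption invented for this example, not part of any specification.

```python
# Hypothetical sketch only: CC signals does not define this format.
# The dictionary structure, the "credit+ecosystem" identifier, and the
# condition names are all illustrative assumptions.

HYPOTHETICAL_SIGNAL = {
    "signal": "credit+ecosystem",  # invented identifier for this sketch
    "conditions": {"credit", "ecosystem_contribution"},
}

def use_is_permitted(declared: dict, commitments: set) -> bool:
    """Return True if an AI developer's stated commitments cover every
    condition attached to the declared signal."""
    return declared["conditions"] <= commitments  # subset check

# A developer who credits sources and contributes to the ecosystem
# satisfies this signal; one who only credits does not.
print(use_is_permitted(HYPOTHETICAL_SIGNAL, {"credit", "ecosystem_contribution"}))
print(use_is_permitted(HYPOTHETICAL_SIGNAL, {"credit"}))
```

Whatever concrete syntax the pilots settle on, the underlying shape is the same: a declaration attaches a small set of conditions to content, and a reuser's practices either cover that set or they do not.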