Update on CC Signals: What Changed and Why

Licenses & Tools, Policy

It’s been a while since we last shared an update on CC signals and our work around AI and the commons. Over the past several months, we’ve been deep in research, in conversation, and in active collaboration with communities, policymakers, and practitioners. At the same time, we kicked off our 25th anniversary celebrations, which gives us a rare opportunity to reflect on where we’ve been and, more importantly, where we need to go next.

The biggest reason for the gap between updates is timing. We are deliberately resisting the pressure to move quickly simply because the broader technology landscape rewards speed. Our work touches the infrastructure of the commons. That requires care, consultation, and a willingness to sit with complexity.

So we slowed down. We let the first wave of AI development crest without rushing to respond. We took the time to understand where power is consolidating, where harms are emerging, and where meaningful intervention is actually possible. We are now at a point where we believe we can act in ways that will have real impact.

This post is meant to bring you into that journey. Our destination has not changed, but the path we are taking to get there has. Come along!

From Signals to Agency

When we first introduced CC signals, the idea was relatively straightforward. We proposed a set of preferences that creators could use to communicate with AI developers, relying on shared norms to guide behavior. It reflected how CC has historically operated. For 25 years, we have worked within copyright, building tools that expand access while maintaining a balance between creators and reusers. That history shaped our instincts. We assumed that a carefully calibrated, norms-based approach would move the ecosystem in a better direction.

But as we began consulting with our community, it became clear that this approach was not enough. The feedback was direct and consistent: preference signals without enforcement do not meaningfully shift power. Signals alone cannot create agency in a system that many people did not choose to participate in.

That feedback forced us to confront some of our own assumptions. For a long time, copyright has been our primary tool, and with good reason. CC licenses have enabled the sharing of tens of billions of works and have helped build a more open internet. But relying on copyright as the default lens for every problem has its limits, especially in an AI-mediated environment. 

Beyond Copyright

Over the past four months, we have been reexamining what it means to support the commons in this new context. 

CC licenses remain essential. They will continue to play a critical role in enabling human access to knowledge. However, when it comes to AI, copyright operates in a landscape that is uneven and often unclear. In many cases, CC license conditions do not apply to AI training. In others, they might. In some jurisdictions, broad exceptions mean that using CC-licensed works for AI development is lawful regardless of license conditions. At the same time, the presence of a CC license is often interpreted as permission to use the work in this way. That interpretation follows from how the licenses were designed; they grant broad permissions with limited conditions. 

The CC licenses were not designed with the scale and growing harms caused by the dominant, profit-driven approaches to AI in mind. And CC licenses do not capture the full range of intentions creators have in this AI-mediated world. Some creators are comfortable with their work being used in AI systems; others are not, and many fall somewhere in between.

Why New Tools Are Necessary

We also explored whether updating the CC licenses themselves, in the current paradigm, could provide a solution. Versioning has helped us adapt to new contexts before. But in this case, there are two novel factors at play.

The first is structural. The CC licenses were deliberately designed not to enable control beyond copyright: they are intentionally scoped to copyright and related rights, and they explicitly do not allow additional restrictions that would limit uses outside that scope.

Our current trademark policy reinforces this. If restrictions are added that limit the permissions granted by a CC license, the work can no longer be presented as CC-licensed. This reflects the critical role that standardization has played in the success of open licensing. When you access a CC-licensed work, you should be able to rely on the terms and conditions written in the license to determine what your reuse obligations are. Expanding the CC license suite beyond its original focus on copyright would represent a significant change to how the licenses operate, and it could have unintended consequences on the existing license ecosystem.

This brings us to the second factor: CC licensors hold a wide spectrum of needs and values about how and whether their works are used in AI. New tooling may be better suited to address what could prove irreconcilable within the open movement: some see any tool that attempts to control AI uses falling outside of copyright as a betrayal; others see it as an imperative.

With the future of the commons in mind, we believe the best approach at this time is to develop new tools, where we can test and explore more freely. The CC licenses are one part of the larger strategy needed to meet this moment, which is unfolding in an undefined legal landscape, just as it was 25 years ago when the CC licenses were first developed.

The Stakes for the Commons

Our north star remains the same: sustain access to human knowledge. Today, that means more than enabling sharing. It means questioning long-held assumptions and ensuring communities are in control of their own data. It means holding the tension that, in some cases, conditional access is better than no access. The commons needs guardrails in order to thrive.

AI systems are being built on an unprecedented scale of knowledge extraction, drawing heavily from the commons. The governance systems that made open sharing possible have not kept pace with this shift. There are limited mechanisms for attribution in AI systems, few pathways for consent, and little transparency.

When the commons weakens, power over information becomes more concentrated. Knowledge moves into private datasets and proprietary systems controlled by a small number of actors. That limits who can access, verify, and build on information. Democracies depend on broad access to reliable knowledge. Public interest AI depends on diverse, high-quality data. 

A healthy commons is governed and sustained through systems that balance access with agency, openness with accountability. AI relies on the commons, not the other way around. If we want a future where knowledge is shared and where AI serves the public good, we need to ensure that the commons can thrive. This is the context in which we evolve CC signals. 

Strengthening CC Signals

Our problem statement has not changed, and neither has our end goal. But what we are building to get there has.

What began as a relatively narrow, tool-focused approach has evolved into something broader and more structural. CC signals is no longer limited to signaling preferences; it is about addressing the underlying conditions that have made creator preferences so easy to ignore. This shift has led us toward work that is more ambitious, and necessarily more disruptive, in confronting the real harms to the commons emerging from dominant, profit-driven approaches to AI.

Check back with us next week, when we’ll share more about the specific interventions we are building from this foundation.

Posted 23 April 2026