Last month, we published a preview of what we intended to bring to the AI Impact Summit in Delhi: a focus on data governance, shared infrastructure, and democratic approaches to AI that genuinely advance the public interest rather than replicate existing power imbalances. That piece outlined our core interventions and the principles that have guided our thinking as we grapple with how to ensure openness, agency, and equity in the age of AI.
Since then, the Summit—a major global gathering of policymakers, technologists, civil society leaders, and researchers—unfolded against the backdrop of widespread calls for cooperative frameworks and measurable outcomes. For an excellent summary of the highs and lows of the Summit, take a look at this article by CC Board Member Jeni Tennison.
From CC’s perspective, what became clear in Delhi is that AI governance is shifting. The conversation is moving beyond high-level principles and into harder, more structural questions about infrastructure, stewardship, and power.

Data as a Leverage Point
Concerns about data capture and extraction abounded at the Summit. But alongside those concerns, a persistent theme emerged: data scarcity.
Participants repeatedly pointed to the lack of high-quality, localized, representative datasets as a fundamental constraint on public interest AI. The call for “really good data” came from startups, researchers, governments, and civil society actors alike—many working to build contextually grounded systems. Without accessible datasets, cultural representation is limited, competition falters, open-source development slows, and meaningful innovation remains concentrated in the hands of those with the most resources.
The gaps are especially pronounced across Global South languages and cultural contexts. Researchers are working to supplement large models with local norms and knowledge to address bias and misrepresentation. This is particularly urgent in sectors such as health, agriculture, climate, and development, where high-quality open datasets could unlock substantial public benefit.
There is a real tension here. High-quality open data is required to power public interest AI. At the same time, without guardrails, open data can be exposed to extraction and misuse. Communities are often presented with a false choice: open their data and risk exploitation, or close their data and risk exclusion from shaping AI systems that affect them. Addressing this tension is essential if governance frameworks are to support both individual agency and shared stewardship. In essence, we need to:
- Fill existing gaps in shared governance infrastructure through collaborative frameworks and development of globally accessible tools that balance the tension between agency and access;
- Uphold an understanding of data governance as deeply participatory and democratic, and as a necessity for any AI system that becomes part of the public infrastructure, whether privately held or not;
- Rebalance the power inequities in the current landscape overall, with our focus being on the data layer.
We believe that the path forward is not enclosure. It is stewardship. Governance mechanisms, interoperability standards, and access frameworks will determine who participates in the AI ecosystem and who does not. If we want AI systems that reflect diverse knowledge and lived realities, we must build the infrastructure that makes responsible openness durable.
Openness as a Method for Collaboration
At the Summit, openness was not framed as a philosophical preference. It was framed as a structural necessity and a baseline condition for equity, competition, collaboration, and democratic accountability.
But the mental models we use to think about open versus closed must evolve. Openness cannot stop at model weights. It must extend across code, data, infrastructure, tooling, standards, and usability. And, crucially, openness and guardrails are not opposites. Responsible governance is not in tension with open systems; it is what makes them sustainable.
In this sense, openness is no longer the ceiling of ambition. It is the floor.
The Implementation Gap
Despite widespread agreement on concentration risks, data bottlenecks, and the speed of AI development, there was palpable exhaustion with principles that lack implementation pathways. Participants pointed to efforts like the Hiroshima AI Process and statements from past Summits as strong in theory but absent in practice. What’s missing are durable intermediaries capable of stewarding shared resources and translating shared values into operational systems.
This is where the conversation becomes especially consequential for Creative Commons.
For more than two decades, CC has built legal and social interoperability at global scale. We have designed data governance frameworks that allow knowledge sharing to function across jurisdictions and sectors. We have stewarded a commons model that balances openness with structure, enabling participation and mutual benefit through principles like attribution.
While debates about the limits of copyright were not central to most discussions in Delhi, there was significant interest in expanding high-quality open data, strengthening digital public infrastructure, and supporting community-led AI development—all areas deeply aligned with our expertise.
AI governance must move from principles to infrastructure. Shared, open digital infrastructure that works across borders is what Creative Commons is known for building. We believe that building the next generation of infrastructure for sharing—which would support the data layer of public interest AI—is not a departure from our mission. It is a timely extension of it and builds on the groundwork we have been laying for the past few years.
Such infrastructure could include identifying high-impact open dataset initiatives in sectors such as health, agriculture, climate, and education to be opened up and prepared for machine reuse. It would require developing safe and trusted data-sharing models, with nuanced approaches depending on what data are being shared. This isn’t just about legal tools absent the context in which they are used; it is about comprehensive data governance mechanisms that balance openness with accountability and ensure interoperability across jurisdictions.
Collaborative Construction
As we’ve talked about before, a central challenge in AI governance is avoiding false choices. Overly restrictive guardrails risk enclosing the commons, limiting access to knowledge, and stifling innovation and scientific discovery. Yet the absence of guardrails undermines trust, enables exploitation, and erodes the foundations of openness itself. Creative Commons operates in this critical middle space.
Our interventions at the Summit focused on advancing governance frameworks that protect human agency, cultural context, and trust in information while preserving openness, access, and reuse. An AI ecosystem that serves the public interest must be standardized where possible and contextual where required, especially across diverse linguistic, cultural, and regional settings.
If the Summit made one thing evident, it is that there is readiness for partnership. Policymakers, funders, technologists, and civil society leaders are looking for institutions capable of translating shared values into durable systems.
If We Do Not Intervene
It is worth being explicit about the alternative trajectory.
If data sharing is driven only by commercial markets and not the public interest, and if data infrastructure consolidates in the hands of a few actors, “sovereignty” risks becoming a commercial product rather than a public capacity. Cultural representation will become extractive rather than participatory. Open models may technically exist, but without access to high-quality datasets, they will struggle to compete. The language of openness could persist while the data infrastructure beneath it quietly closes. What is the value of open weights and open code when the very essence of our cultures and languages isn’t carefully and deliberately shared through robust open datasets?
The infrastructure phase of AI governance has begun. Creative Commons intends to help build what comes next—in partnership with those who share a commitment to an AI ecosystem that is open, inclusive, and grounded in the public interest.
A huge thank you to our partners, event organizers, and co-panelists who helped to shape a meaningful engagement for CC during the Summit. We are particularly grateful for the thoughtful welcome provided by CivicDataLab, who ensured balanced dialogue and representation between those attending from elsewhere and those actively engaged on the ground in India. If we chatted during the Summit, we look forward to ongoing discussions. If we didn’t have a chance to connect, our doors are always open—send us a note!
Posted 04 March 2026