Creative Commons has responded to the European Commission’s consultation on recommended standard licenses, datasets and charging for the re-use of public sector information (PSI). See our response here. The Commission asked for comments on these issues in light of the adoption of the new Directive on the re-use of public sector information. The Directive 1) brings libraries, museums, and archives under its scope, 2) provides a positive re-use right to public documents, 3) limits acceptable charging to only the marginal costs of reproduction, provision, and dissemination, and 4) reiterates the position that documents can be made available for re-use under open standards and in machine-readable formats. CC recognizes the high value of PSI not only for innovation and transparency, but also for its scientific, educational, and cultural benefit to society as a whole.
The Commission has not yet clarified what should be considered a “standard license” for re-use (Article 8). The danger of license proliferation, which can lead to incompatible pools of PSI, is still present. But it’s positive that the Commission is using this consultation to ask specific questions about the legal aspects of re-use.
Part 3 of the questionnaire deals with licensing issues. One question asks what should be the default option for communicating re-use rights. We believe that there should be no conditions attached to the re-use of public sector information. The best case scenario would be for public sector information to be in the public domain, exempt from copyright protection altogether by amending national copyright laws. If it’s not possible to pass laws granting positive re-use rights to PSI without copyright attached, public sector bodies should use the CC0 Public Domain Dedication (CC0) to place public data into the worldwide public domain to ensure unrestricted re-use.
Another question first states that the Commission prefers the least restrictive re-use regime possible, and asks respondents to choose which condition(s) would be aligned with this goal. Again, we think that every condition would be deemed restrictive, since ideally PSI would be removed from the purview of copyright protection through law or complete dedication of the PSI to the public domain using CC0. If the Commission were to permit public sector bodies to incorporate limited conditions through licensing, then they should be expected to use standard public licenses aligned with the Open Definition (with a preference for “attribution only” licenses). A simple obligation to acknowledge the source of the data could be accomplished by adopting a liberal open license, like CC BY. Such a license would also cover other issues, such as acknowledging that an adaptation has been made or incorporating a waiver of liability. Some of the conditions listed would be detrimental to interoperability of PSI. An obligation not to distort the original meaning or message of public sector data should be deemed unacceptable. Such an obligation destroys compatibility with standard public licenses that uniformly do not contain such a condition. The UK’s Open Government License has already removed this problematic provision when it upgraded from OGL 1.0 to OGL 2.0.
In addition to mentioning CC licensing as a common solution, the questionnaire notes, “several Member States have developed national licenses for re-use of public sector data. In parallel, public sector bodies at all levels sometimes resort to homegrown licensing conditions.” In order to achieve the goals of the Directive and “to promote interoperable conditions for crossborder re-use,” the Commission should consider options that minimize incompatibilities between pools of PSI, which in turn maximize re-use. As far as we are concerned, that means that governments should be actively discouraged from developing their own licenses. Instead, they should be encouraged to adopt standard public licenses aligned with the Open Definition. Better still would be to remove copyright protection for PSI by amending copyright law, or to waive copyright and related rights using CC0.
Last week, indie videogame designer Nick Liow launched the Open Game Art Bundle. It’s a simple idea: independent videogame designers contribute game assets – animations, soundtracks, character designs – and customers can pay any price they want to access them. Nick describes it as a sort of cross between Kickstarter and Humble Bundle, and like Humble Indie Bundle, the income is split between the developers themselves and charities (including Creative Commons). But there’s one big twist: if the bundle reaches its goal of $10,000 by July 15, all assets will become public domain under the CC0 public domain dedication.
This is actually the third bundle Nick has put out under the brand Commonly. It’s the most ambitious bundle to date, but it’s really just the beginning. What Nick’s really interested in isn’t just videogames; it’s changing how people think about the public domain. I met up with him a few days ago to chat about videogames, the public domain, and the open source movement.
We also talked about the increasing rift in the videogame world between the indie developers like himself and the high-budget, “Triple A” games of the big-name studios. There’s been a lot of talk recently about the videogame industry’s occasional tone-deafness to issues like race and sexuality. Nick told me that he sees a parallel conflict over issues like intellectual property and digital rights management (DRM). While many young developers like Nick share his opinions, some big-name developers are sticking to what he sees as a more old-fashioned view.
“The triple-A industry has to reach out to as massive an audience as possible,” Nick said. “They close things off because they can’t afford the risk. You notice that indie games tend toward a more open ecosystem. With the Humble Indie Bundle, ‘DRM-Free’ is part of their tag line. Indie games go with the more open ecosystems… while triple-As create their own walled gardens with game consoles.” And, he was quick to add, “The iPhone counts as a [closed] console.”
Nick recently moved to the San Francisco Bay Area – he’ll be here for the next two years as a part of the Thiel Fellowship. “You have to have a big vision [for the fellowship],” he told me, “and my big vision was a thriving public domain.”
He originally applied for the fellowship with his project Craftyy, an open source game-development platform and social network. Although the Thiel judges liked Nick’s ideas, “It wasn’t clear to them how Craftyy would lead to a thriving public domain.” That was when Nick started to shift to the idea of crowdfunding for public domain creative works. He told me that his plan for the next two years is to expand the Commonly concept beyond the world of videogame developers into the broader creative community. I can’t wait to see where Commonly goes next and what awesome stuff it brings into the public domain with it.
Throughout the #cc10 celebrations, we’re highlighting different CC-enabled media platforms, to show the breadth and diversity of the CC world. Today, as we’re talking about governmental and institutional adoption of CC tools, it seemed appropriate to discuss Europeana, the massive digital library of European history and culture.
For people who get excited about open cultural data, one of the most exciting moments of 2012 came in September, when Europeana announced that it was releasing its metadata to the public domain under the CC0 waiver. This release of 20 million records represents one of the largest one-time dedications of cultural data to the public domain.
While the data was previously available through the Europeana website, dedicating it to the public domain multiplies its usability. From the press release:
This release, which is by far the largest one-time dedication of cultural data to the public domain using CC0, offers a new boost to the digital economy, providing electronic entrepreneurs with opportunities to create innovative apps and games for tablets and smartphones and to create new web services and portals.
Europeana’s move to CC0 is a step change in open data access. Releasing data from across the memory organisations of every EU country sets an important new international precedent, a decisive move away from the world of closed and controlled data.
Importantly, the change represents a valuable contribution to the European Commission’s agenda to drive growth through digital innovation. Online open data is a core resource which can fuel enterprise and create opportunities for millions of Europeans working in Europe’s cultural and creative industries. The sector represents 3.3% of EU GDP and is worth over €150 billion in exports.
Europeana’s announcement was praised by Neelie Kroes, Vice-President of the European Commission, who said:
Open data is such a powerful idea, and Europeana is such a cultural asset, that only good things can result from the marriage of the two. People often speak about closing the digital divide and opening up culture to new audiences but very few can claim such a big contribution to those efforts as Europeana’s shift to creative commons.
The Creative Commons Affiliate teams in the Netherlands and Luxembourg, through partner organizations Institute for Information Law (IViR), Kennisland, and the Bibliothèque nationale de Luxembourg, provided expert support to Europeana during this process. Europeana has been at the forefront of exploring ways to share the European cultural record. They are one of the first adopters of CC’s Public Domain Mark and continue to support a vibrant, healthy public domain.
As reported a few weeks ago, OCLC has recommended that its member libraries adopt the Open Data Commons Attribution license (ODC-BY) when they share their library catalog data online. The recommendation to use an open license like ODC-BY is a positive step forward for OCLC because it helps communicate in advance the rights and responsibilities available to potential users of bibliographic metadata from library catalogs. But the decision by OCLC to recommend the licensing route — as opposed to releasing bibliographic metadata into the public domain — raises concerns that warrant more discussion.
OCLC says that making library data derived from WorldCat available under an open license like ODC-BY complies with their community norms. There are other options, however, that are equally compliant. Harvard Library, for example, developed an agreement with OCLC earlier this year that makes its metadata available under the CC0 Public Domain Dedication. This means that Harvard relinquishes all its copyright and related rights to that data, thereby enabling the widest variety of downstream reuse. Even though it puts this information into the public domain, Harvard requests that users provide attribution to the source as a best practice without making attribution a legally binding requirement through a license.
There are good reasons for relying on community norms for metadata attribution instead of requiring it as a condition of a licensing agreement. The requirement to provide attribution through a contract like ODC-BY is not well-suited to a world where data are combined and remixed from multiple sources and under a variety of licenses and other use restrictions. For example, the library community is experimenting with new technologies like linked data as a means of getting more value from its decades-long collective investment in cataloging data. And we’re happy to see that OCLC has released a million WorldCat records containing 80 million linked data triples in RDF. However, we believe that requiring attribution as a licensing condition introduces complexity that will make it technically difficult — if not impossible — for users to comply.
Then there is the question of how to properly attach attribution information to a discrete bit of data (e.g. a single field, subfield, or triple). OCLC has helpfully provided guidelines around attribution for its linked data, but how would these work for member libraries that follow OCLC’s recommendation to adopt the ODC-BY license when they publish their own data? Library linked data collections are often derived from small subsets of many large collections and recombined with new relationships, potentially requiring separate attribution for each data element. In the case of OCLC’s data release, imagine that a user downloads the OCLC file containing 80 million linked data triples, extracts the ones she’s interested in, and then links them to her own catalog data to create a new linked dataset. The guidelines for the WorldCat data include the option of considering a WorldCat URI to be sufficient attribution, but how would that work for the library’s own bibliographic data or for additional data drawn from non-OCLC sources? The guidelines do not include recommendations for how libraries should implement their own data in such a way that reusers can comply with the attribution requirements imposed by the ODC-BY license. The community norms and best practices for reusing library linked data are not yet well defined, so relying on them in the context of a legally binding license is troubling.
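The bookkeeping burden described above can be sketched in a few lines. The example below is a hypothetical illustration (the URIs and triples are invented, not actual WorldCat or library data): once statements from several attribution-licensed sources are merged, provenance must be tracked per triple rather than per file, effectively turning every triple into a quad.

```python
# Hypothetical illustration of per-triple provenance tracking.
# Triples are modeled as plain (subject, predicate, object) tuples.

worldcat = {  # invented sample triples standing in for WorldCat data
    ("ex:book1", "dc:title", "Moby Dick"),
    ("ex:book1", "dc:creator", "Herman Melville"),
}
local_catalog = {  # invented triples from a library's own records
    ("ex:book1", "ex:shelfmark", "PS2384 .M6"),
    ("ex:book2", "dc:title", "Walden"),
}

# Under a per-source attribution requirement, every merged triple must
# carry its origin -- each triple becomes a quad.
merged = {t: "http://worldcat.org" for t in worldcat}
merged.update({t: "http://example.org/local" for t in local_catalog})

# The list of sources a downstream reuser must credit grows with every
# collection mixed in, even for this tiny dataset:
sources = sorted(set(merged.values()))
print(sources)  # ['http://example.org/local', 'http://worldcat.org']
```

Under CC0 plus community norms, by contrast, the same merge requires no per-statement legal bookkeeping; attribution becomes a courtesy applied at whatever granularity is practical.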
A further question concerns the scope of the ODC-BY license, which covers European database rights in addition to copyright. Those database rights do not apply in the U.S., and they cover the database in its entirety but not its contents, making it uncertain whether the license can be applied to a simple file of bibliographic data. Whether copyright applies at all to bibliographic data, given its mainly factual nature, is also doubtful and varies by legal jurisdiction. While the ODC-BY license may make good sense for OCLC to apply to WorldCat itself, it would be a questionable choice for a U.S. library looking to share some of its catalog data as a downloadable file.
Moreover, because most countries outside the European Union, including the United States, do not grant protection to non-creative databases, the ODC-BY license operates at best as a contractual restriction on those downloading directly from the licensor’s website. This restriction, which is not based on any underlying exclusive property right, is unlikely to bind reusers who do not obtain the data directly from the original data provider. The absence of a binding contract, coupled with the lack of any underlying property right, means licensors may be surprised to learn they have no strong and effective remedy, such as a claim of infringement, against those downstream users. This is a known concern with the Open Database License, ODC-BY’s sister license, which has the same license-plus-contract design. Thus, in many instances the license simply will not protect the library that shared the data, or OCLC, in the manner they expect.
Another, more general concern about using a license to share bibliographic metadata has to do with technical feasibility. This is evident in the model language that OCLC recommends, which includes links to the WorldCat Rights and Responsibilities (WCRR) Record Use Policy, community norms, and an FAQ. Following these links takes readers to pages with yet more information about the requirements expected of members and non-members. The concern is not so much the opaqueness of the rules as the fact that they may become linked to a great number of records that have nothing to do with OCLC. Many members, for example, may only recently have begun re-using records from OCLC, yet the model language makes no distinction between OCLC and non-OCLC sourced records, again because there is no feasible technical way to differentiate between them. The result: attribution is wrongly given to OCLC for the whole database, and a long list of OCLC principles becomes linked to the library database’s complete contents. While the ODC-BY license and WCRR may be well-intentioned instruments for turning the WorldCat data into a “Common Pool Resource” for OCLC members, they lack the technical means to demarcate where that resource begins and ends, potentially resulting in confusion and overreaching requirements for members that try to comply. Fundamentally, this raises the question of whether library records shouldn’t simply be public goods released into the public domain.
For all of the above reasons, cultural institutions including The British Library, Europeana, the University of Michigan Library, Harvard and others have adopted the CC0 Public Domain Dedication for publishing their catalog data online. From this, we see that a truly normative approach for the library community would be a public domain dedication such as CC0, coupled with requests to provide attribution to the source (e.g. OCLC) to the extent possible. Such an approach would maximize experimentation and innovation with the cataloging data, in keeping with the mission and values of the library community, while respecting the investment of OCLC and the library community in this valuable resource.
Contributors to this post: Timothy Vollmer, MacKenzie Smith, Paul Keller, Diane Peters.
Indie musician Dan Bull released “Sharing is Caring” into the public domain using CC0. Recently, “Sharing is Caring” reached #9 on the UK independent chart and #35 on the UK R&B Chart. Creative Commons United Kingdom interviewed Dan about why he chose to release his music for free:
“It’s up to the individual musician what they want to do and it depends on their principles. In the past I have gone the way of having no licensing on my music at all, or where licensing is necessary, I make it known that I have no problem personally with people copying or remixing the music. If you want to encourage fans to engage with your music, re-interpret it and redistribute it on your behalf, then Creative Commons is a good direction to look in.”
For those who don’t know, CC0 is not a license, but a universal public domain dedication that may be used by anyone wishing to permanently surrender the copyright they may have in a work, thereby placing it as nearly as possible into the public domain. As far as we know, Dan is the first musician to break into top music charts with music that is free from copyright restrictions. Let us know if we’re wrong!
Read the full interview with Dan over at the CC UK blog.
The last few months have seen a growth in open data, particularly from governments and libraries. Among the more recent open data adopters are the Austrian government, the Italian Ministry of Education, University and Research, the Italian Chamber of Deputies, and Harvard Library.
The Italian Ministry of Education, University and Research launched its Open Data Portal under CC BY, publishing the data of Italian schools (such as address, phone number, web site, administrative code), students (number, gender, performance), and teachers (number, gender, retirement, etc.). The Ministry aims to make all of its data eventually available and open for reuse, in order to improve transparency, aid in the understanding of the Italian scholastic system, and promote the creation of new tools and services for students, teachers and families.
Lastly, Harvard Library in the U.S. has released 12 million catalog records into the public domain using the CC0 public domain dedication tool. The move is in accordance with Harvard Library’s Open Metadata Policy. The policy’s FAQ states,
“With the CC0 public domain designation, Harvard waives any copyright and related rights it holds in the metadata. We believe that this will help foster wide use and yield developments that will benefit the library community and the public.”
Harvard’s press release cites additional motivations for opening its data,
John Palfrey, Chair of the DPLA, said, “With this major contribution, developers will be able to start experimenting with building innovative applications that put to use the vital national resource that consists of our local public and research libraries, museums, archives and cultural collections.” He added that he hoped that this would encourage other institutions to make their own collection metadata publicly available.
We are excited that CC tools are being used for open data. For questions related to CC and data, see our FAQ about data, which also links to many more governments, libraries, and organizations that have opened their data.
Yesterday, Nature Publishing Group announced the launch of a new linked data platform, providing access to “20 million Resource Description Framework (RDF) statements, including primary metadata for more than 450,000 articles published by NPG since 1869. The datasets include basic citation information (title, author, publication date, etc) as well as NPG specific ontologies.” All datasets are published using the CC0 public domain dedication, which is not a license, but a legal tool that may be used by anyone wishing to permanently surrender the copyright and database rights (where they exist) they may have in a work, thereby placing it as nearly as possible into the public domain.
This is an excellent move by NPG, especially following an opinion piece they published in 2009 explicitly recommending open sharing and the use of CC0 to put data in the public domain, entitled, “Post-publication sharing of data and tools”:
“Although it is usual practice for major public databases to make data freely available to access and use, any restrictions on use should be strongly resisted and we endorse explicit encouragement of open sharing, for example under the newly available CC0 public domain waiver of Creative Commons.”
Many more organizations and institutions are using CC0 to release their data, which you can peruse at our wiki page for CC0 uses with data and databases. CC licenses are also used for data; read more about this and other issues plus an FAQ on CC and data at http://wiki.creativecommons.org/Data.
CC0 has been getting lots of love in the last couple months in the realm of data, specifically GLAM data (GLAM as in Galleries, Libraries, Archives, Museums). The national libraries of Spain and Germany have released their bibliographic data using the CC0 public domain dedication tool. For those of you who don’t know what that means, it means that the libraries have waived all copyrights to the extent possible in their jurisdictions, placing the data effectively into the public domain. What’s more, the data is available as linked open data, which means that the data sets are available as RDF (Resource Description Framework) on the web, enabling the data to be linked with other data from different sources.
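As a toy illustration of why linking matters (the URIs and triples below are invented, not records from either library’s actual dataset): because linked open data identifies the same entity with the same URI across datasets, triples from different institutions can be combined without any record matching.

```python
# Toy sketch of merging linked data from two sources. The URIs are
# invented; real datasets would use full HTTP URIs (e.g. VIAF links).

bne = {  # hypothetical triples in the style of datos.bne.es
    ("viaf:17229", "rdf:type", "foaf:Person"),
    ("viaf:17229", "foaf:name", "Miguel de Cervantes"),
}
dnb = {  # hypothetical triples in the style of the DNB's linked data
    ("viaf:17229", "dnb:authorOf", "ex:DonQuixote"),
}

# Because both sets use the same subject URI, merging is just set union:
graph = bne | dnb

# Everything either source states about the shared URI is now available
# together, with no deduplication or record matching required.
predicates = sorted(p for s, p, o in graph if s == "viaf:17229")
print(predicates)  # ['dnb:authorOf', 'foaf:name', 'rdf:type']
```

This frictionless merging is exactly what a restrictive license would complicate, and what a CC0 dedication preserves.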
The National Library of Spain teamed up with the Ontology Engineering Group (OEG) to create the data portal: datos.bne.es. The datasets can be accessed directly at http://www.bne.es/es/Catalogos/DatosEnlazados/DescargaFicheros.
The National Library of Germany, aka Deutsche Nationalbibliothek (DNB), has documentation on its linked open data under CC0 here. CC Germany reported the move, and a post in English can be found over at Open GLAM.
Relatedly, the Smithsonian Cooper-Hewitt Museum, a major design museum in New York, has released the collection data for 60% of its documented collection into the public domain, also using CC0. The data set is available in a repository on GitHub; you can read more about the move at http://www.cooperhewitt.org/collections/data.
To learn more about Creative Commons and data, including a recently updated FAQ, check out http://wiki.creativecommons.org/Data.
One week after the nuclear disaster at the Fukushima Daiichi plant in March, the Safecast project was born to respond to the information needs of Japanese citizens regarding radiation levels in their environment. Safecast, then known as RDTN.org, started a campaign on Kickstarter “to provide an aggregate feed of nuclear radiation data from governmental, non-governmental and citizen-scientist sources.” All radiation data collected via the project would be dedicated to the public domain using CC0, “available to everyone, including scientists and nuclear experts who can provide context for lay people.” Since then, more than 1.25 million data points have been collected and shared; Safecast has been featured on PBS Newshour; and the project aims to expand its scope to mapping the rest of the world.
“Safecast supports the idea that more data – freely available data – is better. Our goal is not to single out any individual source of data as untrustworthy, but rather to contribute to the existing measurement data and make it more robust. Multiple sources of data are always better and more accurate when aggregated.
While Japan and radiation are the primary focus of the moment, this work has made us aware of a need for more environmental data on a global level, and the long-term work that Safecast engages in will address these needs. Safecast is based in the US but is currently focused on outreach efforts in Japan. Our team includes contributors from around the world.”
To learn more, visit http://safecast.org. All raw data from the project is available for re-use via the CC0 public domain dedication, while other website content (such as photos and text) is available under CC BY-NC.