Throughout the #cc10 celebrations, we’re highlighting different CC-enabled media platforms, to show the breadth and diversity of the CC world. Today, as we’re talking about governmental and institutional adoption of CC tools, it seemed appropriate to discuss Europeana, the massive digital library of European history and culture.
For people who get excited about open cultural data, one of the most exciting moments of 2012 came in September, when Europeana announced that it was releasing its metadata to the public domain under the CC0 waiver. This release of 20 million records represents one of the largest one-time dedications of cultural data to the public domain.
While the data was previously available through the Europeana website, dedicating it to the public domain multiplies its usability. From the press release:
This release, which is by far the largest one-time dedication of cultural data to the public domain using CC0, offers a new boost to the digital economy, providing electronic entrepreneurs with opportunities to create innovative apps and games for tablets and smartphones and to create new web services and portals.
Europeana’s move to CC0 is a step change in open data access. Releasing data from across the memory organisations of every EU country sets an important new international precedent, a decisive move away from the world of closed and controlled data.
Importantly, the change represents a valuable contribution to the European Commission’s agenda to drive growth through digital innovation. Online open data is a core resource which can fuel enterprise and create opportunities for millions of Europeans working in Europe’s cultural and creative industries. The sector represents 3.3% of EU GDP and is worth over €150 billion in exports.
Europeana’s announcement was praised by Neelie Kroes, Vice-President of the European Commission, who said:
Open data is such a powerful idea, and Europeana is such a cultural asset, that only good things can result from the marriage of the two. People often speak about closing the digital divide and opening up culture to new audiences but very few can claim such a big contribution to those efforts as Europeana’s shift to creative commons.
The Creative Commons Affiliate teams in the Netherlands and Luxembourg, through partner organizations Institute for Information Law (IViR), Kennisland, and the Bibliothèque nationale de Luxembourg provided expert support to Europeana during this process. Europeana has been at the forefront of exploring ways to share the European cultural record. They are one of the first adopters of CC’s Public Domain Mark and continue to support a vibrant, healthy public domain.
As reported a few weeks ago, OCLC has recommended that its member libraries adopt the Open Data Commons Attribution license (ODC-BY) when they share their library catalog data online. The recommendation to use an open license like ODC-BY is a positive step forward for OCLC because it helps communicate in advance the rights and responsibilities available to potential users of bibliographic metadata from library catalogs. But the decision by OCLC to recommend the licensing route — as opposed to releasing bibliographic metadata into the public domain — raises concerns that warrant more discussion.
OCLC says that making library data derived from WorldCat available under an open license like ODC-BY complies with their community norms. There are other options, however, that are equally compliant. Harvard Library, for example, developed an agreement with OCLC earlier this year that makes its metadata available under the CC0 Public Domain Dedication. This means that Harvard relinquishes all its copyright and related rights to that data, thereby enabling the widest variety of downstream reuse. Even though it puts this information into the public domain, Harvard requests that users provide attribution to the source as a best practice without making attribution a legally binding requirement through a license.
There are good reasons for relying on community norms for metadata attribution instead of requiring it as a condition of a licensing agreement. The requirement to provide attribution through a contract like ODC-BY is not well-suited to a world where data are combined and remixed from multiple sources and under a variety of licenses and other use restrictions. For example, the library community is experimenting with new technologies like linked data as a means of getting more value from its decades-long collective investment in cataloging data. And we’re happy to see that OCLC has released a million WorldCat records containing 80 million linked data triples in RDF. However, we believe that requiring attribution as a licensing condition introduces complexity that will make it technically difficult — if not impossible — for users to comply.
Then there is the question of how to properly attach attribution information to a discrete bit of data (e.g. a single field, subfield, or triple). OCLC has helpfully provided guidelines around attribution for its linked data, but how would these work for member libraries that follow OCLC’s recommendation to adopt the ODC-BY license when they publish their own data? Library linked data collections are often derived from small subsets of many large collections and recombined with new relationships, potentially requiring separate attribution for each data element. In the case of OCLC’s data release, imagine that a user downloads the OCLC file containing 80 million linked data triples, extracts the ones she’s interested in, and then links them to her own catalog data to create a new linked dataset. The guidelines for the WorldCat data include the option of considering a WorldCat URI to be sufficient attribution, but how would that work for the library’s own bibliographic data or for additional data drawn from non-OCLC sources? The guidelines do not include recommendations for how libraries should implement their own data in such a way that reusers can comply with the attribution requirements imposed by the ODC-BY license. The community norms and best practices for reusing library linked data are not yet well defined, so relying on them in the context of a legally binding license is troubling.
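The bookkeeping burden described above can be made concrete with a small sketch. The example below is purely illustrative (the URIs, source names, and data structures are invented, not drawn from OCLC's actual data or guidelines): it merges triples from two sources while tracking which source each triple came from, showing that under an attribution license every individual triple in a remixed dataset would need its own provenance record to remain attached through every later reuse.

```python
# Hypothetical sketch: combining linked-data triples from two sources while
# tracking the attribution an ODC-BY-style license would require per triple.
# All URIs and source names below are invented for illustration.

worldcat_triples = [
    ("http://worldcat.example/oclc/1", "dc:title", "Moby Dick", "OCLC"),
    ("http://worldcat.example/oclc/2", "dc:title", "Walden", "OCLC"),
]
local_triples = [
    ("http://library.example/bib/9", "dc:title", "Local History", "Example Library"),
]

def merge_with_provenance(*sources):
    """Merge triple sets, keeping a per-triple record of origin.

    Under an attribution license, each of these provenance records would
    have to stay attached to its triple through every downstream remix.
    """
    merged = []
    for triples in sources:
        for s, p, o, origin in triples:
            merged.append({"triple": (s, p, o), "attribute_to": origin})
    return merged

dataset = merge_with_provenance(worldcat_triples, local_triples)
# Even this tiny merged dataset already requires attribution to two parties:
required_attributions = {rec["attribute_to"] for rec in dataset}
```

With millions of triples drawn from dozens of sources, this per-triple provenance tracking is exactly the complexity the post argues makes license-based attribution technically impractical.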
Another question arises about the scope of the ODC-BY license, which addresses European database rights in addition to copyright. Those database rights do not apply in the U.S., and they cover a database in its entirety rather than its contents, making it uncertain whether the license can meaningfully be applied to a simple file of bibliographic data. Whether copyright applies at all to bibliographic data, given its mainly factual nature, is itself doubtful and differs depending on legal jurisdiction. While the ODC-BY license may make good sense for OCLC to apply to WorldCat itself, it would be a questionable choice for a U.S. library looking to share some of its catalog data as a downloadable file.
Moreover, because most countries outside of the European Union — including the United States — do not grant protection to non-creative databases, the ODC-BY license operates at best as a contractual restriction on those downloading directly from the licensor’s website. This restriction, which is not based on any underlying exclusive property right, is unlikely to bind reusers that do not obtain the data directly from the original data provider. The absence of a binding contract, coupled with the lack of any underlying property right, means licensors may be surprised to learn they do not have a strong and effective remedy, such as a claim of infringement, against downstream users. This is a known concern with the Open Database License, ODC-BY’s sister license, which has the same license-plus-contract design. Thus, in many instances the license simply will not protect the library that shared the data, or OCLC, in the manner they expect.
Another, more general concern about using a license to share bibliographic metadata has to do with its technical feasibility. This is evident in the model language that OCLC recommends, which includes links to the WorldCat Rights and Responsibilities (WCRR) record use policy, community norms, and an FAQ. Following these links takes readers to pages with yet more information about the requirements expected of members and non-members. The concern is not so much the opaqueness of the rules, but that they may become linked to a great number of records which have nothing to do with OCLC. For example, many members may only have started fairly recently to reuse records from OCLC, yet the model language makes no distinction between OCLC- and non-OCLC-sourced records, because there is no feasible technical way to differentiate between them. The result: attribution is (wrongly) given to OCLC for the whole database, and a large number of OCLC principles become linked to the library database’s complete contents. While the ODC-BY license and the WCRR may be well-intentioned instruments for turning the WorldCat data into a common pool resource for OCLC members, they lack the technical means to demarcate where that resource begins and ends, potentially resulting in confusion and overreaching requirements for members that try to comply. Fundamentally, this raises the question of whether library records shouldn’t simply be public goods released into the public domain.
For all of the above reasons, cultural institutions including The British Library, Europeana, the University of Michigan Library, Harvard and others have adopted the CC0 Public Domain Dedication for publishing their catalog data online. From this, we see that a truly normative approach for the library community would be a public domain dedication such as CC0, coupled with requests to provide attribution to the source (e.g. OCLC) to the extent possible. Such an approach would maximize experimentation and innovation with the cataloging data, in keeping with the mission and values of the library community, while respecting the investment of OCLC and the library community in this valuable resource.
Contributors to this post: Timothy Vollmer, MacKenzie Smith, Paul Keller, Diane Peters.
Indie musician Dan Bull released “Sharing is Caring” into the public domain using CC0. Recently, “Sharing is Caring” reached #9 on the UK independent chart and #35 on the UK R&B Chart. Creative Commons United Kingdom interviewed Dan about why he chose to release his music for free:
“It’s up to the individual musician what they want to do and it depends on their principles. In the past I have gone the way of having no licensing on my music at all, or where licensing is necessary, I make it known that I have no problem personally with people copying or remixing the music. If you want to encourage fans to engage with your music, re-interpret it and redistribute it on your behalf, then Creative Commons is a good direction to look in.”
For those who don’t know, CC0 is not a license, but a universal public domain dedication that may be used by anyone wishing to permanently surrender the copyright they may have in a work, thereby placing it as nearly as possible into the public domain. As far as we know, Dan is the first musician to break into top music charts with music that is free from copyright restrictions. Let us know if we’re wrong!
Read the full interview with Dan over at the CC UK blog.
The last few months have seen a growth in open data, particularly from governments and libraries. Among the more recent open data adopters are the Austrian government, the Italian Ministry of Education, University and Research, the Italian Chamber of Deputies, and Harvard Library.
The Italian Ministry of Education, University and Research launched its Open Data Portal under CC BY, publishing the data of Italian schools (such as address, phone number, web site, administrative code), students (number, gender, performance), and teachers (number, gender, retirement, etc.). The Ministry aims to make all of its data eventually available and open for reuse, in order to improve transparency, aid in the understanding of the Italian scholastic system, and promote the creation of new tools and services for students, teachers and families.
Lastly, Harvard Library in the U.S. has released 12 million catalog records into the public domain using the CC0 public domain dedication tool. The move is in accordance with Harvard Library’s Open Metadata Policy. The policy’s FAQ states,
“With the CC0 public domain designation, Harvard waives any copyright and related rights it holds in the metadata. We believe that this will help foster wide use and yield developments that will benefit the library community and the public.”
Harvard’s press release cites additional motivations for opening its data,
John Palfrey, Chair of the DPLA, said, “With this major contribution, developers will be able to start experimenting with building innovative applications that put to use the vital national resource that consists of our local public and research libraries, museums, archives and cultural collections.” He added that he hoped that this would encourage other institutions to make their own collection metadata publicly available.
We are excited that CC tools are being used for open data. For questions related to CC and data, see our FAQ about data, which also links to many more governments, libraries, and organizations that have opened their data.
Yesterday, Nature Publishing Group announced the launch of a new linked data platform, providing access to “20 million Resource Description Framework (RDF) statements, including primary metadata for more than 450,000 articles published by NPG since 1869. The datasets include basic citation information (title, author, publication date, etc) as well as NPG specific ontologies.” All datasets are published using the CC0 public domain dedication, which is not a license, but a legal tool that may be used by anyone wishing to permanently surrender the copyright and database rights (where they exist) they may have in a work, thereby placing it as nearly as possible into the public domain.
This is an excellent move by NPG, especially following an opinion piece they published in 2009 explicitly recommending open sharing and the use of CC0 to put data in the public domain, entitled, “Post-publication sharing of data and tools”:
“Although it is usual practice for major public databases to make data freely available to access and use, any restrictions on use should be strongly resisted and we endorse explicit encouragement of open sharing, for example under the newly available CC0 public domain waiver of Creative Commons.”
Many more organizations and institutions are using CC0 to release their data, which you can peruse at our wiki page for CC0 uses with data and databases. CC licenses are also used for data; read more about this and other issues plus an FAQ on CC and data at http://wiki.creativecommons.org/Data.
CC0 has been getting lots of love in the last couple months in the realm of data, specifically GLAM data (GLAM as in Galleries, Libraries, Archives, Museums). The national libraries of Spain and Germany have released their bibliographic data using the CC0 public domain dedication tool. For those of you who don’t know what that means, it means that the libraries have waived all copyrights to the extent possible in their jurisdictions, placing the data effectively into the public domain. What’s more, the data is available as linked open data, which means that the data sets are available as RDF (Resource Description Framework) on the web, enabling the data to be linked with other data from different sources.
The National Library of Spain teamed up with the Ontology Engineering Group (OEG) to create the data portal: datos.bne.es. The datasets can be accessed directly at http://www.bne.es/es/Catalogos/DatosEnlazados/DescargaFicheros.
The National Library of Germany, aka Deutsche Nationalbibliothek (DNB), has documentation on its linked open data under CC0 here. CC Germany reported the move, and a post in English can be found over at Open GLAM.
Relatedly, the Smithsonian Cooper-Hewitt Museum, a major design museum in New York, has released the collection data for 60% of its documented collection into the public domain, also using CC0. The data set is available in a repository on GitHub; you can read more about the move at http://www.cooperhewitt.org/collections/data.
To learn more about Creative Commons and data, including a recently updated FAQ, check out http://wiki.creativecommons.org/Data.
One week after the nuclear disaster at the Fukushima Daiichi plant in March, the Safecast project was born to respond to the information needs of Japanese citizens regarding radiation levels in their environment. Safecast, then known as RDTN.org, started a campaign on Kickstarter “to provide an aggregate feed of nuclear radiation data from governmental, non-governmental and citizen-scientist sources.” All radiation data collected via the project would be dedicated to the public domain using CC0, “available to everyone, including scientists and nuclear experts who can provide context for lay people.” Since then, more than 1.25 million data points have been collected and shared; Safecast has been featured on PBS Newshour; and the project aims to expand its scope to mapping the rest of the world.
“Safecast supports the idea that more data – freely available data – is better. Our goal is not to single out any individual source of data as untrustworthy, but rather to contribute to the existing measurement data and make it more robust. Multiple sources of data are always better and more accurate when aggregated.
While Japan and radiation is the primary focus of the moment, this work has made us aware of a need for more environmental data on a global level and the longterm work that Safecast engages in will address these needs. Safecast is based in the US but is currently focused on outreach efforts in Japan. Our team includes contributors from around the world.”
To learn more, visit http://safecast.org. All raw data from the project is available for re-use via the CC0 public domain dedication, while other website content (such as photos and text) is available under CC BY-NC.
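The kind of multi-source aggregation Safecast describes can be sketched in a few lines. The example below is purely illustrative (the field names and values are invented, not Safecast's actual schema or API): it pools dose-rate readings from different sources and averages those that share a location, reflecting the quote's point that multiple sources become more robust when aggregated.

```python
# Illustrative sketch (invented schema, not Safecast's real data format):
# pooling radiation readings from multiple sources into per-location averages.

readings = [
    {"lat": 37.42, "lon": 141.03, "usv_per_h": 0.32, "source": "citizen"},
    {"lat": 37.42, "lon": 141.03, "usv_per_h": 0.28, "source": "government"},
    {"lat": 35.68, "lon": 139.69, "usv_per_h": 0.08, "source": "citizen"},
]

def aggregate_by_location(readings):
    """Average dose-rate readings that share a lat/lon, pooling all sources."""
    sums = {}
    for r in readings:
        key = (r["lat"], r["lon"])
        total, count = sums.get(key, (0.0, 0))
        sums[key] = (total + r["usv_per_h"], count + 1)
    return {key: total / count for key, (total, count) in sums.items()}

averages = aggregate_by_location(readings)
```

Because the raw data carries no license restrictions under CC0, anyone can run this kind of aggregation over Safecast's feed alongside government or other third-party measurements without attribution bookkeeping.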
Yesterday, Europeana — Europe’s digital library, museum and archive, and the first major adopter of the Public Domain Mark for works in the worldwide public domain — published and made available The Europeana Licensing Framework using the CC0 public domain dedication. The licensing framework encompasses and is a follow-on to the recent Data Exchange Agreement that Europeana adopted in September, and which Europe’s national librarians publicly supported weeks later.
In Europeana’s own words, the licensing framework “underpins Europeana’s Strategic Plan” for 2011-2015:
“The goal of the Europeana Licensing Framework is to standardize and harmonize rights-related information and practices. Its intention is to bring clarity to a complex area, and make transparent the relationship between the end-users and the institutions that provide data.”
“Users need good and reliable information about what they may do with [content]. Whether they can freely re-use it for their educational, creative or even commercial projects or not. The Europeana Licensing Framework therefore asks data providers to provide structured rights information in the metadata they provide about the content that is accessible through Europeana. Doing so makes it easier for users to filter content by the different re-use options they have – by ‘public domain’, for example and hence easier for users to comply with licensing terms.”
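The filtering the framework describes can be sketched simply. The example below is a minimal illustration (the record structure is invented; real Europeana metadata follows the Europeana Data Model, and only the Public Domain Mark URI is a real identifier): given records carrying a structured rights statement, users can filter down to just the re-use options they need.

```python
# Minimal sketch of rights-based filtering, assuming a hypothetical flat
# record structure (real Europeana metadata uses the EDM schema instead).

PUBLIC_DOMAIN_MARK = "http://creativecommons.org/publicdomain/mark/1.0/"

records = [
    {"title": "Het melkmeisje", "rights": PUBLIC_DOMAIN_MARK},
    {"title": "Newspaper scan", "rights": "http://creativecommons.org/licenses/by-sa/3.0/"},
    {"title": "Old map", "rights": PUBLIC_DOMAIN_MARK},
]

def filter_by_rights(records, rights_uri):
    """Return only the records whose structured rights statement matches."""
    return [r for r in records if r["rights"] == rights_uri]

pd_records = filter_by_rights(records, PUBLIC_DOMAIN_MARK)
```

This is only possible because providers attach machine-readable rights statements to each record, which is precisely what the framework asks of them.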
The framework supports re-use of data and content through CC legal tools (the CC0 public domain dedication, the Public Domain Mark, and CC BY-SA), providing guidelines for their appropriate application. Download the Europeana Licensing Framework (pdf) or peruse the full set of resources at Europeana Connect.
Relatedly, see Europeana’s white paper no. 2 published last month, The Problem of the Yellow Milkmaid: A Business Model Perspective on Open Metadata (pdf). The white paper “explore[s] in detail the risks and rewards of open data from different perspectives” after “extensive consultation with the heritage sector, including dozens of workshops.” It opens:
“‘The Milkmaid’, one of Johannes Vermeer’s most famous pieces, depicts a scene of a woman quietly pouring milk into a bowl. During a survey the Rijksmuseum discovered that there were over 10,000 copies of the image on the internet—mostly poor, yellowish reproductions. As a result of all of these low-quality copies on the web, according to the Rijksmuseum, “people simply didn’t believe the postcards in our museum shop were showing the original painting. This was the trigger for us to put high-resolution images of the original work with open metadata on the web ourselves. Opening up our data is our best defence against the ‘yellow Milkmaid’.”
In other news: