In the past few weeks, the Foundation Center and the philanthropic world have taken two big steps forward in transparency. First, 15 of the nation’s largest foundations joined the “Reporting Commitment,” agreeing to release grant information regularly through Foundation Center’s Glasspockets repository. Then last week, the Foundation Center relaunched IssueLab, an extensive repository of third-sector research. IssueLab’s mission is to “gather, index, and share the collective intelligence of the social sector” more effectively.
All of the IssueLab metadata is licensed under CC BY-NC-SA, and all of the content is free to access (for reading, if not necessarily for other uses). Everything released to Glasspockets under the Reporting Commitment is licensed under CC BY-NC.
Taken together, these initiatives present some interesting possibilities for the future of open data in the foundation space. Foundation Center president Bradford K. Smith discussed the implications of both initiatives in a blog post:
If you think foundations are only ATM machines and nonprofits just service providers, think again. With the launch of IssueLab, there is one place you can go to find more than eleven thousand knowledge products published, funded, produced, and/or generated by foundations and nonprofits in the U.S. and around the globe.
Last month, the Foundation Center announced the Reporting Commitment, an effort by fifteen of America’s largest philanthropic foundations to make their grants data — who they give money to, how much, where, and for what purpose — available in an open, machine-readable format. Starting today, through IssueLab, the social sector can also access what it knows as a result of that funding. A service of the Foundation Center, IssueLab gathers, indexes, and shares the sector’s collective intelligence on a free, open, and searchable platform, and encourages users to share, copy, distribute, and even adapt the work. It’s a big step for philanthropy and “open knowledge.”
Smith went on to explain why it’s important that these resources aren’t just freely available; they’re openly licensed too:
Free is good, but IssueLab promotes openness in a number of other ways. First, the metadata — the abstracts and “tags” developed for all reports in the collection — is available under a Creative Commons license and can be grabbed and/or remixed by anyone as long as they use it for non-commercial purposes. Second, only work that is available for free is included in the IssueLab collection. These are public “assets,” in that the organizations which produced them already have tax-exempt status and/or have received government funding, and they should be easy for the public to find. Sorry but Kardashian Konfidential will not be found on IssueLab. Third, IssueLab itself is an open-source platform whose underlying codebase/framework is continually being improved by a community of developers. And fourth, our own developers embrace the Open Archives Initiative (OAI), which develops and promotes interoperability standards to facilitate the efficient dissemination of online content.
Here at Creative Commons, we’re big proponents of foundations and other institutions sharing their data — and the works they produce or fund — under an open license. It makes sense for foundations to reciprocate the public’s trust by showing how philanthropic dollars have been spent, and the foundations that join the Reporting Commitment make that information available much sooner and more easily than the federally required information returns do. Through Glasspockets, the public can see and compare the activities of the participating foundations. Private foundations are tax-exempt because they are dedicated to the public benefit; those that share their data and research in ways that invite the reuse and contributions of others add a valuable new dimension to their public service.
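Smith’s fourth point, support for the Open Archives Initiative, is worth unpacking for developers. Below is a minimal sketch of harvesting Dublin Core metadata from an OAI-PMH endpoint in Python; the endpoint URL is a placeholder rather than IssueLab’s actual address, and the requests library is assumed to be installed, so treat this as an illustration of the protocol rather than a recipe for this particular repository.

```python
# Minimal OAI-PMH harvest sketch; the endpoint URL below is a placeholder, not IssueLab's real address.
import requests
import xml.etree.ElementTree as ET

OAI_ENDPOINT = "https://example.org/oai"  # hypothetical; substitute the repository's actual endpoint
NAMESPACES = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def list_titles(endpoint):
    """Fetch one page of records with the standard ListRecords verb and return their Dublin Core titles."""
    response = requests.get(
        endpoint,
        params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
        timeout=30,
    )
    response.raise_for_status()
    root = ET.fromstring(response.content)
    return [title.text for title in root.findall(".//dc:title", NAMESPACES)]

if __name__ == "__main__":
    for title in list_titles(OAI_ENDPOINT):
        print(title)
```

OAI-PMH pages its results with resumption tokens, so a full harvester would keep requesting ListRecords until the response no longer contains one.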
Some of these developments may be dated by a month or more, but we want to make sure they are on your radar by pointing them out here.
Several open data portals have launched, including a Brazilian Open Data portal powered by the open-source data cataloguing software CKAN (run by the Open Knowledge Foundation – OKFN). The Ministry of Planning in Brazil worked with the OKFN to develop the portal, cultivating citizen participation through an open and transparent development process. Furthermore, the portal itself carries a default license of CC BY-SA. Since its May 4 launch, the portal has grown and now hosts 79 data sets and 893 resources. As noted on the OKFN blog, “the portal is part of a larger project called the National Infrastructure Open Data, or INDA. The general idea of INDA is to establish technical standards for open data, promote training and support public bodies in the task of publishing open data. This entire process is done through intra-government cooperation and cooperation between government and citizens, always aiming to achieve a real platform for open government.”
You should also take note of the Open GLAM data portal. This portal also runs on CKAN and serves as a hub for open data sets from GLAM institutions (Galleries, Libraries, Archives, and Museums). The datasets are licensed under various open licenses, and some carry no rights at all thanks to the CC0 public domain waiver.
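Because both portals run on CKAN, their contents can be queried programmatically through CKAN’s standard Action API. The sketch below searches a portal for datasets matching a keyword; the base URL is a placeholder, and individual portals may run different CKAN versions or configurations, so check the portal’s own API documentation before relying on it.

```python
# Sketch: search a CKAN-powered portal for datasets; the base URL is a placeholder.
import requests

PORTAL_URL = "https://example-dados.gov.br"  # hypothetical; substitute the portal's real base URL

def search_datasets(query):
    """Call CKAN's standard package_search action and return the matching dataset records."""
    response = requests.get(
        f"{PORTAL_URL}/api/3/action/package_search",
        params={"q": query},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["result"]["results"]

# Print each matching dataset's title alongside its declared license.
for dataset in search_datasets("educacao"):
    print(dataset.get("title"), "-", dataset.get("license_id"))
```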
In addition to open data portals, open data initiatives like the School of Data and the Open Data Institute are taking off. The School of Data is a collaboration between the OKFN and the Peer 2 Peer University (P2PU) to “create a set of courses for people to learn how to do interesting things with data, from beginners to experts.” In late May, the School of Data held a week-long kick-off sprint in Berlin with a virtual component; I took part remotely, helping to start an open data challenge with virtual colleagues. The challenge is still in development, and once complete it will be part of the School of Open as well as the School of Data. You can help build it at the P2PU platform.
The kick-off yielded a great foundation for many other data tracks as part of the School of Data, which you can read about here.
The Open Data Institute is an initiative by the UK government to “incubate, nurture and mentor new businesses exploiting Open Data for economic growth” and to “promote innovation driven by the UK Government Open Data policy.” The Technology Strategy Board, a non-departmental public body, will invest £10m over five years. The UK government has published its implementation plan as a PDF online; you can learn more in The Guardian’s article from last May.
The data-driven economy is also a hot topic within the EU: a dedicated data session is part of the European Commission’s 2nd Digital Agenda Assembly, taking place today and tomorrow. The workshop will “explore the potential of data, some of the most promising economic and business aspects involved, and discuss how policy for data and our investment in R&D can better address the challenges of businesses and the public sector and further support innovative business development.”
Lastly, to put all the current activity around data into perspective, the OKFN’s Jonathan Gray has written a thoughtful article in the Guardian on “What data can and cannot do.” It reinforces the point that data, while valuable, is not very effective when divorced from context and interpretation. Gray encourages us to “cultivate a more critical literacy” towards data:
“Data can be an immensely powerful asset, if used in the right way. But as users and advocates of this potent and intoxicating stuff we should strive to keep our expectations of it proportional to the opportunity it represents.”
Essentially, opening up data is just the first step — and arguably, a necessary step to ensuring that data can be reused, contextualized, and interpreted in meaningful ways.
To learn more about how CC tools may be applied to data, see our landing page and FAQ on data.
This Saturday’s International Journalism Festival in Perugia, Italy will unveil a months-long collaborative effort — the Data Journalism Handbook, a free, CC BY-SA licensed book to help journalists find and use data for better news reporting.
A joint initiative of the European Journalism Centre and the Open Knowledge Foundation, the collaborative book effort was kicked off at the 2011 Mozilla Festival: Media, Freedom and the Web — which gathered reporters, data journalism practitioners, advocates, and journalism and related organizations from around the globe. Over three days, participants researched, wrote, and edited chapters of the handbook. Contributors include the Australian Broadcasting Corporation, the BBC, the Chicago Tribune, Deutsche Welle, the Guardian, the Financial Times, La Nacion, The New York Times, ProPublica, The Washington Post, and many others — including Creative Commons. Creative Commons contributed to various pieces of the “Getting Data” section, including “Using and Sharing Data: the Black Letter, Fine Print, and Reality.” You can preview the outline here.
From the announcement,
Now more than ever, journalists need to know how to work with data. From covering public spending to elections, the Wikileaks cables to the financial crisis – journalists need to know where to find and request key datasets, how to make sense of them, and how to present them to the public.
Jonathan Gray, lead editor for the handbook, says: “The book gives us an unprecedented, behind-the-scenes look at how data is used by journalists around the world – from big news organisations to citizen reporters. We hope it will serve to inform and inspire a new generation of data journalists to use the information around us to communicate complex and important issues to the public.”
You can sign up to get the handbook when it goes live at http://www.datajournalismhandbook.org. The entire handbook will be available for free under CC BY-SA, with an alternative printed version and e-book to be published by O’Reilly Media.
Creative Commons is seeking a Project Coordinator for Science and Data! The Project Coordinator will organize, coordinate, and manage projects related to data policy and governance; perform research and analysis on data governance topics across relevant sectors, particularly science; and communicate the project’s results and recommendations through writing and related outreach.
We are looking for someone experienced in policy analysis, development, and processes, as well as in open source software, open access/open data, and other open content projects. A science and/or legal background with international experience is highly desirable, especially as the position will represent Creative Commons at global events in the open data and open science communities! See the job posting and apply at our opportunities page.
We will stop accepting applications after 11:59 p.m. PDT, May 25, 2012.
The last few months have seen growth in open data, particularly from governments and libraries. Among the more recent open data adopters are the Austrian government, the Italian Ministry of Education, University and Research, the Italian Chamber of Deputies, and Harvard Library.
The Italian Ministry of Education, University and Research launched its Open Data Portal under CC BY, publishing data about Italian schools (such as address, phone number, website, and administrative code), students (number, gender, performance), and teachers (number, gender, retirement, etc.). The Ministry aims eventually to make all of its data available and open for reuse, in order to improve transparency, aid understanding of the Italian scholastic system, and promote the creation of new tools and services for students, teachers, and families.
Lastly, Harvard Library in the U.S. has released 12 million catalog records into the public domain using the CC0 public domain dedication tool. The move is in accordance with Harvard Library’s Open Metadata Policy. The policy’s FAQ states,
“With the CC0 public domain designation, Harvard waives any copyright and related rights it holds in the metadata. We believe that this will help foster wide use and yield developments that will benefit the library community and the public.”
Harvard’s press release cites additional motivations for opening its data,
John Palfrey, Chair of the DPLA, said, “With this major contribution, developers will be able to start experimenting with building innovative applications that put to use the vital national resource that consists of our local public and research libraries, museums, archives and cultural collections.” He added that he hoped that this would encourage other institutions to make their own collection metadata publicly available.
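For developers who want to work with the released records, library catalog data of this kind is typically distributed as MARC21. Here is a minimal sketch using the pymarc library; the filename is a hypothetical local file standing in for one of the bulk downloads, and MARC field 245 is the standard title statement.

```python
# Sketch: read bibliographic records from a MARC file with pymarc (pip install pymarc).
# The filename is a hypothetical local file standing in for one of the bulk downloads.
from pymarc import MARCReader

with open("harvard_records.mrc", "rb") as handle:
    for record in MARCReader(handle):
        # MARC field 245 holds the title statement; subfields a and b carry the title and remainder.
        for field in record.get_fields("245"):
            print(" ".join(field.get_subfields("a", "b")))
```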
We are excited that CC tools are being used for open data. For questions related to CC and data, see our FAQ about data, which also links to many more governments, libraries, and organizations that have opened their data.
Today we’re pleased to announce that Athabasca University, BCcampus, and the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic have joined together to re-establish a CC affiliate team in Canada. All three organizations will take part in the official relaunch at the Creative Commons Salon Ottawa: Open Data on Friday, March 30.
This is not a new affiliate so much as a re-ignition of our existing Canadian community. Since 2004, a number of volunteers, interns and affiliate leads have supported and promoted CC and the use of open licenses generally in a Canadian context. This new team, representing three organizations spread across the geographic and cultural expanse of Canada, will be a key asset to support and lead the CC activities of this community.
Through public outreach, community building, tools, research, and resources this team will work with a network of open supporters to maximize digital creativity, sharing and innovation across Canada. The work of CC Canada is aligned with the overarching vision of Creative Commons — to help provide universal access to research and education, and full participation in culture to drive a new era of development, growth and productivity.
Whether you’re an artist, teacher, scientist, librarian, policymaker or just a regular citizen, Creative Commons provides you with a free, public, and standardized set of tools and licenses that create a balance between the reality of the Internet and the reality of copyright laws. CC Canada joins over four hundred other affiliates working in seventy-two jurisdictions around the world in supporting the use of Creative Commons infrastructure. Collectively this global network is creating a vast and growing digital commons of content that can be copied, distributed, edited, remixed, and built upon, all within the boundaries of copyright law.
Be sure to check out the CC Canada roadmap on the wiki. Congratulations to the CC Canada affiliate team!
In November we wrote that the White House Office of Science and Technology Policy (OSTP) was soliciting comments on two related Requests for Information (RFI). One asked for feedback on how the federal government should manage public access to scholarly publications resulting from federal investments, and the other wanted input on public access to the digital data funded by federal tax dollars.
Creative Commons submitted a response to both RFIs. Below is a brief summary of the main points. Several other groups and individuals have submitted responses to OSTP, and all the comments will eventually be made available on the OSTP website.
- The public funds tens of billions of dollars in research each year. By instituting an open licensing policy, the federal government can support scientific innovation and productivity and increase the economic efficiency of the taxpayer dollars it expends.
- Scholarly articles created as a result of federally funded research should be released under full open access. Full open access policies give the public immediate, cost-free online access to federally funded research, with no restriction beyond attribution to the source.
- The standard means of granting the public permissions consistent with full open access is the Creative Commons Attribution (CC BY) license.
- If the federal government wants to maximize the impact of digital data resulting from federally funded scientific research, it should provide explicit, easy-to-understand information about the rights available to the public.
- The federal government should establish policies that ensure the public has cost-free, unimpeded access to the digital data resulting from federally funded scientific research. Access to this data should be made available as soon as possible, with due consideration to confidentiality and privacy issues, as well as the researchers’ need to receive credit for and benefit from their work.
- The federal government can grant these permissions to the public by supporting policies whereby 1) data is dedicated to the public domain, or 2) data is made available under a liberal license that at most requires downstream users to credit the source of the data. CC offers tools such as the CC0 waiver and the CC BY license in support of these goals; a sketch of what such machine-readable rights labeling might look like follows this list.
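As a rough illustration of the last two points, here is what explicit, machine-readable rights labeling for a dataset might look like. The field names loosely follow DCAT and schema.org conventions and every value is invented for the example; this is not a schema mandated by any federal policy.

```python
# Sketch: a machine-readable dataset description that states its reuse terms explicitly.
# Field names loosely follow DCAT/schema.org conventions; the values are invented for illustration.
import json

dataset_description = {
    "title": "Example federally funded measurement series",
    "description": "Hypothetical dataset used to illustrate explicit rights labeling.",
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",  # CC0 public domain dedication
    "creditText": "Example Research Group",
    "distribution": [
        {"format": "text/csv", "downloadURL": "https://example.org/data.csv"},
    ],
}

print(json.dumps(dataset_description, indent=2))
```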
Relatedly, we have published a new set of FAQs on how CC licenses apply to data and databases. These FAQs are intended to:
(1) alert CC licensors that some uses of their data and databases may not trigger the license conditions,
(2) reiterate to licensees that CC licenses do not restrict them from doing anything they are otherwise permitted to do under the law, and
(3) clear up confusion about how the version 3.0 CC licenses treat sui generis database rights.
To develop FAQs to meet these goals, we focused on the following considerations:
- We cannot answer the question of whether and to what extent data and databases are subject to copyright as a general matter. Instead, we can arm licensors and licensees with the questions to ask to make their own determination.
- Complex legal questions about copyright law are not unique to data and databases. (Copyright exceptions and limitations raise similar quandaries, as does the question of what constitutes an adaptation, etc.) We should keep this in mind before we over-complicate and over-explain the nuances of CC licenses as they relate to data. On the other hand, it is important to acknowledge there are significant limitations of copyright law as it applies to purely factual data and databases, so CC licensors are not misled about what they get by applying a CC license to their works.
- We need to make clear that, unless the licensor chooses to delineate, CC licenses don’t distinguish between data and databases. All copyrightable content within the scope of the license is treated the same; the only difference is how the law operates with respect to different types of content. Nonetheless, if we over-emphasize this point we risk misleading the public about the practical application of CC licenses to data and databases.
- CC’s interpretation of how its licenses apply to data and databases raises intricate policy decisions for CC. Specifically, CC has to navigate the inherent tension between, on the one hand, arguing against the current international regime of overly restrictive copyright control and, on the other, advocating an interpretation of copyright law that maximizes proprietary control over factual data. CC has made policy decisions about data in the past after extensive deliberation with our community. Now, as we prepare for version 4.0, we ask our community to help us re-examine prior decisions in light of policy developments over the past five years. Please contribute to the discussions about licensing database rights in 4.0, as well as other related issues.
For those of you who have watched or participated in CC’s work in the data arena over the years, these FAQs update and now fully replace the original data FAQs published by Science Commons. The law has not changed materially since those FAQs were first published, but Creative Commons (which now fully integrates Science Commons) has worked to clarify how its 3.0 licenses work with databases in practice, rather than focusing on the normative question of whether and how users should apply (or not apply) our licenses in that regard, which was clearly the focus of the earlier FAQs.
We hope this new resource will be useful to those of you grappling with data licensing and that it helps clarify how our licenses operate in practice. We welcome your feedback.
One week after the nuclear disaster at the Fukushima Daiichi plant in March, the Safecast project was born to respond to the information needs of Japanese citizens regarding radiation levels in their environment. Safecast, then known as RDTN.org, started a campaign on Kickstarter “to provide an aggregate feed of nuclear radiation data from governmental, non-governmental and citizen-scientist sources.” All radiation data collected via the project would be dedicated to the public domain using CC0, “available to everyone, including scientists and nuclear experts who can provide context for lay people.” Since then, more than 1.25 million data points have been collected and shared; Safecast has been featured on PBS NewsHour; and the project aims to expand its scope to mapping the rest of the world.
“Safecast supports the idea that more data – freely available data – is better. Our goal is not to single out any individual source of data as untrustworthy, but rather to contribute to the existing measurement data and make it more robust. Multiple sources of data are always better and more accurate when aggregated.
While Japan and radiation is the primary focus of the moment, this work has made us aware of a need for more environmental data on a global level and the longterm work that Safecast engages in will address these needs. Safecast is based in the US but is currently focused on outreach efforts in Japan. Our team includes contributors from around the world.”
To learn more, visit http://safecast.org. All raw data from the project is available for reuse via the CC0 public domain dedication, while other website content (such as photos and text) is available under CC BY-NC.
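Because the raw measurements are CC0, anyone can fold them into their own analyses without asking permission. The snippet below is a minimal sketch of summarizing a downloaded CSV export of readings; the filename and column names are assumptions made for illustration, so check the actual export format before reusing it.

```python
# Sketch: summarize a CSV export of radiation measurements.
# The filename and column names ("latitude", "longitude", "value") are assumptions for illustration.
import csv
from collections import defaultdict
from statistics import mean

readings = defaultdict(list)

with open("measurements.csv", newline="") as handle:  # placeholder for a downloaded export
    for row in csv.DictReader(handle):
        # Bucket readings on a coarse lat/lon grid so nearby points are averaged together.
        key = (round(float(row["latitude"]), 2), round(float(row["longitude"]), 2))
        readings[key].append(float(row["value"]))

for (lat, lon), values in sorted(readings.items()):
    print(f"{lat:7.2f}, {lon:7.2f}: mean {mean(values):.1f} (n={len(values)})")
```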
Yesterday, Europeana — Europe’s digital library, museum and archive, and the first major adopter of the Public Domain Mark for works in the worldwide public domain — published and made available The Europeana Licensing Framework using the CC0 public domain dedication. The licensing framework encompasses and is a follow-on to the recent Data Exchange Agreement that Europeana adopted in September, and which Europe’s national librarians publicly supported weeks later.
In Europeana’s own words, the licensing framework “underpins Europeana’s Strategic Plan” for 2011-2015:
“The goal of the Europeana Licensing Framework is to standardize and harmonize rights-related information and practices. Its intention is to bring clarity to a complex area, and make transparent the relationship between the end-users and the institutions that provide data.”
“Users need good and reliable information about what they may do with [content]. Whether they can freely re-use it for their educational, creative or even commercial projects or not. The Europeana Licensing Framework therefore asks data providers to provide structured rights information in the metadata they provide about the content that is accessible through Europeana. Doing so makes it easier for users to filter content by the different re-use options they have – by ‘public domain’, for example and hence easier for users to comply with licensing terms.”
The framework supports re-use of data and content through CC legal tools (the CC0 public domain dedication, the Public Domain Mark, and CC BY-SA), providing guidelines for their appropriate application. Download the Europeana Licensing Framework (PDF) or peruse the full set of resources at Europeana Connect.
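The practical payoff of structured rights information is that reuse decisions can be automated. The toy sketch below filters item metadata by the rights URI each record carries; the records themselves are invented for the example, but the CC0 and Public Domain Mark URIs are the real identifiers these CC tools use.

```python
# Sketch: filter item metadata by the rights statement each record carries.
# The records are invented; the rights URIs are the real CC identifiers.
PUBLIC_DOMAIN_URIS = {
    "http://creativecommons.org/publicdomain/zero/1.0/",   # CC0 public domain dedication
    "http://creativecommons.org/publicdomain/mark/1.0/",   # Public Domain Mark
}

records = [
    {"title": "Example etching", "rights": "http://creativecommons.org/publicdomain/mark/1.0/"},
    {"title": "Example manuscript scan", "rights": "http://creativecommons.org/licenses/by-sa/3.0/"},
]

# Keep only the records an end-user may reuse without restriction.
freely_reusable = [record for record in records if record["rights"] in PUBLIC_DOMAIN_URIS]
for record in freely_reusable:
    print(record["title"])
```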
Relatedly, see Europeana’s white paper no. 2 published last month, The Problem of the Yellow Milkmaid: A Business Model Perspective on Open Metadata (pdf). The white paper “explore[s] in detail the risks and rewards of open data from different perspectives” after “extensive consultation with the heritage sector, including dozens of workshops.” It opens:
“‘The Milkmaid’, one of Johannes Vermeer’s most famous pieces, depicts a scene of a woman quietly pouring milk into a bowl. During a survey the Rijksmuseum discovered that there were over 10,000 copies of the image on the internet—mostly poor, yellowish reproductions. As a result of all of these low-quality copies on the web, according to the Rijksmuseum, “people simply didn’t believe the postcards in our museum shop were showing the original painting. This was the trigger for us to put high-resolution images of the original work with open metadata on the web ourselves. Opening up our data is our best defence against the ‘yellow Milkmaid’.”