This is part three of a five-week series on the Affiliate Team project grants. So far, you’ve heard from our affiliates in Africa and the Arab World. Today, we’re showcasing projects in our Asia-Pacific region, including open data workshops from Japan, a media studies textbook from New Zealand, and software tools and guidelines for public domain materials from Taiwan.
Japan: Workshops and Symposium for Open Data in Japan
by Puneet Kishor (project lead: Tomoaki Watanabe)
Last June, CommonSphere won a grant to hold three workshops and a public symposium on the use of CC tools (the licenses and the CC0 Public Domain Dedication) in the context of open data. The aim of the workshops was to respond to informal input from government and other stakeholders on their implementation of CC tools for open data, a new frontier of openness in Japan in recent years. The team planned to invite involvement from Japanese national and municipal government agencies and Open Knowledge Foundation Japan.
The first event was a workshop at the Information Processing Agency (IPA), an independent administrative agency, discussing open data licensing. The panel also included a member of Open Knowledge Foundation Japan. The whole session was video-recorded by IPA staff and is now available online, along with the presentation materials. The roughly 50 attendees were mostly government officials and agency staff, and an attendee survey indicated a reasonable success.
The second meeting brought together key figures in open data and other relevant initiatives for invitation-only discussions on licensing and other legal issues. CCJP provided logistics support and expertise. The attendees decided that the discussion would remain informal and unpublished.
The third event was a symposium on the implementation issues of open data, including licensing, organized by a third party, Innovation Nippon, a joint project between Google Japan and GLOCOM. Both CCJP and OKF Japan helped with pre-event publicity and provided expertise. It featured and was attended by local government officials and municipal lawmakers, along with business people and academics. The event was videocast, and the archive is already available, along with the slides.
- Political will. However, key politicians are not necessarily expected to support liberal licensing that allows uses which go against public order.
- Evidence, anecdotal or scientific, showing that more liberal licensing results in better outcomes. Such evidence is not abundant, however, and some government agencies have very specific uses in mind that may make them hesitate.
- Evidence showing that the governments of other developed countries are doing things differently from what Japan is doing or planning to do. The UK, France, the US, Australia, and New Zealand all use a CC BY license or one compatible with it, and their licensing all appears to be open in the Open Definition sense. Japan may end up somewhat different.
- Prospective users actively asking for a change.
The challenges faced by the team so far have been 1) the above-mentioned development away from CC tools and 2) the lack of stable access to licensing and editing expertise.
The team is in talks with a local government to hold at least one more workshop on licensing issues as they relate to local governments. The symposium was originally planned to come at the end, but given the developments described above, it may be timed differently.
New Zealand: Media Text Hack
by project lead Matt McGregor
In the middle of 2013, a few New Zealand academics and librarians began to toss around an exciting-but-preposterous-sounding idea: what if they could hack a media studies textbook in a weekend, and then release the results to the world under an open Creative Commons license?
The social benefit – the why – was clear. With textbook prices continuing to rise (and rise) well above inflation, and student debt levels ballooning, the Pacific region desperately needs a new model for producing and distributing educational resources. As Dr Erika Pearson, who led the Media Text Hack project, put it, “Textbooks currently available for New Zealand first year students are often produced overseas, usually the US, and can have a cripplingly high price tag.”
The how was a bit more difficult. Academics and librarians are already rather busy people, and the process of building and managing a team of contributors is labor intensive, with plenty of emailing, documenting, cat-herding, and problem-solving. Thankfully, with the help of a $4000 affiliate grant from Creative Commons, the team could hire a project manager — Bernard Madill — to help build the network of contributors, document progress, and make sure the hack weekend progressed smoothly.
Cut to 16–17 November 2013: the team, largely made up of early-career researchers from across New Zealand and Australia, got together and successfully produced the ‘beta’ version of the textbook. For the last few months, they have been progressively editing and re-editing content to ensure that the textbook is classroom-ready in time for the first down-under semester, which starts in late February.
As the book is shared, edited, and reused by students and teachers across the world, the team will incorporate new ideas, explanations, and examples, producing a text that can be hacked and re-hacked over the years ahead.
This is new territory: while there have been a few textbook hacks in other disciplines – including this inspirational group of Finnish mathematicians – this is, to our knowledge, the first text-hack of this kind in the humanities.
For this reason, the team is putting together a parallel ‘cookbook’, to enable other projects to understand what worked – as well as what did not work – about the project. This will be released in the first half of 2014, and will hopefully inspire other projects around the world to attempt open textbook projects of their own.
The team is hopeful that open textbooks will become more prevalent in public higher education. As University of Otago Copyright Officer Richard White, a core member of the text-hack team, puts it, the open textbook marks a return to the “core principles of academia: sharing knowledge, learning from, and building on the work of others.”
Taiwan: Practices and Depositories for The Public Domain
by project lead Tyng-Ruey Chuang
The project “Practices and Depositories for The Public Domain” (PD4PD) aims to develop software tools and practical guidelines for putting public domain materials online more easily. It is a joint undertaking of the GNU MediaGoblin project, NETivism Ltd., and Creative Commons Taiwan, with the latter coordinating the team effort. The overall project goal is to firm up access to and reuse of the many digital manifestations of public domain cultural works by means of replicable tools, practices, and communities.
Tools: The plan is to extend the functionality of the GNU MediaGoblin software package to make it more suitable for hosting large collections of public domain materials. To this end, new features have been proposed for GNU MediaGoblin to help users self-host their media archives. These features include batch upload of media (with proper metadata annotations), customizable themes and pages, and an “easy install” script (to install GNU MediaGoblin itself).
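To make the batch-upload idea concrete: each media file has to be paired with its metadata annotations somehow. The manifest format MediaGoblin eventually adopts is not specified in this post, so the CSV columns below (filename, title, license) are purely illustrative assumptions; a minimal sketch of generating and reading such a manifest might look like this:

```python
# Hypothetical sketch: pairing media files with metadata for batch upload.
# The column names are illustrative assumptions, not MediaGoblin's actual format.
import csv

def write_manifest(rows, path):
    """Write a CSV manifest mapping each media file to its metadata."""
    fieldnames = ["filename", "title", "license"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
    return path

def read_manifest(path):
    """Read the manifest back as a list of dicts (e.g., for an upload loop)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```

An upload loop could then iterate over `read_manifest(...)` and push each file with its annotations in one pass, which is the convenience the proposed feature is after.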
Practices: The plan is to develop guidelines and how-tos on self-hosting public domain materials. Two versions are planned: one in English and one in the Chinese used in Taiwan. An educational website on the public domain and self-hosting is also planned.
Community: The plan is to reach out to content holders in Taiwan and work with them to release some of their holdings into the public domain. The results will be demonstrated on a website built with the tools mentioned above.
This six-month project started in December 2013 and is planned to finish in June 2014. The GNU MediaGoblin project is focusing on tool development, NETivism Ltd. is concentrating on community outreach, and Creative Commons Taiwan is working on the practical guidelines. Several interns have been recruited to help with the project.
Last month, Creative Commons and several other groups responded to the European Commission’s consultation on licensing, datasets and charging for the re-use of public sector information (PSI). See our response here. There were 355 submissions to the questionnaire (spreadsheet download), apparently from all EU Member States except Cyprus. The Commission hosted a hearing (PDF of meeting minutes) on the issue on 25 November.
This week the Commission released a final summary report (PDF) to the consultation. There were several interesting data points from the report concerning licensing. First, the questionnaire respondents preferred a “light-weight approach, limited to a mere disclaimer or consisting of allowing the reuse of data without any particular restrictions…” (pg5). In our submission, we said that there should be no conditions attached to the re-use of public sector information, with the best case scenario being for public sector information to be in the public domain, exempt from copyright protection altogether by amending national copyright laws.
Second, when asked about licensing conditions that would comply with the PSI Directive’s requirement of ‘not unnecessarily restricting possibilities for re-use’, most respondents indicated support for the requirement to acknowledge the source of the data. In our submission, we said that any condition would be deemed restrictive, since ideally PSI would be removed from the purview of copyright protection through law. At the same time, we recognize that if the Commission were to permit public sector bodies to incorporate a limited set of conditions through licensing, then those bodies should be expected to use standard public licenses aligned with the Open Definition, with a preference for “attribution only” licenses like CC BY.
The report noted that a majority (62%) of respondents believed that greater interoperability would be best achieved through the use of standard licences. And 71% of respondents said that the adoption of Creative Commons licenses would be the best option to promote interoperability. The report states, “this may be interpreted as both a high awareness of the availability of standard licences and a genuine understanding of their role in ensuring licencing interoperability across jurisdictions” (pg7).
The report also mentions the fact that several respondents chose to provide feedback on which Creative Commons licenses would be deemed suitable for PSI re-use. It noted that the most prevalent licenses mentioned were CC0 and CC BY, while a few respondents suggested BY-SA. Others provided a more general answer, such as “the most open CC license could be used…But [the] BEST OPTION is no use of any of license: public domain” (pg9).
The report concludes (pg16):
There is also a widespread acceptance of the need to offer interoperable solutions, both on the technical and licencing levels. And even if opinions differ as to the exact shape of re-use conditions, the answers show that a general trend towards a more open and interoperable licencing system in Europe, largely based on available standard licences is gaining ground.
Paleontology, the description and biological classification of fossils, has spawned countless field expeditions, museum trips, and hundreds of thousands of publications. The construction of databases that aggregate these descriptive data on fossils in a way that allows large-scale, synthetic questions to be addressed, such as the long-term history of biodiversity and rates of biological extinction and origination during global environment change, has greatly expanded the intellectual reach of paleontology and has led to many important new insights into macroevolutionary and macroecological processes.
One of the largest compendia of fossil data assembled to date is the Paleobiology Database (PBDB), founded in 1998 by John Alroy and Charles Marshall. These two pioneers assembled a small team of scientists who were motivated to generate the first geographically explicit, sampling-standardized global biodiversity curve. The PBDB has since grown to include an international group of more than 150 contributing scientists with diverse research agendas. Collectively, this body of volunteer and grant-supported investigators has spent more than nine continuous person-years entering more than 280,000 taxonomic names, nearly 500,000 published opinions on the status and classification of those names, and over 1.1 million taxonomic occurrences. Some PBDB data derive from the original fieldwork and specimen-based studies of the contributors, but the majority of the data were extracted from the text, figures, and tables of over 48,000 published papers, books, and monographs spanning the range of topics covered by paleontology. Their efforts have been well rewarded by enabling new science. As of December 2013, the PBDB had produced almost two hundred official peer-reviewed publications, all of which address scientific questions that cannot be adequately answered without such a database.
Image: Ptyagnostus atavus or Leiopyge calva Zone (Cambrian of the United States). PaleoDB collection 262: authorized by Jack Sepkoski, entered by Mike Sommers on 20.11.1998.
Shift to CC BY
From its inception, the paleontologists who have invested the most effort in entering data have made the decisions about data management and access policies, which ultimately raises the important questions of proper licensing and citation. In the first application of the PBDB licensing policy, individual contributors chose their own CC license for each fossil collection record. As a result, there were three kinds of contributors: those who didn’t know what to do, didn’t care, or didn’t know about the new policy requiring them to specify how existing collections should be licensed (55% of the data); those who selected the most restrictive option available to them (34% of the data); and those who selected the least restrictive option available to them (10% of the data).
This drew a mostly negative response via social media and other outlets, partly because of the increased attention the database was receiving during a leadership and governance transition. Naturally, the governance group responded to the community feedback, but the first actual action came from individual contributors. Many who either didn’t know about CC licenses or hadn’t fully thought through their meaning and implications changed their own licenses, and always from a more restrictive license to the least restrictive option available to them: CC BY. That wave of individual choices immediately shifted the balance for records in the database. At that point only one contributor retained a restrictive license, and the governance group quickly moved to adopt a single unifying license for the database: CC BY. All new records are now explicitly CC BY as part of database policy, although individual contributors still have the option of placing a moratorium on the public release of their own new data to protect their individual scientific interests.
Future of PBDB
In addition to being a scientific asset to the field of paleontology, the PBDB and other databases like it provide an additional means of participating in rapidly emerging initiatives and developments in cyberinfrastructure. To increase its reach in this area, the PBDB now has an Application Programming Interface (API), which makes data more easily and transparently accessible, both to individual researchers and to applications such as the open source web application PBDB Navigator and the Mancos iOS mobile application. Both applications are built on the public API and are designed to make the history of life and environment documented by the PBDB more discoverable. These new modes of interactivity and visualization highlight unintended, but potentially useful, aspects of the PBDB. The API has also facilitated a loosely coupled integration with related but independently managed biological and paleontological database initiatives and online resources, such as the Neotoma Paleoecology Database, Morphobank, and the Encyclopedia of Life. It can likewise be harnessed by geoscientists outside of paleontology, facilitating the integration of paleontological data with diverse types of data and model output, such as paleogeographic plate rotations and geophysical models in GPlates. The liberal CC BY license ensures the interoperability and data access necessary to facilitate fundamentally new science, and it expands the reach of paleontology to a broader community of researchers and educators than is possible via any single website or application.
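To give a flavor of how the public API opens the database to scripts as well as applications, here is a minimal sketch of querying it for fossil occurrences. The endpoint and parameter names are taken from PBDB’s public data service documentation as I understand it and may have changed; verify against the current docs before relying on them.

```python
# Minimal sketch of querying the PBDB data service for fossil occurrences.
# Endpoint and parameter names assumed from the public API docs; verify them.
from urllib.parse import urlencode
from urllib.request import urlopen
import json

BASE = "https://paleobiodb.org/data1.2"

def occurrences_url(taxon, interval=None, limit=10):
    """Build a URL for the occurrence-listing endpoint."""
    params = {"base_name": taxon, "limit": limit}
    if interval:
        params["interval"] = interval
    return f"{BASE}/occs/list.json?{urlencode(params)}"

def fetch_occurrences(taxon, **kw):
    """Fetch and decode occurrence records (requires network access)."""
    with urlopen(occurrences_url(taxon, **kw)) as resp:
        return json.load(resp)["records"]
```

Because the data are CC BY, a script like this can feed occurrence records straight into external tools and analyses, which is exactly the kind of loosely coupled integration described above.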
A few weeks ago, CC co-hosted an open education meetup in London with P2PU, the Open Knowledge Foundation (OKFN), and FLOSS Manuals Foundation. We also led or participated in sessions and tracks on open science, makes for cultural archives, collaborations across the open space, and open education data at the Mozilla Festival immediately following the meetup. Several interesting projects have arisen from both the meetup and the sessions, so we thought it worthwhile to mention them here in case others would like to get involved.
Hit the Road Map: A Human Timeline of the Open Education Space
In addition to networking and sharing our common open education interests, participants of the Open Ed Meetup at the William Goodenough house collectively built a timeline of events that they felt marked important (and personal) milestones in the open education space, from the beginning of the Open University in 1969 to Lessig’s countersuit against Liberation Music this year. The timeline was a great collaborative exercise for the group, and one that we hope is only beginning. As Marieke from the OKFN writes in her post,
“…the plan is to digitise what we have by moving all the ideas in to Google Docs and then create a TimeMapper of them. This may form part of the Open Education handbook. At that point we will be able to share the document with you so you can add more information, correct the date and add in your own ideas. We may even try to run more open education timeline events.”
In fact, CC affiliates in Europe will be co-hosting the second Open Education Handbook booksprint with the OKFN and Wikimedia in Berlin as a result!
Getting hands-on with tools on the web for Open Science
by Billy Meinke
In another team-up with the Open Knowledge Foundation (OKFN), we ran a session investigating tools on the web that help make science more open. Hinging on the theme of alternative ways (altmetrics) to measure scholarly impact, collaborators joined us in the session and got hands-on with tools that show how publications and other research outputs are talked about and shared on the web. To help build content for lessons linked to the Open Science course in the School of Open, participants tested a handful of free tools to see what they could measure and how usable they were, and considered ways to share this with others who aren’t familiar with altmetrics. We will be organizing the content over the next few weeks and offering the altmetrics lesson as a standalone exercise once it’s complete. For more information about how the session went, see this blog post.
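As an illustration of the kind of tool the session explored, attention data for a publication can be looked up by DOI through the free Altmetric API. The endpoint shape below is assumed from Altmetric’s public documentation; rate limits and terms of use apply, so treat this as a sketch rather than a reference client.

```python
# Sketch: looking up web attention for a publication by DOI via the
# Altmetric API. Endpoint shape assumed from Altmetric's public docs.
from urllib.request import urlopen
import json

def altmetric_url(doi):
    """Build the per-DOI lookup URL."""
    return f"https://api.altmetric.com/v1/doi/{doi}"

def attention(doi):
    """Fetch the attention record for a DOI (requires network access);
    the response includes counts of mentions across the web."""
    with urlopen(altmetric_url(doi)) as resp:
        return json.load(resp)
```

A few lines like these are enough to compare how different papers are being discussed, which is the hands-on exercise the lesson aims to reproduce.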
Collaborations across the Open Space
We also participated in a session with Wikimedia, OKFN, and other orgs to talk about how we could better collaborate and share news among our organizations so we don’t keep reinventing the wheel. I won’t go into detail here, as the wiki session writeup does it much better, and has continued to grow since the festival. For example, something as simple as a blog aggregator for all “open” related news would help those working in this space tremendously. To join our efforts, head over to the wiki and add your thoughts and be notified of follow-up meetings.
Digital Self Preservation Toolkit
One neat thing to come out of this year’s Mozfest was the beginnings of a Digital Self Preservation Toolkit, exploring what happens to your body of creative, educational, or scientific work when you die. Some questions we asked and discussed were: In your country, what happens to your work when you die? What steps can you take to ensure it is preserved? How would you want it shared, and who would you want to own it? Our initial aim was to develop a set of tools and tips to help people think through how they might want to release their work upon death, building on an idea that the Question Copyright folks had last year around a free culture trust. Skirting the technical and legal issues for the time being, we came up with a prototype IP donor badge that creators might use to signify their intent, a concept form that they would fill out, and a mock-up website where such a toolkit might reside. We are now continuing our efforts in collaboration with folks from numerous organizations interested in the same questions, and you can join us to move the project forward at the Free Culture Trust wiki.
OER Research Hub’s Open Education Data Detective
Lastly, we’d like to highlight our collaboration with the OER Research Hub, who held a “scrum” on visualizing open education data called the Open Ed Data Detective. Participants experimented with open education data that the OER Research Hub made available, including data on School of Open courses.
Seal Of The Executive Office Of The President / Public Domain
Yesterday President Barack Obama issued an Executive Order requiring federal government information to be open and machine-readable by default. This Order is the latest in a series of actions going back to 2009 in support of increasing access to and transparency of government information.
In addition to the Executive Order, the White House released a Memorandum (PDF) explaining how federal government agencies will comply with the new open data policy.
This Memorandum requires agencies to collect or create information in a way that supports downstream information processing and dissemination activities. This includes using machine-readable and open formats, data standards, and common core and extensible metadata for all new information creation and collection efforts. It also requires agencies to ensure information stewardship through the use of open licenses and to review information for privacy, confidentiality, security, or other restrictions before release.
It provides a forward-thinking set of guidelines for open data to be released by U.S. federal agencies:
Open data: For the purposes of this Memorandum, the term “open data” refers to publicly available data structured in a way that enables the data to be fully discoverable and usable by end users. In general, open data will be consistent with the following principles:
- Public. Consistent with OMB’s Open Government Directive, agencies must adopt a presumption in favor of openness to the extent permitted by law and subject to privacy, confidentiality, security, or other valid restrictions.
- Accessible. Open data are made available in convenient, modifiable, and open formats that can be retrieved, downloaded, indexed, and searched. Formats should be machine-readable (i.e., data are reasonably structured to allow automated processing). Open data structures do not discriminate against any person or group of persons and should be made available to the widest range of users for the widest range of purposes, often by providing the data in multiple formats for consumption. To the extent permitted by law, these formats should be non-proprietary, publicly available, and no restrictions should be placed upon their use.
- Described. Open data are described fully so that consumers of the data have sufficient information to understand their strengths, weaknesses, analytical limitations, security requirements, as well as how to process them. This involves the use of robust, granular metadata (i.e., fields or elements that describe data), thorough documentation of data elements, data dictionaries, and, if applicable, additional descriptions of the purpose of the collection, the population of interest, the characteristics of the sample, and the method of data collection.
- Reusable. Open data are made available under an open license that places no restrictions on their use.
- Complete. Open data are published in primary forms (i.e., as collected at the source), with the finest possible level of granularity that is practicable and permitted by law and other requirements. Derived or aggregate open data should also be published but must reference the primary data.
- Timely. Open data are made available as quickly as necessary to preserve the value of the data. Frequency of release should account for key audiences and downstream needs.
- Managed Post-Release. A point of contact must be designated to assist with data use and to respond to complaints about adherence to these open data requirements.
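The principles above translate naturally into practice as machine-readable catalog metadata accompanying each dataset. The record below is a hypothetical illustration: the field names are loosely modeled on common open-data metadata schemas and are not prescribed by the Memorandum itself.

```python
# Hypothetical machine-readable catalog record illustrating the principles
# above (public, accessible, described, reusable, timely, managed post-release).
# Field names are illustrative, not mandated by the Memorandum.
import json

record = {
    "title": "Agency Widget Statistics, 2013",
    "description": "Monthly counts of widgets processed, by state.",
    "accessLevel": "public",                     # presumption of openness
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",
    "contactPoint": "open-data@agency.example.gov",  # managed post-release
    "modified": "2013-05-09",                    # timeliness
    "distribution": [                            # multiple open formats
        {"format": "CSV", "downloadURL": "https://agency.example.gov/widgets.csv"},
        {"format": "JSON", "downloadURL": "https://agency.example.gov/widgets.json"},
    ],
}

print(json.dumps(record, indent=2))
```

Because the record itself is structured data, catalog software and end users alike can discover, index, and filter datasets automatically, which is the point of the “fully discoverable and usable” language above.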
The Memorandum provides some more information about how U.S. government information will be made reusable:
Ensure information stewardship through the use of open licenses – Agencies must apply open licenses, in consultation with the best practices found in Project Open Data, to information as it is collected or created so that if data are made public there are no restrictions on copying, publishing, distributing, transmitting, adapting, or otherwise using the information for non-commercial or for commercial purposes.
Depending on the exact implementation details, this could be a fantastic move that would remove any legal confusion about using federal government data. By leveraging open licenses, the U.S. federal government would be doing reusers a great service by communicating the rights available to them in advance. And if the U.S. truly wishes to make federal government information available without restriction, it could consider using a tool such as the CC0 Public Domain Dedication. CC0 is used by many data providers to place open data directly in the public domain. We’ve already suggested this (PDF) as an option for sharing federally funded research data.
The White House should be commended for taking another positive step forward to ensure that U.S. government data is made legally and technically accessible and usable.
Celebrating Open Data
Open Data Day 2013 can be described as a success. Why? Because hundreds of people participated in more than 100 events across six continents, celebrating open data and all that we can do with it. Here at CC, we planned and executed a community-supported event to build open learning resources around the topic of Open Science, done in a hackathon-style sprint that gathered people with diverse backgrounds and experience levels. An undergraduate student and a post-doc researcher, both from Stanford. An instructional designer from Los Angeles and an associate professor from Auburn University, plus a handful more of very talented people. Oh, and a mother and high school-aged daughter duo who simply wanted to see what “open” is about. We all connected to help build an open course to teach others about Open Science. Here’s how we did it.
Open Content for Learning
It’s worth mentioning that the course materials produced during the sprint will be openly licensed CC BY and shared so that their benefit to Open Education and Open Science is not restricted by legal boundaries. The material is being curated and will undergo a review process over the next couple of weeks before being ported to the School of Open, a collaborative project by Creative Commons, P2PU, and a strong volunteer community of “open” experts and organizations. Though fitting the content to P2PU’s online course platform was in the back of our minds, time and consideration were largely placed on identifying important ideas that explain what Open Access, Open Research, and Open Data mean for Open Science, and how we can engage more “young scientists” (an ever-broadening term) in the ways of Open.
The Net Works Effect*
Adding a layer on top of the open content itself, which is elastic in nature, our approach to this hackathon-style event focused on being very lean: the type of event that can be run by anyone, anywhere, with very few resources. We created a Google Drive folder and a set of publicly editable documents to collect openly licensed resources, map out a tentative module/lesson plan, coordinate communications between participants, and generally provide a single place to collaborate on Open Science learning materials. In connection with other event organizers at the OKFN and PLOS, mailing lists, Twitter hashtags, and other channels were established to provide a support network for those organizing events and those interested in participating in Open Data Day on some level. David Eaves, Rufus Pollock, Ross Mounce, and many others were loud and clear on the Open Data Day mailing list, making sure news about each event was passed around.
Before the event, a registration page was created for the course sprint. We offered a handful of in-person tickets for folks to come down to our office in Mountain View, as well as a number of remote participant tickets for those who were in different geographical locations. Google Hangout “rooms” were set up on laptop computers placed in physical conference rooms at the CC HQ, allowing remote participants to work in real-time with persons on the ground. To see a more detailed description of the day’s event, see the schedule document here.
So what did we make? The sprinters involved in the project collected and organized resources that explain common aspects of Open Science. The main sections (access, methods, data) were helpful in searching for content, but there was a great deal of overlap between sections, which highlighted the relationships between them. Beyond the collection of resources, sets of tasks were built that are meant to guide learners out beyond the course and into the communities of Open Science, interacting with the ideas, technical systems, and people who are opening up science. The Introduction to Open Science course on P2PU is still in a lightly-framed state, but the plan is to include the course in the launch of the School of Open during Open Education Week, March 11-15. If you’re interested in helping make this transition or in building or reviewing other courses that we call “open,” come introduce yourself in the School of Open Google Group. Or check out what else is happening on P2PU.
Beyond the course itself, we’re going to take a look at the sprint process we used, and work out some of the kinks. This rapid open-content creation technique is manageable, low-cost, and builds the Commons. There’s enough openly-licensed content existing on the web to produce a range of learning experiences, so now it seems that it’s a matter of developing open technology tools to the point where we can build education on the web together, easily. For more information about this and other Open Education projects being worked on by Creative Commons, see this page.
We Got Together for Open
Thanks to those who were able to participate in the Open Science course, as well as those who contributed the planning documents leading up to the event. We’ve done well.
PLOS Sci-Ed Blog, Guest Post: Open Data Day, Course Sprints, and Hackathons!
David Eaves’ Blog, International #OpenDataDay: Now at 90 Cities (and… the White House)
Debbie Morrison’s Blog, A Course Design ‘Sprint’: My Experience in an Education Hackathon
Also: The Flickr album from the event can be found here.
*This phrase was coined by P. Kishor here, describing the interconnectedness of Open Data Day events.
On this 10th anniversary of CC, there’s much to celebrate: Creative Commons licenses and tools have been embraced by millions of photographers, musicians, videographers, bloggers, and others sharing countless numbers of creative works freely online. One area of growth in use of CC licenses and public domain tools is for government works. Looking into the future, government adoption of Creative Commons may prove to be one of the most significant movements. As David Bollier put it well, “Governments are coming to realize that they are one of the primary stewards of intellectual property, and that the wide dissemination of their work—statistics, research, reports, legislation, judicial decisions—can stimulate economic innovation, scientific progress, education, and cultural development.” If governments around the world are going to unleash the power of hundreds of billions of dollars of publicly funded education, research and scientific resources, we need broad adoption of open policies aligned with the belief that the public should have access to the resources they paid for. At a fundamental level, “all publicly funded resources [should be] openly licensed resources.”
CC licenses and tools have been implemented by government entities and public sector bodies around the world. And over the last few years, governments have increasingly aligned with the principle that the public should have access to the materials that it pays for. Funding mandates, which require that grantees release content produced with grant funds under an open license, have been an increasingly common way for governments to support openness. Legislation involving the open licensing of publicly funded educational materials has been passed in Brazil, Poland, the United States, and Canada. The UK has championed an open access policy for publicly funded research under the Creative Commons Attribution (CC BY) license. Governments in Australia and New Zealand have opted for comprehensive open licensing policies for all government-produced works, by default releasing public information and data under CC BY. The Dutch government has taken this one step further, opting to release government information directly into the public domain using the CC0 Public Domain Dedication.
In addition to governments, other publicly minded institutions like philanthropic foundations and intergovernmental organizations are supporting open licensing. Several foundations, including the William and Flora Hewlett Foundation, the Open Society Foundations, and the Bill & Melinda Gates Foundation, already require their grantees to release content they build with grant money under open licenses, and others are considering similar requirements. CC continues to explore how to evaluate current copyright policies within the foundation world and suggest how foundations (and their grantees) can benefit from open licensing for their grant-funded materials. Intergovernmental organizations like the Commonwealth of Learning and the World Bank have adopted open licensing policies to share their publications too.
Open advocates – whether it be in support of open sharing of publicly funded educational materials, open access to scientific research articles, access to a huge trove of cultural heritage resources from libraries and museums, or open licensing for public sector information and government datasets – have been increasingly active over the last few years, particularly in working to educate policymakers about the importance and benefits of open licensing. These efforts include the development of declarations such as the Budapest Open Access Initiative, Cape Town and Paris Declarations on Open Educational Resources, the Washington Declaration on Intellectual Property and the Public Interest, the Panton Principles, and many others. Advocates have been key in communicating the need for governments to consider open licensing, whether it be for federal agencies, governing bodies like the European Commission, or through multilateral negotiations such as WIPO. And the grassroots open community has been extremely active in raising awareness of open licensing, whether it be through the tireless work of CC Affiliates, the broad network of open data activists from the Open Knowledge Foundation, legal experts championing Open Government Data Principles, and persons participating in events from Open Access Week to Open Education Week to Public Domain Day. All of these actions have rallied around the common theme that governments and public bodies should release content they create or fund under open licenses, for the benefit of all.
Since the beginning of Creative Commons, governments and public sector bodies have leveraged CC licenses and public domain tools to share their data, publicly funded research, educational and cultural content, and other digital materials. Governments are increasingly leveraging CC licenses as part of their strategy to proactively share resources, promote effective spending, and champion innovation. A massive amount of work is ahead, but with a committed community of advocates, interested governmental departments, and open-minded policymakers, we can work together toward a close integration of open licensing inside the public sector. If we do so, governments can better support their populations with the information they need, increase the effectiveness of the public’s investment, and contribute to a true global commons.
In the past few weeks, the Foundation Center and the philanthropic world have taken two big steps forward in transparency. First, 15 of the nation’s largest foundations joined the “Reporting Commitment,” agreeing to release grant information regularly through Foundation Center’s Glasspockets repository. Then last week, the Foundation Center relaunched IssueLab, an extensive repository of third-sector research. IssueLab’s mission is to “gather, index, and share the collective intelligence of the social sector” more effectively.
All of the IssueLab metadata is licensed under CC BY-NC-SA, and all of the content is accessible (for reading, if not necessarily for other uses) for free. Everything released to Glasspockets under the Reporting Commitment is licensed under CC BY-NC.
Taken together, these initiatives present some interesting possibilities for the future of open data in the foundation space. Foundation Center president Bradford K. Smith discussed the implications of both initiatives in a blog post:
If you think foundations are only ATM machines and nonprofits just service providers, think again. With the launch of IssueLab, there is one place you can go to find more than eleven thousand knowledge products published, funded, produced, and/or generated by foundations and nonprofits in the U.S. and around the globe.
Last month, the Foundation Center announced the Reporting Commitment, an effort by fifteen of America’s largest philanthropic foundations to make their grants data — who they give money to, how much, where, and for what purpose — available in an open, machine-readable format. Starting today, through IssueLab, the social sector can also access what it knows as a result of that funding. A service of the Foundation Center, IssueLab gathers, indexes, and shares the sector’s collective intelligence on a free, open, and searchable platform, and encourages users to share, copy, distribute, and even adapt the work. It’s a big step for philanthropy and “open knowledge.”
Smith went on to explain why it’s important that these resources aren’t just freely available; they’re openly licensed too:
Free is good, but IssueLab promotes openness in a number of other ways. First, the metadata — the abstracts and “tags” developed for all reports in the collection — is available under a Creative Commons license and can be grabbed and/or remixed by anyone as long as they use it for non-commercial purposes. Second, only work that is available for free is included in the IssueLab collection. These are public “assets,” in that the organizations which produced them already have tax-exempt status and/or have received government funding, and they should be easy for the public to find. Sorry but Kardashian Konfidential will not be found on IssueLab. Third, IssueLab itself is an open-source platform whose underlying codebase/framework is continually being improved by a community of developers. And fourth, our own developers embrace the Open Archives Initiative (OAI), which develops and promotes interoperability standards to facilitate the efficient dissemination of online content.
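The OAI interoperability Smith mentions is a concrete protocol: OAI-PMH repositories answer simple HTTP requests with XML records. The sketch below is only illustrative — the endpoint URL is hypothetical and the response is a canned sample in the shape a compliant repository returns — but it shows how a harvester builds a ListRecords request and pulls Dublin Core titles out of the result:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# Dublin Core namespace used by the common oai_dc metadata format.
DC_NS = "{http://purl.org/dc/elements/1.1/}"

def list_records_url(endpoint, metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH ListRecords request URL (endpoint is hypothetical)."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec
    return endpoint + "?" + urlencode(params)

def titles(oai_xml):
    """Extract Dublin Core titles from an OAI-PMH response document."""
    root = ET.fromstring(oai_xml)
    return [el.text for el in root.iter(DC_NS + "title")]

# A canned response standing in for what a real repository would send back.
sample = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record><metadata>
      <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                 xmlns:dc="http://purl.org/dc/elements/1.1/">
        <dc:title>Grantmaking for Impact</dc:title>
      </oai_dc:dc>
    </metadata></record>
  </ListRecords>
</OAI-PMH>"""

print(list_records_url("https://example.org/oai"))
print(titles(sample))
```

Because the protocol is this simple, any aggregator can pick up openly licensed metadata from repositories like IssueLab without custom integration work.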
Here at Creative Commons, we’re big proponents of foundations and other institutions sharing their data — and the works they produce or fund — under an open license. It makes sense for foundations to reciprocate the public’s trust by showing how philanthropic dollars have been spent, and the foundations that join in the Reporting Commitment make that information available much sooner and much more easily than it is under the federally required information returns. Through Glasspockets, the public can see and compare the activities of the participating foundations. Private foundations are tax-exempt because they are dedicated to the public benefit; those that share their data and research in ways that invite the reuse and contributions of others add a valuable new dimension to their public service.
We’re psyched to be a part of OKFestival: Open Knowledge in Action. The OKFestival takes place September 17-22, 2012 in Helsinki, Finland, and features “a series of hands-on workshops, talks, hackathons, meetings and sprints” exploring a variety of areas including open development, open cultural heritage, and gender and diversity in openness. You can buy tickets to the festival for any number of days until September 16 at http://okfestival.org/early-bird-okfest-tickets/. The OKFestival website has all the details, including the preliminary schedule.
We are particularly interested in and helped to shape the Open Research and Education topic stream, where we are leading an “Open Peer Learning” workshop on Wednesday (Sept 19) from 11:30am to 3:30pm. For the workshop the School of Open (co-led by Creative Commons and P2PU) is combining forces with the OKFN’s School of Data to explore, test and develop learning challenges around open tools and practices in data, research, and education. Participation in the workshop is free (you don’t even have to buy a festival ticket), but space is limited, so RSVP at: http://peerlearningworkshop.eventbrite.com/
The workshop will be held in this awesome space, reserved for four HACK workshops:
For those of you able to come to Helsinki, look out for our CC staff reps, Jessica Coates and Timothy Vollmer, along with many of our European affiliates who will be holding a regional meeting on Day four of the fest.
For the rest of you, you can still participate in helping to build initiatives like the School of Open from wherever you are by visiting http://schoolofopen.org/ and signing up for the mailing lists there.
Some of these developments may be dated by a month or more, but we want to make sure they are on your radar by pointing them out here.
Several open data portals have launched, including a Brazilian Open Data portal powered by the open-source data cataloguing software CKAN (run by the Open Knowledge Foundation – OKFN). The Ministry of Planning in Brazil worked with the OKFN to develop the portal, cultivating citizen participation through an open and transparent development process. Furthermore, the portal itself carries a default license of CC BY-SA. Since its May 4 launch, the portal has grown and now hosts 79 data sets and 893 resources. As noted on the OKFN blog, “the portal is part of a larger project called the National Infrastructure Open Data, or INDA. The general idea of INDA is to establish technical standards for open data, promote training and support public bodies in the task of publishing open data. This entire process is done through intra-government cooperation and cooperation between government and citizens, always aiming to achieve a real platform for open government.”
You should also take note of the Open GLAM data portal. This portal also runs on CKAN and is a hub for open data sets from GLAM institutions, aka Galleries, Libraries, Archives, and Museums. The datasets are licensed under various open licenses, and some with no rights attached thanks to the use of the CC0 public domain waiver.
In addition to open data portals, open data initiatives like the School of Data and the Open Data Institute are taking off. The School of Data is a collaboration between the OKFN and the Peer 2 Peer University (P2PU) to “create a set of courses for people to learn how to do interesting things with data, from beginners to experts.” In late May, the School of Data held a week-long kick-off sprint in Berlin with a virtual component, which I participated in by helping to start an open data challenge with virtual colleagues. The challenge is still in development, and once completed it will be a part of the School of Open as well as the School of Data. You can help to build it at the P2PU platform.
The kick-off yielded a great foundation for many other data tracks as part of the School of Data, which you can read about here.
The Open Data Institute is an initiative by the UK government to “incubate, nurture and mentor new businesses exploiting Open Data for economic growth” and to “promote innovation driven by the UK Government Open Data policy.” £10m will be invested over five years by the Technology Strategy Board, a non-departmental public body. The UK government has published its implementation plan as a PDF online. You can learn more in The Guardian’s article from last May.
The data-driven economy is also a hot topic within the EU, with the emergence of a data session at the European Commission’s 2nd Digital Agenda Assembly taking place today and tomorrow. The workshop will “explore the potential of data, some of the most promising economic and business aspects involved, and discuss how policy for data and our investment in R&D can better address the challenges of businesses and the public sector and further support innovative business development.”
Lastly, putting all the current activity around data into perspective is a thoughtful article by the OKFN’s Jonathan Gray on “What data can and cannot do.” The Guardian article reinforces the point that data, while valuable, is not very effective when divorced from context and interpretation. He encourages us to “cultivate a more critical literacy” towards data:
“Data can be an immensely powerful asset, if used in the right way. But as users and advocates of this potent and intoxicating stuff we should strive to keep our expectations of it proportional to the opportunity it represents.”
Essentially, opening up data is just the first step — and arguably, a necessary step to ensuring that data can be reused, contextualized, and interpreted in meaningful ways.
To learn more about how CC tools may be applied to data, see our landing page and FAQ on data.