ccREL

Educational Search and DiscoverEd

Alex Kozak, June 25th, 2010

Last week in the vuDAT building at Michigan State University, a group of developers interested in educational search and discovery got together to contribute code (in what's commonly called a code sprint) to Creative Commons' DiscoverEd project. Readers interested in the technical details about our work last week can find daily posts on CC Labs: Day 1, Day 2, and Day 3.

DiscoverEd is a semantically enhanced search prototype. What does that mean in practice? Let's say you're a ninth grade biology teacher interested in finding educational resources about cell organelles to hand out to students. How would you go about that?

If you're web savvy, you might open up a search engine like Google, Yahoo, or Bing and search for "cell organelles". You'd find a lot of resources (Google alone finds over 11 million pages!), but which do you choose to investigate further? It's time-consuming and difficult to sift through search results for resources with the properties you care about: being appropriate for 9th graders, say, or being under a CC license that allows you to modify the resource and share changes, or being written in English or Spanish. As you throw up your hands in dismay, you might think "Can't someone do this for me?!"

DiscoverEd is an educational search prototype that does exactly that, by searching metadata about educational resources. It provides a way to sift through search results based on specific qualities of each resource, such as its license, education level, or subject.

Compare search results for "cell organelles" in Google, Yahoo, Bing, and now in DiscoverEd. Finding CC licensed educational resources is noticeably easier thanks to the metadata accompanying each result.

While most search engines rely solely on algorithmic analyses of resources, DiscoverEd can incorporate data provided by the resource publisher or curator. As long as curators and publishers follow some basic standards, metadata can be consumed and displayed by DiscoverEd. These formats (e.g. RDFa) allow otherwise unrelated educational projects, curators, and repositories to express facts about their resources in the same format so that tools (like DiscoverEd) can use that data for useful purposes (like search and discovery).
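
To make this concrete, here is a minimal sketch of the kind of RDFa a publisher might embed in a resource page. The URL, the Grade 9 education level, and the assumption that DiscoverEd indexes exactly these properties are illustrative; the cc: and dct: vocabularies themselves follow ccREL and Dublin Core conventions.

```html
<!-- Hypothetical resource page marked up with RDFa. The URL, the
     "Grade 9" value, and the exact properties DiscoverEd indexes are
     assumptions for illustration; cc: and dct: follow ccREL and
     Dublin Core conventions. -->
<div xmlns:cc="http://creativecommons.org/ns#"
     xmlns:dct="http://purl.org/dc/terms/"
     about="http://example.org/cell-organelles">
  <h1 property="dct:title">Introduction to Cell Organelles</h1>
  <p>Subject: <span property="dct:subject">Biology</span></p>
  <p>Level: <span property="dct:educationLevel">Grade 9</span></p>
  This work is licensed under a
  <a rel="license" href="http://creativecommons.org/licenses/by-sa/3.0/">
    Creative Commons Attribution-ShareAlike 3.0 License</a>.
</div>
```

A crawler that understands RDFa can extract these facts as structured triples and expose them as search facets, which is essentially what DiscoverEd does with curator-supplied metadata.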

Creative Commons believes an open web following open standards leads to better outcomes for everyone. Our vision is that everyone following interoperable standards, whether legal standards like the CC licenses or technical standards like ccREL and RDFa, will produce a platform that enables social and technical innovation in the same way that HTTP and HTML did. DiscoverEd is a project that allows us to explore ways to improve search for OER and simultaneously demonstrate the utility of structured data.

Continued development of DiscoverEd is supported by the AgShare project, funded by a grant from The Gates Foundation. Creative Commons thanks MSU, vuDAT, MSU Global, and the participants in the DiscoverEd sprint last week for their support.


Back to School Conclusion: The Open Trajectory of Learning

Alex Kozak, September 4th, 2009

As students around the world return to school, ccLearn blogs about the evolving education landscape, ongoing projects to improve educational resources, education technology, and the future of education. Browse the “Back to School” tag for more posts in this series.

Today's predictions about the future of learning might eventually seem as preposterous as early 20th century predictions of flying cars and robot butlers. But what we sometimes forget is that our vision for the future today will ultimately shape the outcomes of tomorrow: not in a causal, deterministic way, but in an enabling way. By sharing our hopes and dreams for an open future for learning, we foster an environment in which it can happen.

At ccLearn, we strongly believe that the future for education and learning is one that includes technical, legal, and social openness.

The spaces in which teaching and learning occur are increasingly moving towards technical openness by running open source software, integrating machine-readable metadata, and adopting open formats. Schools, colleges, and universities involved in open courseware and wikis, along with other organizations engaged in online knowledge delivery, are beginning to embrace RDFa and metadata standards like ccREL, open video codecs, open document formats, and open software solutions. More open technology continues to be developed, and there is no indication that this will stop or slow down.

Members of the global education community have been moving towards legal openness by converging on Creative Commons licenses that allow sustainable redistribution and remixing as the de facto licensing standard. This phenomenon is international: Creative Commons has been ported to 51 countries (with 7 more in progress), and CC licensed educational resources are being used all over the world. Although ccLearn found in our recent report "What status for 'open'?" that some institutions have some homework to do on what it means to be open, we are well on the road towards a robust and scalable legal standard for open educational resources.

Perhaps most powerfully, we are beginning to see a move towards social openness in educational institutions, in the prototyping of new models for learning involvement, organization, and assessment that maximize the availability of learning to all people, everywhere. By leveraging the power of online organization and open content, often coupled with a willingness to re-conceptualize what it means to be an educator, new possibilities for learning will emerge, leading to a more educated world.

We can’t fully predict today what kinds of practices, pedagogies, and technologies open education will enable tomorrow. But we are in a position to claim that our goal for an open future enables the creation of these new and better practices, technologies, and social structures.

ccLearn would like to thank The William and Flora Hewlett Foundation for their continued support of open education, the Creative Commons staff who make our work possible, and all of you for your continued support of a truly global commons. We hope that you all continue to contribute to open source learning software, embrace open formats, license your educational works with Creative Commons licenses, and get engaged in the world movement towards an open future for learning.


In the United States students are heading back to school this month, and in that context ccLearn has been publishing a series of posts, some of which have already been discussed in Spanish. I think it is worth commenting on and translating the relevant parts:

Back to school, conclusions: the open path for learning

The closing entry in ccLearn's back-to-school series is once again by Alex Kozak, who explains how ccLearn firmly believes in a future for education and learning shaped by the idea of openness in the technical, the legal, and the social.

For Kozak, the spaces in which teaching and learning take place are migrating to open standards through the use of open source software, the integration of machine-readable metadata, and the adoption of open formats. Schools, universities, and higher-education institutions in general that develop open courseware and wikis, along with other organizations involved in making knowledge available through the web, are beginning to adopt RDFa and metadata standards like ccREL, open video codecs, open document formats, and free or open source software solutions.

On the other hand, the global education community is moving towards legal openness; its adoption of Creative Commons licenses as a standard is converging to allow the redistribution and remixing of resources. This is an international phenomenon: Creative Commons has been adapted to the legal systems of 51 countries (with 7 more in progress), and CC licensed educational resources are used all over the world. It should be kept in mind that ccLearn found in its report "What status for 'open'?" that some institutions still need to review what "open" means, but the road towards openness standards for educational resources is underway.

What Kozak finds most striking is that greater social openness is beginning to appear in the new educational models that institutions are piloting. In how they approach the learning process, its organization, and its assessment, these pilots are maximizing the idea of making learning accessible to anyone, anywhere. Kozak believes that by leveraging the capacity of online organizations and open content, together with an increasingly frequent willingness to re-conceptualize what it means to be an educator, new possibilities for learning will emerge, leading us to a more educated world.

For Kozak, although we cannot predict the practices, pedagogies, and technologies that open education will favor tomorrow, we can say that the goal of an open future will enable the creation of those new practices, technologies, and social structures.

A brief comment from my own perspective

Although in regions like Latin America we lack the data to take as given many of Kozak's claims about the English-speaking world, the feeling in the air is that many of his conclusions may well extend to our reality.

In fact, some of the other entries in ccLearn's back-to-school series described concrete projects showcasing open projects and practices (Vital Signs and the textbook case). I think we should make visible some of the many initiatives happening in our region, so we can get to know them and learn from them... I hope to do so very soon! (If you have ideas, leave a comment and let's follow up on them together.)


Rights Expression vs. Rights Enforcement: clarifying the Associated Press story

Ben Adida, August 1st, 2009

The Associated Press wants to track reuse of their content through a “news registry.” This registry “will employ a microformat for news developed by AP”:

The microformat will essentially encapsulate AP and member content in an informational “wrapper” that includes a digital permissions framework that lets publishers specify how their content is to be used online and which also supplies the critical information needed to track and monitor its usage.

While Creative Commons is very sympathetic to the difficulty of explaining technical concepts in a short press release, we're worried that the AP's explanation, and in particular their reference to the Creative Commons Rights Expression Language (ccREL), might well be confusing.

The reference to Creative Commons appears in the AP’s microformat, hNews, which introduces hRights, a supposed “generalization” of ccREL. hRights is presumably the “digital permissions framework” that the AP diagrams as a box/wrapper around news content in order to “track and monitor usage.” Unfortunately, as Ed Felten points out, this claim doesn’t add up. Microformats and other web-based structured data, including ccREL, cannot track, monitor, or generally enforce anything. They’re labels, i.e. Post-It notes attached to a document, not locked boxes blocking access to the content.

When Creative Commons launched in 2002, we were often asked “is Creative Commons a form of DRM?” Our answer: no, we help publishers express their rights, but we don’t dabble in enforcement, because enforcement technologies are unable to respect important, complex, and often subjective concepts like fair use. Thus, ccREL is about expression and notification of rights, not about enforcement.

And when you think about it, there's really no other realistic way. If the AP actually wants a "beacon" that reports usage information back to the mothership, then every endpoint must be programmed to perform this beacon functionality. Before it delivers content, every server must check that the client will promise to become a beacon. Which means the AP wants an architecture where every cell phone, computer, or other networked device is locked down centrally, able to run only software that is verified to comply with this policy. That's another reason why we don't dabble in enforcement: the costs of Digital Rights Enforcement (or its politically correct equivalent, Digital Rights Management) to publishers, to users, to our culture, and to our ability to innovate are astronomically high.

Then there’s the issue of “RSS syndication” compatibility. We simply don’t see how the AP’s proposed system would allow both widespread beacon enforcement and compatibility with existing formats like RSS. Compatibility means that current RSS tools remain usable. Obviously, these tools do not currently perform the AP’s rights enforcement, so how could they magically be made to start phoning home now?

That said, there is an interesting nugget in the AP's proposal, one which we encourage them to pursue: tagging content with rights, origin, and means of attribution is a good idea. When Creative Commons began the work that led to ccREL, there were no established or open standards for expressing this type of structured information on the web, so we had to lay down some new infrastructure. When we published ccREL, we made it easy for others to innovate on top of it: we included "independence and extensibility" as the first principle for expressing license information in a machine readable format. We based ccREL on RDF and RDFa to enable this standards-based extensibility.

The AP could, rather than reinvent a subset of ccREL using an incompatible and news-specific syntax, simply use ccREL and add their own news-specific fields. By doing this, they would immediately plug into the growing set of tools that parse and interpret rights expressed via ccREL. We would be happy to help, but we built ccREL in such a way that we don’t need to be involved if the AP would prefer to go it alone. And, of course, the AP can use ccREL with copyright licenses more restrictive than those we offer, if they prefer.
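
As a rough sketch of what that could look like, the snippet below expresses rights via ccREL and layers on news-specific fields. The ap: namespace URL and its property names are invented here purely to illustrate RDFa's extensibility; they are not a published AP vocabulary, and the license URL is a placeholder.

```html
<!-- Sketch only: ccREL rights expression plus invented news-specific
     extension properties. The ap: namespace, its properties, and the
     license URL are hypothetical placeholders, not a real AP vocabulary. -->
<div xmlns:cc="http://creativecommons.org/ns#"
     xmlns:dct="http://purl.org/dc/terms/"
     xmlns:ap="http://example.org/ap-news-terms#"
     about="http://example.org/stories/12345">
  <h1 property="dct:title">Example Headline</h1>
  <p>By <span property="cc:attributionName">Example Reporter</span></p>
  <a rel="license" href="http://example.org/ap-member-terms">
    AP member reuse terms</a>
  <span property="ap:registryId" content="story-12345"></span>
  <span property="ap:dateline" content="NEW YORK"></span>
</div>
```

Any ccREL-aware tool would still read the title, attribution, and license link; AP-specific tools could additionally read the ap: fields. That layering is exactly what the "independence and extensibility" principle was meant to allow.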


HOWTO Make Your CC-licensed Images Visible to Robots

Fred Benenson, July 13th, 2009

After last week’s exciting announcement that Google Image search is now capable of filtering results by usage rights, we realized there is a lot of interest in how creators can signal their work as being CC-licensed to both humans and robots.

Fortunately, CC has a solution for this that is not only a standard, but recommended by the World Wide Web Consortium.

It's called the Creative Commons Rights Expression Language and is part of the semantic web. Without getting too technical, ccREL uses a technology called RDFa to express licensing information to machines so that they can deduce the same facts about a work (such as its title, author, and most importantly, its license) that humans can. If you're interested in the future of the web and structured data, you'll want to check out our wiki pages on RDFa, ccREL, and our white paper submitted to the W3C. Google has a page explaining RDFa and Yahoo has a page explaining how RDFa is used by Yahoo Search.

The easiest way to signal to both humans and robots that your content is CC licensed is to head over to our license chooser and choose a license to put on your own site.

Our license chooser automatically generates the proper ccREL code, so it's easy! Don't forget to fill out the "Additional Information" section. You'll then get a snippet of XHTML to embed that contains ccREL. Place this near your work (preferably on the same page as the work, ideally a page unique to it) and you're all set. If you're running an entire content community, you can also dynamically generate this markup based on the particular user, the title of the work, and so on. Check out Thingiverse for an excellent example of this functionality.
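
For reference, the chooser's output looks along these lines; this is a sketch with placeholder title, author, and attribution URL, and the exact markup the chooser emits may differ slightly:

```html
<!-- Approximate license-chooser output for CC BY 3.0, with placeholder
     work title, author name, and attribution URL. -->
<a rel="license" href="http://creativecommons.org/licenses/by/3.0/">
  <img alt="Creative Commons License" style="border-width:0"
       src="http://i.creativecommons.org/l/by/3.0/88x31.png" /></a>
<br />
<span xmlns:dct="http://purl.org/dc/terms/"
      property="dct:title">Example Work</span> by
<a xmlns:cc="http://creativecommons.org/ns#"
   href="http://example.org/author" property="cc:attributionName"
   rel="cc:attributionURL">Example Author</a>
is licensed under a
<a rel="license" href="http://creativecommons.org/licenses/by/3.0/">
  Creative Commons Attribution 3.0 Unported License</a>.
```

The rel="license" link is the key signal robots look for, while the dct: and cc: properties supply the title and attribution details.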

Are you already using ccREL or RDFa on your website or platform? Let us know or add it to our Wiki page!


CC Technology Summit 3: Turin, Italy

Nathan Yergler, March 29th, 2009

We did two Technology Summits in 2008 — one in Mountain View, CA in June and one in Cambridge, MA in December. I’m pleased to announce that the third CC Technology Summit will take place June 26, 2009 in Turin, Italy at Politecnico di Torino. This is just prior to the Communia Conference 2009 on the global science and economics of knowledge-sharing institutions.

We’re currently looking for presentations around copyright registries, ccREL and provenance in semantic web applications for the day’s program. If you’re interested, see the full details and CFP in the wiki. Hope to see many of you there!


Expanding the Public Domain: Part Zero

Diane Peters, March 11th, 2009

Creative Commons has spent a lot of time over the past year or so strategizing, and worrying, about the current state of the public domain and its future. In particular, we’ve been thinking about ways to help cultivate a vibrant and rich pool of freely available resources accessible to anyone to use for any purpose, unconditionally.

Our copyright licenses empower creators to manage their copyright on terms they choose. But what about creators who aren’t concerned about those protections, or who later want to waive those rights altogether? Unfortunately, the law makes it virtually impossible to waive the copyright automatically bestowed on creators. The problem is compounded by the fact that copyright terms vary dramatically and are frequently extended. Additionally, new protections, like the creation of sui generis database rights in the EU, are layered atop traditional rights, making an already complex system of copyright all the more complicated. In combination, these challenges stand in the way of the vibrant public domain that CC and many others envision.

Today at the O’Reilly Emerging Technology conference, our CEO Joi Ito will formally introduce the first of two tools designed to address these challenges. CC0 (read “CC Zero”) is a universal waiver that may be used by anyone wishing to permanently surrender the copyright and database rights they may have in a work, thereby placing it as nearly as possible into the public domain. CC0 is not a license, but a legal tool that improves on the “dedication” function of our existing, U.S.-centric public domain dedication and certification. CC0 is universal in form and may be used throughout the world for any kind of content without adaptation to account for laws in different jurisdictions. And like our licenses, CC0 has the benefit of being expressed in three ways – legal code, a human readable deed, and machine-readable code that allows works distributed under CC0 to be easily found. Read our FAQs to learn more.
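
The machine-readable layer is ordinary ccREL-style RDFa. Here is a minimal sketch of marking a work with CC0, using placeholder names and URLs; the exact markup the CC0 deed generates may differ.

```html
<!-- Sketch: a CC0 public domain dedication expressed in RDFa.
     The creator, title, and work URL are placeholders. -->
<p xmlns:dct="http://purl.org/dc/terms/"
   about="http://example.org/dataset">
  To the extent possible under law,
  <span property="dct:creator">Example Lab</span>
  has waived all copyright and related or neighboring rights to
  <span property="dct:title">Example Dataset</span>.
  <a rel="license"
     href="http://creativecommons.org/publicdomain/zero/1.0/">CC0 1.0</a>
</p>
```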

CC0 is an outgrowth of six years of experience with our existing public domain tool, the maturation of ccREL (our recommendations for machine-readable work information), and the requirements of educators and scientists for the public domain. Science Commons’ work on the Open Access Data Protocol, to ensure interoperability of data and databases in particular, informed our development of CC0. It should come as no surprise that several of CC0’s early adopters are leading some of the most important projects within the scientific community.

The ProteomeCommons.org Tranche network is one such early adopter. “Our goal is to remove as many barriers to scientific data sharing as possible in order to promote new discoveries. The Creative Commons CC0 waiver was incorporated into our uploading options as the default in order to help achieve this goal. By giving a simple option to release data into the public domain, CC0 removes the complex barriers of licensing and restrictions. This lets researchers focus on what’s most important, their research and new discoveries,” said Philip Andrews, Professor at the University of Michigan.

Another early adopter of CC0 is the Personal Genome Project, a pioneer in the emerging field of personal genomics technology. The Personal Genome Project is announcing today the release of a large data set containing genomic sequences for ten individuals using CC0, with future planned releases also under CC0. “PersonalGenomes.org is committed to making our research data freely available to the public because we think that is the best way to promote discovery and advance science, and CC0 helps us to state that commitment in a clear and legally accurate way,” said Jason Bobe, Director of Community.

John Wilbanks, CC’s vice president for science, follows Joi Ito at Etech with a presentation addressing the role of CC0 in promoting open innovation.

Building CC0 into a universally robust tool has required the efforts and dedication of many over the course of more than a year. CC jurisdiction project leads in particular provided us with meaningful forums in which to openly discuss CC0's development. They also provided jurisdiction-specific research critical to our understanding of the public domain around the world. This support was invaluable to the crafting of a legally sound public domain tool for use everywhere. An overview of CC0's development and public comment process can be found on the CC wiki, together with links to our blog postings summarizing key policy and drafting decisions.

As for the second tool we refer to above, stay tuned. Funding permitting, we plan to roll out a beta public domain assertion tool this coming summer that will make it easy for people to tag and find content already in the public domain, increasing its effective size even as copyright term extensions keep works from naturally entering the public domain.

Note, one small improvement we’re introducing with CC0 is that its deed and legalcode are located at http://creativecommons.org/publicdomain/zero/1.0/. The forthcoming public domain assertion tool will also be rooted under this directory. Thanks to everyone who reminded us that the public domain is not a license, and public domain tools should not be under a “licenses” directory!

A word of thanks to our pro bono legal counsel at Wilson Sonsini Goodrich & Rosati and Latham & Watkins. Their legal review and analysis provided the heightened level of rigor that users of our licenses and legal tools have come to expect from Creative Commons.


New web metadata validator released

Asheesh Laroia, January 6th, 2009

(This was originally published on CC Labs.)

This past summer, Hugo Dworak worked with us (thanks to Google Summer of Code) on a new validator. This work was long overdue, and we are very pleased that Google could fund Hugo to work on it. Our previous validator had not been updated to reflect our new metadata standards, so we disabled it some time ago to avoid creating further confusion. The textbook on CC metadata is the "Creative Commons Rights Expression Language", or ccREL, which specifies the use of RDFa on the web. (If this sounds like keyword soup, rest assured that the License Engine generates HTML that you can copy and paste; that HTML is fully compliant with ccREL.) We hoped Hugo's work would let us offer the Creative Commons community a validator so that publishers can test their web pages and make sure they encode the information they intended.

Hugo’s work was a success; he announced in August 2008 a test version of the validator. He built on top of the work of others: the new validator uses the Pylons web framework, html5lib for HTML parsing and tokenizing, and RDFlib for working with RDF. He shared his source code under the recent free software license built for network services, AGPLv3.

So I am happy to announce that the test period is complete, and we are now running the new code at http://validator.creativecommons.org/. Our thanks go out to Hugo, and we look forward to the new validator gaining some use as well as hearing your feedback. If you want to contribute to the validator’s development or check it out for any reason, take a look at the documentation on the CC wiki.


RDFa now a W3C recommendation; message from Hal Abelson

Mike Linksvayer, October 16th, 2008

Yesterday RDFa, a technical standard Creative Commons has championed at the World Wide Web Consortium for five years, was made a W3C Recommendation — a standard for the web to build upon.

CC founding board member and MIT computer science professor Hal Abelson sends this message:

Dear Staff and Board,

I’m writing with some great news:

Today, the technical specification "RDFa in XHTML: Syntax and Processing" was formally accepted as a Web Consortium Technical Recommendation by W3C Director Tim Berners-Lee.

Those words might not mean much to any but the geekiest of us — but this is a big deal.

Creative Commons was an early adopter of Semantic Web standards. And yet, while the Semantic Web provided RDF as a standard for expressing metadata, it did not provide a standard for how that metadata should be integrated into ordinary Web pages.

The original concept of the Semantic Web did not encompass the notion that ordinary Web pages would be augmented with machine-readable metadata. Even today, that notion remains controversial. One considerable faction still holds that HTML should be purely a formatting language with no provision for any semantic information at all. Other factions, like the microformats community, advocate metadata standards that do not integrate well into RDF and general Semantic Web applications.

CC licensing was the first use of the Web to envision publishers augmenting their pages with small amounts of machine-readable markup: the CC licensing attributes. It was our desire to achieve this consistently with the Semantic Web that led to our involvement with the Web standards community, and the need to advocate for such a standard was why CC joined the Web Consortium in the first place.

RDFa is the standard that has emerged from this effort. RDFa is a general mechanism for expressing machine-readable attributes on Web pages in a way that integrates with HTML. The most obvious example for us is the Creative Commons Rights Expression Language (ccREL) — a machine-readable way to express CC licensing.

W3C’s adoption today of the RDFa recommendation solidifies the technical underpinning of ccREL and opens the door to the development and widespread support for CC-compliant tools on the Web.

There are many people who deserve credit for RDFa. Mike Linksvayer and Nathan Yergler certainly get kudos for their consistent support and development of the CC infrastructure to emphasize RDFa and ccREL.

But the lion’s share of the credit goes to Ben Adida, CC’s W3C representative, who led this effort creatively and tirelessly. Ben’s leadership in the technical design of RDFa and the negotiations and refinements to bring RDFa all the way through the complex Web standards process has been an effort of more than five years.

This work on RDFa not only has major benefit to CC, but it's a significant example of CC providing technical leadership in the Web community and a contribution that will have implications far beyond CC's own applications.

Ben deserves our sincerest thanks and congratulations.

== Hal

(Also check out Hal’s starring role in the new Jesse Dylan video about Creative Commons, A Shared Culture.)

Congratulations and thanks to Ben and everyone else who has worked so hard on this effort for so many years.

If you’re a web developer, check out RDFa and ccREL. A great place to start is Ben’s Introduction to ccREL talk from our first CC technology summit held in June (slides and video available at the link; also check out the CFP for our upcoming December tech summit at MIT). Ben also recommends a new post from the founder of Drupal on Drupal, the semantic web and search.

Otherwise (and even if you are a web developer), the best way to support this work is by supporting Creative Commons. Our annual fundraising campaign just kicked off yesterday, so now is an excellent time to give.

Thanks!


