The Mozilla Foundation is unabashedly committed to a free and open web. They see it as a vital part of a healthy digital ecosystem where creativity and innovation can thrive. We couldn’t agree more. And we couldn’t be prouder to have Mozilla’s generous and ongoing support. We were recently able to catch up with Mark Surman, the Foundation’s Executive Director, who talks about Mozilla and its myriad projects, and how his organization and ours are a lot like lego blocks for the open web.
Most people associate Mozilla with Firefox, but you do much more than just that. Can you give our readers some background on the different arms of Mozilla as an organization? What is your role there?
Mozilla’s overall goal is to promote innovation and opportunity on the web — and to guard the open nature of the internet.
Firefox is clearly the biggest part of this. But we’re constantly looking for new ways to make the internet better. Our growing focus on identity, mobile and web apps is a part of this. Also, we’re reaching out more broadly beyond software to invite people like filmmakers, scientists, journalists, teachers and so on to get involved.
Personally, I’m most active in this effort to reach out more broadly and to get many more people involved in our work. Much of this is happening through a program I helped start called Mozilla Drumbeat. As Executive Director of Mozilla Foundation, I also manage the overall umbrella legal structure for all of Mozilla’s activities.
What is the connection between Mozilla and CC? Do you use our tools in your various projects?
At the highest level, Mozilla and CC are both working for the same thing — a digital society based on creativity, innovation and freedom. And, of course, we use CC licenses for content and documents that we produce across all Mozilla projects.
Mozilla has given generously to Creative Commons – what was the motivation behind donating? What is it about CC that you find important?
I think of both organizations as giving people ‘lego blocks’ that they can use to make and shape the web. Mozilla’s lego blocks are technical, CC’s are legal. Both help people create and innovate, which goes back to the higher vision we share.
What do you see as CC’s role in the broader digital ecosystem? How does CC enable Mozilla to better innovate in that space?
We need an organization like CC to make sure that the content layer of the web is as open and free as the core tech upon which it’s all built. It’s at this content layer that most people ‘make the web’ — it’s where people feel the participatory and remixable nature of the web. Keeping things open and free at this level — and making them more so — is critical to the future of the open web.
Help ensure a bright future for the open web and donate to Creative Commons today.
Today a new German site launched, IGEL (“Initiative gegen ein Leistungsschutzrecht”; in English, “initiative against a related right”). The site, spearheaded by German lawyer Till Kreutzer, provides information on a possible proposal for a new “related right” for press publishers. Original content on the site is released under the Creative Commons Attribution license.
Additionally, Creative Commons has agreed to be listed as a supporter of IGEL. We almost never stake out a position beyond our core role of providing voluntary infrastructure to facilitate sharing. This sometimes leads to criticism of CC both from those who oppose copyright and see us as apologists, and from those who fear sharing and see anything less than complete control, no matter how voluntary, as undermining copyright.
We take this criticism from both extremes as an indication that we’re doing our job well — a job that isn’t even about copyright, let alone apologizing for or undermining copyright. CC’s job is to provide tools that help people who want to share — and society overall — get the most out of the sharing and collaboration made possible by communications technologies and human creativity. Copyright happens to be the legal framework that shapes how sharing and collaboration occur, so our tools operate in that framework to grant permissions in advance for sharing and collaboration.
This brings us to new related rights. Examples include sui generis database rights applicable only in Europe; proposals for special broadcast rights, which would give broadcasters a new set of exclusive rights merely for having broadcast material; and a potential proposal for a new press publisher right to control the use of non-copyrighted snippets of press material, as well as specific headline wordings. This potential press publisher right is what IGEL concerns.
Such new related rights, when they go into effect, make sharing and collaboration harder, for at least two reasons.
One, all communication requires some common expression. Things that fall outside of the scope of copyright (e.g., facts, abstract ideas) and copyright exceptions and limitations that facilitate quoting and critique give scope for communication, without every single sentence one utters being subject to potential lawsuit. New related and nearby rights can effectively limit the scope of what may be communicated freely, e.g., collections of facts in the case of database rights, and very brief descriptions of news items, in the case of press publisher rights — or even the facts of a news story, in the case of “hot news” restrictions recently mooted by publishers in the U.S.
New York City Gridlock by Roy Googin / CC BY-SA

Two, with a proliferation of rights, it is harder to know who has exclusive control over what, or whether multiple parties have exclusive control over different rights in a work. This phenomenon of too many property claims forms what is sometimes called an anticommons — overlapping exclusive claims can prevent anyone from using a work — the opposite (thus “anti”) of a commons, in which anyone may use a work under a clear, easily discernible set of rules.
The press publisher right as currently proposed for Germany is expressly intended to make linking to (and viewing of) openly accessible press content on the web subject to a mandatory fee whenever it happens in any kind of commercial context. Together with the vagueness of the term “press product” in this sense and the unclear boundaries of commercial contexts, the new right is apt to spread uncertainty as to when a link may freely be made, thus harming a core principle of sharing and of the internet. At the same time, creators using Creative Commons licenses might suddenly find themselves falling within the scope of “press publisher” in the meaning of the new right. This could lead to the paradoxical situation of original Creative Commons content unintentionally becoming paid content — that is, if the publisher right is drafted to be non-waivable.
This brings us to why Creative Commons considers new copyright-like rights harmful. Such rights are clear barriers to getting the most out of sharing and collaboration, and a threat to the open web, with no evidence of any countervailing benefits. New copyright-like rights make it a bit harder to share and collaborate with openly licensed materials, by constraining and confusing what can be openly licensed when multiple rights are involved. More significantly, they make it harder to share and collaborate even when copyright is not pertinent but the natural flow of using digital communication technologies is, e.g., sharing a link with a title.
In some ways increasing default restrictiveness makes the tools Creative Commons provides more valuable. Less default facilitation of sharing and collaboration means those who want to share must take careful steps to enable it — and Creative Commons has encapsulated the hard work in its tools. Furthermore, the more the default condition is lockdown, the more valuable works that aren’t fully locked down become. However, at Creative Commons we are not simply working to maximize use of our tools, which after all are just a means to facilitate sharing and collaboration.
Finally, one should note, however one feels about the reality of current copyright law, that new copyright-like rights do harm — either adding insult to injury, or making copyright less efficient and credible as it becomes increasingly easy to obtain protection for non-creative works, below the threshold of creativity that copyright requires for good reason. If you read German, we encourage you to visit the IGEL site and learn about the related rights proposals it addresses. We’ll also have more to say here, perhaps not about why new copyright-like rights are harmful, but about how Creative Commons tools operate in a world in which such rights exist — some readers will be aware that European sui generis database rights are particularly troublesome — for our tools must do their best to enable sharing and collaboration in the world we find ourselves in, and as that world changes. (This is a difficult job. Please make a donation to support our work!)
Thanks to John Hendrik Weitzmann, Legal Project Lead of Creative Commons Germany, for introducing IGEL and assistance with this post.
CERN Library releases its book catalog into the public domain via CC0, and other bibliographic data news
CERN, the European Organization for Nuclear Research that is home to the Large Hadron Collider and birthplace of the web, has released its book catalog into the public domain using the CC0 public domain dedication. This is not the first time that CERN has used CC tools to open its resources; earlier this year, CERN released the first results of the Large Hadron Collider experiments under CC licenses. In addition, CERN is a strong supporter of CC, having given corporate support at the “creator” level, and is currently featured as a CC Superhero in the campaign, where you can join them in the fight for openness and innovation!
Jens Vigen, the head of CERN Library, says in the press release,
“Books should only be catalogued once. Currently the public purse pays for having the same book catalogued over and over again. Librarians should act as they preach: data sets created through public funding should be made freely available to anyone interested. Open Access is natural for us, here at CERN we believe in openness and reuse… By getting academic libraries worldwide involved in this movement, it will lead to a natural atmosphere of sharing and reusing bibliographic data in a rich landscape of so-called mash-up services, where most of the actors who will be involved, both among the users and the providers, will not even be library users or librarians.”
In related news, the Cologne-based libraries have made the 5.4 million bibliographic records they released into the public domain earlier this year, also via CC0, available in various places. See the hbz wiki, lobid.org (and their files on CKAN), and OpenDATA at the Central Library of Sport Sciences of the German Sports University in Cologne. For more information, see the case study.
The German Wikipedia has also used CC0 to dedicate data into the public domain; specifically, their PND-BEACON files are available for download. Since Wikipedia links out to quite a number of external resources, and since a lot of articles link to the same external resources, PND-BEACON files are the German Wikipedia’s way of organizing the various data. “In short a BEACON file contains a 1-to-1 (or 1-to-n) mapping from identifiers to links. Each link consists of at least an URL with optionally a link title and additional information such as the number of resources that are available behind a link.” Learn more from the English description of the project.
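The quoted description above can be made concrete with a small sketch. The following Python reads a simplified BEACON-style file: meta lines start with “#”, and data lines map an identifier to optional extra fields (such as a resource count) separated by “|”. This is an illustration of the mapping idea only, not an implementation of the full BEACON specification; the sample identifiers and URL are invented.

```python
def parse_beacon(text):
    """Parse a simplified BEACON-style link dump into (meta, links)."""
    meta, links = {}, {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        if line.startswith("#"):
            # Meta line, e.g. "#TARGET: http://example.org/{ID}"
            key, _, value = line[1:].partition(":")
            meta[key.strip()] = value.strip()
        else:
            # Data line: identifier, then optional fields separated by "|"
            ident, *rest = line.split("|")
            links[ident] = rest
    return meta, links

sample = """#FORMAT: BEACON
#TARGET: http://example.org/{ID}

118540238|27
118540475|3
"""

meta, links = parse_beacon(sample)
```

Here `meta["TARGET"]` would supply the URL template into which each identifier is substituted, and each entry in `links` carries whatever per-link annotation the file provides.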
Last time on the CC blog I shared my thoughts about evaluating CC’s contribution to Collaboration and Sharing. I made the point there that this is an impact which is distinctly challenging to estimate. My wholehearted belief that that analysis was the pinnacle of prospective hardships can explain why, when I first came to engage with CC’s contribution to the field of art, I was feeling lighthearted. After all, most of the characteristics which made sharing and collaboration such a tough domain to gauge are not properties of art. So, I can begin by reporting that it was definitely light-minded to be lighthearted; the contribution to art is a completely independent Pandora’s box.
I hope at least this last point will be rendered clearer by reading this post, but my aim here is actually to describe my initial attempts to tackle this distinct quandary. As with my former posts, I am unabashedly exposing my very modest attempts in the hope of eliciting a response and engaging you all in this important project.
Down to Business: CC’s Contribution to Art
Art encompasses activities that are traditionally divided into distinct genres. However, online creation has challenged the boundaries of those genres as it has provided an environment which made it easy for creators to put their creative efforts into works that cannot be conveniently categorized under one genre or even two, but rather reflect a hodgepodge of genres. Sometimes these acts of creativity coalesce into new genres, and sometimes they remain unique instances. The measurement of the contribution of CC needs to take account of all of these cases, and cannot be content with estimating the contribution to each traditional genre.
New genres as novel types of artistic endeavors have an independent value of their own which ought to be noted and measured separately. There are several reasons for this:
1. The evaluation of the novelty of these new works is altogether different from that of works in traditional genres.
2. These works usually involve different types of creators than traditional works do (e.g., on the lay–professional scale) and therefore represent a different type of contribution to art.
3. Passive consumers and future contributors necessarily interact differently with new types of works than with traditional ones, which means that their perspective requires a distinct analysis.
4. The contribution of these enterprises to other value fields of CC (e.g., to Collaboration and Sharing) is different and should be distinguished and measured with this difference in mind.
5. From a pragmatic perspective, the estimation of new artistic enterprises obviously requires new metrics.
6. Lastly, and most importantly, CC is very plausibly contributing in a distinct way to new enterprises as opposed to existing ones. For example, because these new works are created in much more of a copyright limbo, CC’s ability to contribute both to their effective production and consumption and, more broadly, to the way the enterprise is framed within the IP realm is unique.
Now having said all that, the contribution of art to welfare is in itself very hard to estimate, even before delving into the effective measurement of sub-genres. As a result, not many economists have tried to come up with analytical frameworks that would gauge art and its contribution. In fact, there persists a form of prima facie acceptance that art is dually valuable, for the outputs it produces and as a human enterprise. The trouble with evaluation has to do with both: not all of art’s outputs are market outputs, and even when they are, they usually embody non-monetary value in addition to their monetary one, and the abstract contribution of “art as human enterprise” is an even tougher nut to crack.
However, although CC likewise takes it as given that art is valuable, for the purpose of its value analysis it must subscribe to some theoretical framework that analyzes the contribution of art. Absent such a framework, it will be impossible to assess any form of incremental contribution. As for the possible models that could be applied, some writers have analyzed the quality of artistic products as hinging strongly on the question of how innovative they are. In other words, a valuable or good artwork is one which is avant-garde in terms of technique or artistic expression. (Check out David W. Galenson’s Analyzing Artistic Innovation.) From a slightly different perspective, some ascribe an artwork’s contribution to the extent to which it promotes innovation in other fields. The basis of the latter is that art is unique in cultivating creativity, originality and inventiveness (for example, Xavier Castañer and Lorenzo Campos’s The Determinants of Artistic Innovation: Bringing in the Role of Organizations, 26 Journal of Cultural Economics 29-52 (2002)).
If we are ready to accept this last paradigm, we can extend to CC’s contribution to art the full breadth of theories which analyze the capacity of innovation to enhance welfare, or the value of innovation in art.
Yet, putting aside the multiple benefits of accepting these paradigms, there are several difficulties which have to do with the imperfect correspondence of these frameworks to art. To demonstrate: not even the underlying Schumpeterian concept of creative destruction applies to art, as art tends to incorporate all prior expression within it as it evolves. Therefore, any analysis which discusses the contribution of art in innovation terms would require substantial theoretical accommodation.
The innovation paradigms of the second category (those considering art as in itself a contributor to innovation) are less concerned with the level or nature of the artistic outputs themselves, and mostly emphasize the very existence of novel outputs as inherently beneficial. In other words, they would still need to be complemented by other theories recognizing the direct importance of the artistic enterprise.
This is why, in addition to developing novelty measures and to understanding how CC contributes institutionally to innovation, the project continues under the assumption that, all else being equal, having more art is better, having more art contributors is better, having more consumption of art is better, having better art is better, and extended quality in creativity and consumption is better. This assumption plays out alongside the presumption that more art variability is better, a parameter directly related to innovation in art. Therefore, CC sets out to measure its impact on those values so as to provide the necessary fodder for the analysis of its contribution. Examples follow.
Quantity includes all the measures that are based on counting, among them the following:
1. Tracking the number of CC artworks being produced. Obviously, our work would not end once we come up with this number, because an analysis would have to ensue which may be extremely complicated. It is not necessarily true that, all other things being equal, more artworks are invariably a welfare improvement; for example, the added clamor that more art might produce may mean less welfare (note that this pertains only to the detriments of overcrowding and not to other claims that touch upon quality, which need to be accounted for too).
Well, the only thing I can say about that is that moments like these make me grateful we are taking this one step at a time.
2. The number of CC artists. Again, as with the number of works, this datum does not reveal the entire story. An example of a claim which would influence the analysis is that artistic production is optimal when it is the exclusive realm of a thin stratum of artists (the benefits of the alternatives notwithstanding). Since CC operates under the contrary conviction that more engagement in artistic pursuits is better, and thus tries to increase it without discretion, it needs to show that the outcome it promotes is superior in terms of the contribution to welfare.
The latter claim suggests that this parameter should be broken down by artist profile. To the extent possible, it would be beneficial to distinguish between added lay and expert CC artists, between heavy and light contributors, and between CC artists who create only CC works and those who also use legal frameworks other than CC.
3. The number of new types of CC artworks that are being generated.
4. The use of assistive applications for CC works: (1) art editing applications (technique); (2) art distribution applications (distribution); (3) search applications (for CC art); (4) curation and exhibition activity (CC works).
Obviously, for the purposes of an analysis that considers CC’s dynamic contribution, it is also necessary to gather data on temporal trends.
Internal & external quality parameters
1. The progression of the technique employed in CC works, per art genre and per function, such as the creation of the new contribution and the fusing together of existing artistic resources into the new creation.
2. The progression of the inherent quality of the artistic expression of CC works. This is a very complex attribute to measure, because it requires the perspective of time, or at least the ability to estimate the overall cultural weight of the work, which in turn requires multi-term adjustment.
1. Value as a resource/use availability: the progress in the outward impression created by the artwork, broken down by (1) lay artist impression and (2) expert artist impression. This quality measure has to do with the ability of others to extract benefits from the artwork and can be estimated using the proxy of use: the extent to which the work is used as a resource for other works.
2. Consumption readiness/ease of access. This parameter is meant to measure the accessibility of the work for passive consumption. This again requires analysis that would tie the data back to the measure of quality: it is possible, for example, that degraded or lower-quality art is in general more accessible than art of better quality.
Quality measurement, extra challenges
Don’t tell me you thought that was it? Up until this point I’ve been calmly suggesting quality measures, without offering a clue as to how to create the actual quality scale for each. So how does one begin measuring quality in art? Well, thankfully we are not the first to approach this question. Cultural economists have dealt with this issue many times, particularly in relation to the question of the proper government subsidy for non-market goods such as cultural products. (See, e.g., Eric Thompson et al.’s Valuing the Arts: A Contingent Valuation Approach, 26 Journal of Cultural Economics, and Douglas S. Noonan’s Contingent Valuation and Cultural Resources: A Meta-Analytic Review of the Literature, 27 Journal of Cultural Economics 159-176 (2003).)
What these scholars proposed was to go from household to household and use a method called contingent valuation to assess the extent to which people in general value a particular cultural service. The contingent valuation method (CVM) employs survey methods to gather stated-preference information, which it then translates into a monetary value called WTP: the willingness to pay.
So these scholars begin with price, an arguably satisfactory proxy for the quality of an art product when there is a market for it. Yet when exploring CC’s predominant fields of activity we see almost no outputs with a dollar value. Therefore, although CC can safely rely on CVM as an established technique in cultural economics, it remains debatable whether CVM can capture the full value generated by cultural goods, and within them, by art. For one, art is classed as an experiential or addictive good, for which demand is cumulative and hence dynamically unstable, whereas in a WTP survey people are asked to evaluate it even if they do not consume it at all, as though it were a commodity like a street lamp. A solution for that might be to turn to expert appraisal; indeed, when we reach the stage of detailing these metrics, we expect to rely on parameters used by experts to perform appraisals for different forms of art.
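Mechanically, the core of an open-ended CVM exercise is simple: collect stated WTP amounts from a sample of households, average them, and scale to the relevant population. A minimal sketch of that arithmetic, with entirely invented figures and an assumed population size:

```python
# Stated willingness-to-pay answers from a (hypothetical) household survey,
# in dollars per household per year. Zeros are legitimate answers.
stated_wtp = [0.0, 5.0, 10.0, 2.5, 0.0, 20.0, 7.5]

# Mean WTP across respondents is the basic CVM point estimate.
mean_wtp = sum(stated_wtp) / len(stated_wtp)

# Aggregate value: mean WTP scaled to an assumed relevant population.
population = 100_000  # hypothetical number of households
aggregate_value = mean_wtp * population
```

This sketch deliberately omits everything that makes real CVM studies hard (sampling design, protest zeros, hypothetical bias, the choice between open-ended and dichotomous-choice elicitation), which is precisely where the critiques discussed above apply.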
Two, there is a very strong claim that art has intrinsic value, as a public good, that is unappraisable by the individual by way of potential consumption estimation. (David Throsby thus differentiates between economic and cultural value, see in David Throsby’s Determining the Value of Cultural Goods: How Much (or How Little) Does Contingent Valuation Tell Us?, 27 Journal of Cultural Economics 275-285 (2003)).
This issue cannot be solved using traditional economic tools, which may mean these should be abandoned. Instead, we ought to identify measurable characteristics of cultural goods which give rise to their cultural value. For example, “their aesthetic properties, their spiritual significance, their role as purveyors of symbolic meaning, their historic importance, their significance in influencing artistic trends, their authenticity, their integrity, their uniqueness,” and so on. This is partly why in order to correctly quantify the contribution to welfare in all its facets, we must content ourselves, at least to some extent with simplified measures that pertain to quantity of production, to engagement and to the richness of the field as we are beginning to do here. This, in addition to those parts of the artistic enterprise which can be economically evaluated using such methods as CVM.
CC Art Variability Measures, Internal, External
1. (direct measures) Novelty level, conceptual and experimental separately measured, of CC works. (1) for each new genre (2) within every existing genre.
2. (indirect measures) The number of new relevant applications which are used for CC works: (1) art editing applications (Technique) (2) art distribution applications (Distribution) (3) search applications (for CC art) (4) Curation activity, exhibition (CC work).
Control Measures (confounders)
In order to measure the pure impact of CC, it is necessary to be able to clear out influences unrelated to CC that may muddy our measures. The following metrics are directed to this purpose:
1. Changes in the production of non-CC art. This parameter will be used to gauge changes in artistic activity that may be reflected in CC art too but have nothing to do with any activity led by CC. While collecting this data it is important to distinguish between non-CC art licensed under other open frameworks and non-CC art relying on proprietary frameworks, because part of the growth of comparable frameworks might be attributable to CC’s activity under the third pillar of contribution, which might further complicate the analysis.
2. Extension of consumption of non-CC art. The aim here is to isolate CC’s impact with respect to consumption.
3. Expansion of art markets.
4. Extension in the number of artists in general (measuring unrelated entrance to this specific labor market).
5. Evolution in general technical platforms for art creation, distribution and consumption.
6. Government grants for art (non-CC; separation is easy here, since the government will usually define the license to be used).
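The control idea behind these measures can be sketched in a few lines: net out background growth in art production by comparing the growth rate of CC art to that of comparable non-CC art. This is a crude difference-in-differences-style adjustment, and every number below is invented for illustration:

```python
def growth(before, after):
    """Fractional growth between two period counts."""
    return (after - before) / before

# Hypothetical counts of artworks in two consecutive periods.
cc_works = (10_000, 15_000)        # CC artworks: period 1 -> period 2
non_cc_works = (100_000, 110_000)  # comparable non-CC artworks

raw_cc_growth = growth(*cc_works)          # growth observed for CC art
background_growth = growth(*non_cc_works)  # growth unrelated to CC activity

# The background rate proxies what would have happened anyway, so the
# adjusted figure is a rough estimate of CC-attributable growth.
adjusted_cc_growth = raw_cc_growth - background_growth
```

As the list above notes, the non-CC baseline itself must be chosen carefully: if comparable open frameworks grow partly because of CC’s own activity, subtracting their growth would understate CC’s impact.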
That’s all folks.
Digital Garage, long time friend and supporter of CC, has just donated $100,000 to our annual campaign! According to Joi Ito, co-founder and board member of Digital Garage and CEO of Creative Commons: “Digital Garage considers Creative Commons to be a key piece of infrastructure for our global society. As a cutting-edge business that invests in internet companies and incubators that help facilitate this global society, it’s imperative that Creative Commons remains as strong as possible.”
Join Digital Garage in making sure Creative Commons stays strong by donating today.
CC Talks With: Jeff Mao and Bob McIntire from the Maine Department of Education: Open Education and Policy
Maine has been a leader in adopting educational technology in support of its students. In 2002, through the Maine Learning Technology Initiative (MLTI), the state began providing laptops to all students in grades 7-8 in a one-to-one laptop program. In 2009, Maine expanded the project to high school students. The one-to-one laptops paved the way for open education initiatives like Vital Signs, empowering students to conduct their own field research in collaboration with local scientists, and make that research available online. Recently, Maine has been engaged in some interesting and innovative projects around OER as a result of federal grant funds. For this installment of our series on open education and policy, we spoke with Jeff Mao and Bob McIntire from the Maine Department of Education. Jeff is Learning Technology Policy Director at MLTI, and Bob works for the Department’s Adult & Community Education team.
One part of the $787 billion American Recovery and Reinvestment Act (ARRA) was dedicated to creating technology-rich classrooms. This funding was distributed through the existing No Child Left Behind Title IID program. With their one-to-one student laptop program, Maine was already ahead of the game with regard to technology in the classroom, so they decided to focus the ARRA funding on OER projects. “We wanted to create something that had a longer shelf life,” said Bob. Maine’s grants were broken into two initiatives: research to identify and annotate high quality OERs, and the creation of professional development models using OER.
Curate metadata, don’t stockpile resources
Maine is a “non-adoption” state, which means that teachers at the local level determine the educational resources they wish to use in their classrooms. Most other states adopt educational materials at the state level. For instance, for a class like 9th grade world history, states will approve multiple textbook titles from multiple publishers, and schools will be able to choose from among the state approved list. Since it’s up to local teachers to determine which educational resources are good for their teaching, part of the Maine OER grants is devoted to researching the rough process that teachers step through when evaluating content. MLTI has been working on a type of educational registry. This registry will be a website that can house the metadata teachers collect around the resources they wish to use. This website–still in development–will help teachers find, catalog, categorize, and add other informative data to quality resources. Perhaps as important, it will allow teachers to share with others what they did with the content, whether the material worked (or bombed), and other sorts of useful descriptive information. Right now the team is using the social bookmarking service delicious to add metadata to high quality OERs that they find online. This project is coordinated by the Maine Support Network, a professional development and technical assistance provider, and all the resources are linked through one delicious site at http://www.delicious.com/syntiromsn.
Weaning teachers off of printed textbooks
Jeff talked about a way to restructure the traditional textbook adoption cycle that would result in an end product of 100% OER. Currently, the Maine textbook adoption process goes something like this: After six years of using the same textbook, teachers realize their turn is coming up to place an order for a new textbook. In the springtime, they call publishers and ask for demo copies of new books to potentially be used the following fall. Teachers peruse the books sent to them, and settle for the one that is the least flawed. Teachers use the book for five and a half years, after which the process repeats itself. Jeff hopes this inefficient process can be changed. He suggests that rather than waiting until the final year to seek out new, pre-packaged educational materials, why not spend the interim years seeking out individual learning objects to replace every piece of their static textbooks?
Such a process could improve some of the content that teachers don’t like (and don’t use) in their traditional textbooks. And through this iterative, piecemeal process, they can share their illustrative discoveries (and dead ends too) with other teachers. The Department itself could pitch in by providing the tools, software, and other infrastructure to help teachers keep track of which resources have been reviewed, replaced, or modified. Jeff thinks that enabling teachers to operate in a constant revision mode is a better way to structure the acquisition of teaching and learning materials than reviewing textbooks only once every five or six years.
As most open educational resources are digital, Jeff said there’s an increasing need to be able to deal with strictly digital materials. Digital materials can be leveraged better for several reasons: Maine students and teachers already have the laptops to access and manipulate the content (which can’t be done with physical books); digital materials can help integrate other best-in-class technology and interactive pedagogy into lessons; and digital materials help set up the conditions to support embedded assessment mechanisms.
Share your process as OER; everything is miscellaneous
Maine hopes its work on OER can be used by other states and communities, considering the research and resources will be produced using federal dollars. The team will publish its process and offer the resources it creates online as OER themselves. Jeff said, “the more we can demonstrate this process is effective, the better it speaks to the efficacy of OER.” And publishing information about resources and processes should be a natural thing to share. “If a teacher expends six hours finding a great OER for teaching students polynomials,” said Jeff, “it just needs to be done once.” At the same time, with the diversity of resources available online–and with clear rights statements through the use of Creative Commons–variations on the sets of resources can be nearly infinite. Teachers can have their own educational “iMixes,” just as iTunes users create playlists of their favorite music.
The future classroom
As Maine continues its work on OER research and professional development, Jeff and Bob offer a vision of a classroom where students gather in small groups, talking, exploring, building projects, and investigating ideas together. There is no lecturing, and open educational resources integrate seamlessly with classroom instruction. As most kids are naturally inclined to look for information online, teachers can guide students in using high-quality, adaptable OER. Jeff also suggests investing time and effort in more direct support for students, building or extending the tools being built for teachers, and proactively including students in the resource evaluation and review process.
The success of Maine’s and others’ OER projects is not assured. Dwindling budgets will remain an ongoing challenge, and while there’s been some recognition of OER in policy initiatives such as the National Education Technology Plan, Jeff and Bob question whether current budget woes will derail national and state efforts for change. Teachers are increasingly overburdened, and the development and support of a hands-on process like Maine’s requires ongoing teacher participation, feedback, and practice.
In the long run, Jeff thinks that OER will challenge the educational content industry in much the same way that the music industry was challenged by–and eventually succumbed to–Apple’s “buy-whatever-you-want” model of music distribution, where users could break apart the album format and simply purchase the songs they wish. Jeff predicts that the textbook industry will be forced to break apart its offerings too, and sell individual chapters or lessons, where before it offered only packaged content to a captive education audience. And Jeff says the benefits apply to publishers too: “If they sell you Chapter 1 and it’s really good,” he said, “maybe you’ll want to buy the whole book.”
The first website CC board member Caterina Fake ever made was a fan page for Lolita author Vladimir Nabokov, her favorite writer. “When I first went online around 1993-1994, every site was just something people had put up–pictures of their cat, or a marble collection, or a Bob Dylan discography. It was just strangers making cool stuff and sharing it online. The Internet was premised on this culture of generosity.”
But as the web grew, so did the rules about copyright and ownership of content. And somewhere along the way, this culture of generosity got lost in lockdown. That’s why, within six months of co-founding Flickr in 2004, Fake made sure that users could upload their photos to her rapidly expanding photo-sharing site with CC licenses. “Flickr is very much a platform for this culture of generosity to take place,” she says. “Creators should be able to choose to make their work available. If they have no interest in the ridiculous restrictions copyright is imposing on people, that should be okay.”
Today, Flickr has over 167 million CC-licensed photos, making it one of the largest repositories of freely shareable images in the world.
In the summer of 2009, Fake started Hunch, a website that builds a “taste graph” of the Internet. Users respond to questions like “Do you like your sandwich cut vertically or horizontally?” and “When flying, do you prefer a window seat or an aisle seat?” The data collected goes toward finding unexpected correlations among web users and providing recommendations on magazines, TV shows, and books. It’s all part of what Fake is most passionate about, what she calls participatory media.
Fake has been a supporter of sharing creative content from very early on. Before she was even thinking about founding successful web companies, Fake was a painter, sculptor, and writer. “I’m a big proponent of people having the ability to express themselves and be part of a culture that supports creative work,” she says. “I believe everyone who wants to make a living off their work should be more than welcome to do so. And those who do not should also have the ability not to be constrained by copyright.”
Help build a culture of generosity on the web by donating to Creative Commons today.
It’s been almost three years since the CC community last met in person, adjacent to a larger event in Sapporo, Japan. For such a rapidly growing and evolving network of experts and volunteers, the time is more than ripe to meet again in 2011. As we start earnestly planning for the event, we are asking ourselves where we should meet and whether it’s possible to find a co-host.
To this end, CC has posted a request for proposals inviting interested organizations to apply to co-host the next global CC meeting, ideally co-located with another relevant event. The meeting will be held in the third quarter of 2011, bringing together the CC Affiliate Network, CC board and staff, key stakeholders and many others to engage strategically on issues related to the future of our shared commons and to further build CC’s vital community. Moreover, we’ll be identifying opportunities to collaborate on mutual projects, as well as celebrating our successes and exchanging experiences as we conclude our eighth year together.
Importantly, the meeting will include sessions that are open to the public as well as CC affiliate sessions. This will provide a meaningful way to connect local initiatives with our global network of legal experts and community leaders. The CC Global Meeting 2011 is a major focus this coming year, and we look to partner with a strong, well-connected host that can help CC fundraise and organize the event.
The ideal location must be easily reached from a major international airport, readily accessible by public transportation, equipped with appropriate conference venues and affordable lodging, and offer easy and inexpensive travel visas. Proposals should address fundraising to cover costs of the event and travel for principal participants.
We’re accepting proposals to co-host and co-organize the meeting through a lightweight proposal process, open until January 10, 2011. If you’re interested in submitting a proposal, our wiki page provides further details. Contact 2011global [at] creativecommons.org for more information.
Looking forward to seeing you in 2011!
Greg Kidd and Karen Gifford by Elizabeth Sabo / CC BY
We’re thrilled to announce that we have successfully met the $3,000 matching gift challenge from 3taps, a new startup that makes sifting through classified ads a whole lot easier. We are grateful to 3taps for their support and thankful to everyone who got in on the challenge and doubled the value of their donation to CC. We are in the final weeks of our fundraising campaign, so please continue to pitch in and support the work of CC!
Why 3taps supports CC:
“3taps indexes factual data about items offered for exchange, like price, quantity and item description. Facts like these are important public information that let people find the best deal on the item they want. There has been a lot of confusion about the status of factual data on the Internet, and confusion in this area inhibits innovation. Creative Commons’ newly-released Public Domain Mark is an important tool for bringing clarity to this area. It couldn’t have come at a better time for those interested in collaboration in the sphere of data.” – Karen Gifford, co-founder. More on 3taps.
I’m delighted to introduce Andrew Rens, one of our exceptional CC Superheroes, who will tell you in his own words why he supports Creative Commons and why you should too. Rens, the founding legal lead of Creative Commons South Africa – a volunteer position he held from 2003 to 2009 – possesses particularly adept superpowers when it comes to facing tough issues around intellectual property and education in Africa. Here is his story. Join Rens and become a CC superhero – donate today.
“Since its inception, Creative Commons has been instrumental in enabling so much diverse creativity–from music to design, from science to education, from business to philanthropy–that I won’t attempt to refer to it all. Instead I’ll reflect on my personal experience of supporting CC, and why I think that you should seriously consider joining me in supporting Creative Commons.
From the day I first heard about Creative Commons I believed that it would be immensely helpful to two things which I am passionate about: Africa and education. Shortly thereafter I became the first legal lead for the Creative Commons South Africa project. I worked as legal lead, a volunteer position, from 2003 until 2009. What motivates someone to keep working as a volunteer for six years? What motivated me was the immense privilege of contributing to the work of others, of playing a part, however small, in some of the most inspiring initiatives I’ve ever seen.
One of those is Free High School Science Texts, which offers curriculum-compliant, peer-produced, CC-licensed school textbooks in math, physics, chemistry and biology. Another great project is Siyavula, a platform that enables teachers to co-create lesson materials. Then there is Full Marks, another teacher-friendly site that enables teachers to co-create math and science quizzes. Astonishingly, these three projects were all begun by one very smart and determined guy: Mark Horner. Yet another great project is Yoza, a self-publishing platform that enables mobile access to novels and short stories, and so encourages literacy in a generation of Africans who have no ready access and whose only computers are mobile phones.
These are all good examples of the creativity of the open educational resources (OER) movement. The OER movement draws its inspiration from the Cape Town Open Education Declaration which speaks of “developing a vast pool of educational resources on the Internet, open and free for all to use.” Enabling sharing eliminates one barrier to education: highly priced learning materials. It also begins something else, described in the Cape Town Declaration as “planting the seeds of a new pedagogy where educators and learners create, shape and evolve knowledge together, deepening their skills and understanding as they go.”
One of the first seeds to sprout is Peer to Peer University (P2PU), a volunteer-driven project to create a peer-to-peer learning community. P2PU bills itself as the “social wrapper around open educational resources.” Peer learning may well be the key innovation that helps resolve the crisis that tertiary education is experiencing worldwide.
Each new development is only possible because of the development before it; peer learning is only possible with open educational resources; open educational resources are only possible with open licences such as the Creative Commons licences. Each layer relies on the continuing viability of the layer which it builds on. That is one important reason that I support the ongoing work of Creative Commons, because the fundamentals of easily understood, easily used, open copyright licences need to be maintained.
Another reason I support the ongoing work of Creative Commons is the urgent need for work on patents and databases to enable people to research collaboratively and share their results. Yet another reason is because Creative Commons is committed to expanding the network of Creative Commons projects in Africa, supporting Africans not just to port Creative Commons licences to their jurisdictions but also to provide trusted local expertise to their educational communities.
I’ve had the satisfaction of seeing that the time sacrificed as a volunteer to port the Creative Commons licences has been more than repaid; the South African CC licences have been used vastly more times than any other technical legal document I’ve drafted. This is typical of how Creative Commons has worked; for every license ported there have been thousands if not millions of works using that license. The outpouring of human expression and ingenuity enabled by Creative Commons has been a huge return on every hour of volunteer time and every dollar spent on the staff who support the volunteers and keep the website working. These investments of time and money are small only relative to the creativity they’ve enabled. Every dollar donated, every hour spent could have been used to another end, and yet without them the return would not have been as great.
Although I have been privileged to participate in these exciting developments, I don’t believe that my experience is exceptional. Everyone who contributes to Creative Commons has the opportunity to be involved with a plethora of fascinating individuals and world-changing projects. Please join me in supporting Creative Commons today.”