In 2004, designer and animator Justin Cone created “Building on the Past” as part of our Moving Images Contest and won. Justin originally made the video, which demonstrates Creative Commons’ mission in two minutes, available under CC BY-NC. At the encouragement of Wikieducator’s Wayne Macintosh, Justin decided to re-release “Building on the Past” under the most open CC license, CC Attribution (CC BY), and made a short video explaining why (also under CC BY). Both videos are featured in Creative Commons unplugged, part of Wikieducator’s Open content licensing 4 educators workshop (a work in progress).
In the video, Justin talks about why CC is so important to him:
“Creative Commons is important to me for two reasons: The first reason is that it just makes life easier. I don’t have to worry about lawsuits or trying to secure permissions from people who might be impossible to get in touch with. It just makes creation easier and encourages the exchange of ideas; it encourages discussion and education. The second reason is a little more symbolic. By putting the CC license on my work, it basically says I care enough to share. I feel like I’m taking part in a community just by licensing my work with CC.”
He goes on to explain why he changed the license of his film:
“Originally I licensed my “Building on the Past” video with an Attribution-Noncommercial license. And I think the noncommercial part was there because I was just generally suspicious about corporate interest or something. It wasn’t very well thought out, but I think I was worried that somebody would take the video, re-contextualize it in a way that wasn’t appropriate for the video. Since then, I’ve kind of changed the way that I think about things. The video has been shown around the world; it has been translated and subtitled in different languages and it has taken on a life of its own. And I think that it deserves to be a little freer. There’s no reason to keep it from being used by a commercial interest because I think it has some educational value. I think it has a message that can be debated, discussed, disagreed with or agreed with, and by removing the noncommercial part of my license, it’s easier for people to now do all those things.”
At the end, he offers tips for other creators, saying we should ask ourselves two questions: “Is this project bigger than me?” and “When you finish a project, is this really the end of the project, or is this the beginning?” If your answer is affirmative in both cases, Justin notes that CC “makes it so much easier for your project to expand beyond you”:
“I like to think of projects as stories. So if you choose a traditional copyright, then the story of your project has just a limited number of possible endings. And sometimes those endings are fine and they work for the story. But a lot of times it’s more interesting to choose a different path for your story. And if you go with a Creative Commons license you’re basically saying, I don’t want this story to end. I want it to go on and on. I want it to have different endings, different twists and turns rather, and I want other people to tell this story. I think that’s a better story, it’s a more exciting story; it’s epic.”
Wish CC a happy birthday by showing your support today!
Creative Commons files comments in U.S. Department of Commerce’s Inquiry on Copyright Policy, Creativity, and Innovation in the Internet Economy
Creative Commons has filed comments in the U.S. Department of Commerce’s Inquiry on Copyright Policy, Creativity, and Innovation in the Internet Economy. The Department received nearly 900 submissions over the comment period, which ended December 10. The Department’s interest in this topic is summarized below:
The Department of Commerce’s Internet Policy Task Force is conducting a comprehensive review of the relationship between the availability and protection of online copyrighted works and innovation in the Internet economy. The Department, the United States Patent and Trademark Office (USPTO), and the National Telecommunications and Information Administration (NTIA) seek public comment from all interested stakeholders, including rights holders, Internet service providers, and consumers on the challenges of protecting copyrighted works online and the relationship between copyright law and innovation in the Internet economy. After analyzing the comments submitted in response to this Notice, the Internet Policy Task Force intends to issue a report that will contribute to the Administration’s domestic policy and international engagement in the area of online copyright protection and innovation.
All of the comments are posted to the NTIA’s Internet Policy Task Force website. The comments of Creative Commons and a few other organizations are highlighted below:
Creative Commons urged the Department to ensure that the Internet remains open for innovation by adopting and promoting policies that enable and preserve the ability for users to lawfully share their creativity:
Creativity and innovation on the Internet are enabled by open technologies, open networks, and open content. Support for open licensing and public domain legal tools can help maintain the robust information flows that facilitate innovation and growth of the Internet economy.
Open content licensing is playing an increasing role in digital cultural heritage and the growth of the digital economy. Websites like Flickr, Picasa, Vimeo, Blip.tv, SoundCloud, Jamendo, Wikipedia and Wikimedia Commons share millions of CC licensed free cultural works.
Educational institutions, organizations, and teachers and learners use CC tools to overcome the legal and technical restrictions that prevent educational resources from being accessible, adaptable, interoperable, and discoverable.
Scientists and research institutions seeking to overcome the legal and technical barriers to sharing and building on data and knowledge are using CC tools, maximizing potential on investments and accelerating scientific discovery and innovation.
In considering the relationship between copyright and innovation, it is critical to remember that copyright is fundamentally a balance between the rights of the creator and the rights of the public at large. It is unavoidable that copyright creates restrictions on free expression and the free flow of ideas. However, it can also provide a powerful incentive to create. Effective copyright policy finds an equilibrium between the creator’s incentive to create and the public’s right to access, share and build on existing works. To that end, the Department should focus on finding ways to encourage more people to create and contribute. In addition to benefits, the costs of enforcement – both financial and in increased barriers to innovation – must be considered.
Whether for pleasure, education, or commerce, the web’s ability to help fuel innovation has derived from its tapestry of contributions, which are the product of people, communities, and organizations around the world creating, modifying, sharing, and hosting content. In our view, it is imperative that these quintessential qualities of the Internet be preserved without compromising the rights of content producers, whether big or small, and those that host and distribute such content.
[...] the federal government can most effectively promote creativity and innovation in the Internet Economy by encouraging the use of open licensing models and by requiring access to the results of federally funded research.
One of the primary sources of innovation in the U.S. economy is scholarly communications: articles, monographs, and databases written by professors, graduate students, and other researchers in all fields of human endeavor. The ideas expressed in these writings stimulate new research, advance the scientific and technology enterprise, and encourage commercial development of marketable products and services.
[...] the Department of Commerce, and the federal government as a whole, should concentrate their efforts on encouraging the creation and maintenance of robust, open platforms that support commercial and noncommercial ventures. The federal government should not expend limited resources on protecting particular business models in the face of technological change.
We just received the exciting news that Tucows, a company that started offering free downloads of shareware and freeware on the Internet in 1993, will take part in a matching challenge of up to $10,000. This means that whatever you donate right now will automatically be doubled. We need your help to meet their challenge and turn $10,000 into $20,000 for CC.
Here’s why Tucows supports CC:
“We support Creative Commons because all of our business philosophy is based on the open Internet. For the Internet to really flourish and remain an open, healthy, and great platform for innovation, we need to adapt old sets of rules to new paradigms. Creative Commons is one of the first and best examples of that.” -Elliot Noss, President and CEO
As we approach the end of the year, I invite you to think about how creativity and openness have affected your life. How much would you give to see a future filled with sharing? If you’ve supported CC already this year, would you be willing to give again knowing that your gift will be automatically doubled?
Join Tucows and donate today.
Mike Carroll was a practicing lawyer in Washington DC when the idea of openly licensed copyright landed on his desk as a pro bono project. “It was going to be a central repository of content, where we had the copyright and would openly license it to others,” he says. After a group brainstorm led by Larry Lessig at the Harvard Berkman Center in May 2001, the lawyers decided to scrap the central repository idea and create licenses that others could use freely. Shortly after that, Carroll was invited to join the board of the organization that would soon become known as Creative Commons.
CC quickly evolved from an idea that artists were skeptical about—why would any creative person be willing to give up control?—to something that proliferated across state borders and disciplines.
“The Internet is global, and we knew we would have to grapple with the complexities of international copyright and that we would add science to the mix,” he says. “But the pressure to do that came much faster than we expected. In short order, we had to organize a pretty sophisticated operation on a limited budget to engage with the international network of support for the Creative Commons idea.”
Still, Carroll says, there are challenges ahead. “We are offering tools as a solution to a problem that not everyone knows they have.” For one thing, awareness about what open content is, and why making it legally open matters, is still a bit hazy for some. The good news is that some web sites have seamlessly integrated CC into their functions, like the Flickr search engine. The challenge is to engage with the continuing evolution of the web to make sure adopting CC is easy and natural.
“We care deeply about not getting locked into things that can’t evolve as the web evolves. If it were easier to find reliable CC content, that makes using CC licenses more attractive. We’re trying to keep the web open in an interoperable space—but it’s not just the technology. It’s the values embedded in our technical choices.”
Those values—openness, flexibility, sharing—are a part of Carroll’s life both professionally and personally. Carroll is now the Director of the Program on Information Justice and Intellectual Property at American University. In conjunction with Creative Commons, he has worked for years with the library community to promote open access to the scholarly and scientific journal literature on web. He also serves on the National Research Council’s Board on Research Data and Information to address issues such as data sharing among scientists, and he’s a Fellow at the Center for Democracy and Technology.
He’s also a hobby musician who sometimes gets together with other copyright lawyers to jam. “I’m one of those musicians [that loves] to play to the crowd,” he says, in the same spirit of an organization dedicated to helping other creators share.
Support the organization Carroll has been a part of from the very beginning by donating to Creative Commons today.
The Mozilla Foundation is unabashedly committed to a free and open web. They see it as a vital part of a healthy digital ecosystem where creativity and innovation can thrive. We couldn’t agree more. And we couldn’t be prouder to have Mozilla’s generous and ongoing support. We were recently able to catch up with Mark Surman, the Foundation’s Executive Director, who talks about Mozilla and its myriad projects, and how his organization and ours are a lot like lego blocks for the open web.
Most people associate Mozilla with Firefox, but you do much more than just that – can you give our readers some background on the different arms of Mozilla as an organization? What is your role there?
Mozilla’s overall goal is to promote innovation and opportunity on the web — and to guard the open nature of the internet.
Firefox is clearly the biggest part of this. But we’re constantly looking for new ways to make the internet better. Our growing focus on identity, mobile and web apps is a part of this. Also, we’re reaching out more broadly beyond software to invite people like filmmakers, scientists, journalists, teachers and so on to get involved.
Personally, I’m most active in this effort to reach out more broadly and to get many more people involved in our work. Much of this is happening through a program I helped start called Mozilla Drumbeat. As Executive Director of Mozilla Foundation, I also manage the overall umbrella legal structure for all of Mozilla’s activities.
What is the connection between Mozilla and CC? Do you use our tools in your various projects?
At the highest level, Mozilla and CC are both working for the same thing — a digital society based on creativity, innovation and freedom. And, of course, we use CC licenses for content and documents that we produce across all Mozilla projects.
Mozilla has given generously to Creative Commons – what was the motivation behind donating? What is it about CC that you find important?
I think of both organizations as giving people ‘lego blocks’ that they can use to make and shape the web. Mozilla’s lego blocks are technical, CC’s are legal. Both help people create and innovate, which goes back to the higher vision we share.
What do you see as CC’s role in the broader digital ecosystem? How does CC enable Mozilla to better innovate in that space?
We need an organization like CC to make sure that the content layer of the web is as open and free as the core tech upon which it’s all built. It’s at this content layer that most people ‘make the web’ — it’s where people feel the participatory and remixable nature of the web. Keeping things open and free at this level — and making them more so — is critical to the future of the open web.
Help ensure a bright future for the open web and donate to Creative Commons today.
Today a new German site launched, IGEL (“Initiative gegen ein Leistungsschutzrecht”; in English, “initiative against a related right”). The site, spearheaded by German lawyer Till Kreutzer, provides information on a possible proposal for a new “related right” for press publishers. Original content on the site is released under the Creative Commons Attribution license.
Additionally, Creative Commons has agreed to be listed as a supporter of IGEL. We almost never stake out a position beyond our core role of providing voluntary infrastructure to facilitate sharing. This sometimes leads to criticism of CC from both those who oppose copyright and see us as apologists, and from those who fear sharing, and see anything less than complete control, no matter how voluntary, as undermining copyright.
We take this criticism from both extremes as an indication that we’re doing our job well — a job that isn’t even about copyright, let alone apologizing for or undermining copyright. CC’s job is to provide tools that help people who want to share, and society overall, get the most possible out of the sharing and collaboration made possible by communications technologies and human creativity. Copyright happens to be the legal framework that shapes how sharing and collaboration occur, so our tools operate in that framework to grant permissions in advance for sharing and collaboration.
This brings us to new related rights. Examples include sui generis database rights, applicable only in Europe; proposals for special broadcast rights, which would give broadcasters a new set of exclusive rights merely for having broadcast material; and a potential proposal for a new press publisher right to control the use of non-copyrighted snippets of press material, as well as specific headline wordings. This potential press publisher right is what IGEL concerns.
Such new related rights, when they go into effect, make sharing and collaboration harder, for at least two reasons.
One, all communication requires some common expression. Things that fall outside of the scope of copyright (e.g., facts, abstract ideas) and copyright exceptions and limitations that facilitate quoting and critique give scope for communication, without every single sentence one utters being subject to potential lawsuit. New related and nearby rights can effectively limit the scope of what may be communicated freely, e.g., collections of facts in the case of database rights, and very brief descriptions of news items, in the case of press publisher rights — or even the facts of a news story, in the case of “hot news” restrictions recently mooted by publishers in the U.S.
[Image: New York City Gridlock by Roy Googin / CC BY-SA]

Two, with a proliferation of rights, it is harder to know who has exclusive control over what, or whether multiple parties have exclusive control over different rights in a work. This phenomenon of too many property claims forms what is sometimes called an anticommons — overlapping exclusive claims can prevent anyone from using a work — the opposite (thus “anti”) of a commons, in which anyone may use a work under a clear, easily discernible set of rules.
The press publisher’s right, as now proposed for Germany, is expressly intended to make linking to (and viewing of) openly accessible press content on the web cost a mandatory fee whenever it happens in any kind of commercial context. Together with the vagueness of the term “press product” in this sense and the unclear boundaries of commercial contexts, the new right is apt to spread uncertainty as to when a link may freely be made, thus harming a core principle of sharing and of the internet. At the same time, creators using Creative Commons licenses might suddenly find themselves falling within the scope of being a press publisher in the meaning of the new right. This could lead to the paradoxical situation of original Creative Commons content unintentionally becoming paid content — that is, if the publisher’s right is drafted to be non-waivable.
This brings us to why Creative Commons considers new copyright-like rights harmful. Such rights are clear barriers to getting the most out of sharing and collaboration, and a threat to the open web, with no evidence of any countervailing benefits. New copyright-like rights make it a bit harder to share and collaborate with openly licensed materials, by constraining and confusing what can be openly licensed when multiple rights are involved. More significantly, they make it harder to share and collaborate even when copyright is not pertinent but the natural flow of using digital communication technologies is, e.g., sharing a link with a title.
In some ways, increasing default restrictiveness makes the tools Creative Commons provides more valuable. Less default facilitation of sharing and collaboration means those who want to share must take careful steps to enable it — and Creative Commons has encapsulated that hard work in its tools. Furthermore, the more the default condition is lockdown, the more valuable works that aren’t fully locked down become. However, at Creative Commons we are not simply working to maximize use of our tools, which after all are just a means to facilitate sharing and collaboration.
Finally, one should note, however one feels about the reality of current copyright law, that new copyright-like rights do harm — either adding insult to injury, or making copyright less efficient and credible as it becomes increasingly easy to obtain protection for non-creative works, a threshold copyright requires for good reason. If you read German, we encourage you to visit the IGEL site and learn about the related rights proposals it addresses. We’ll also have more to say here, perhaps not about why new copyright-like rights are harmful, but about how Creative Commons tools operate in a world in which such rights exist — some readers will be aware that European sui generis database rights are particularly troublesome — for our tools do have to do their best to enable sharing and collaboration in the world we find ourselves in, and as that world changes. (This is a difficult job. Please make a donation to support our work!)
Thanks to John Hendrik Weitzmann, Legal Project Lead of Creative Commons Germany, for introducing IGEL and assistance with this post.
CERN Library releases its book catalog into the public domain via CC0, and other bibliographic data news
CERN, the European Organization for Nuclear Research that is home to the Large Hadron Collider and birthplace of the web, has released its book catalog into the public domain using the CC0 public domain dedication. This is not the first time that CERN has used CC tools to open its resources; earlier this year, CERN released the first results of the Large Hadron Collider experiments under CC licenses. In addition, CERN is a strong supporter of CC, having given corporate support at the “creator” level, and is currently featured as a CC Superhero in the campaign, where you can join them in the fight for openness and innovation!
Jens Vigen, the head of CERN Library, says in the press release,
“Books should only be catalogued once. Currently the public purse pays for having the same book catalogued over and over again. Librarians should act as they preach: data sets created through public funding should be made freely available to anyone interested. Open Access is natural for us, here at CERN we believe in openness and reuse… By getting academic libraries worldwide involved in this movement, it will lead to a natural atmosphere of sharing and reusing bibliographic data in a rich landscape of so-called mash-up services, where most of the actors who will be involved, both among the users and the providers, will not even be library users or librarians.”
In related news, the Cologne-based libraries have made the 5.4 million bibliographic records they released into the public domain earlier this year, also via CC0, available in various places. See the hbz wiki, lobid.org (and their files on CKAN), and OpenDATA at the Central Library of Sport Sciences of the German Sports University in Cologne. For more information, see the case study.
The German Wikipedia has also used CC0 to dedicate data into the public domain; specifically, their PND-BEACON files are available for download. Since Wikipedia links out to quite a number of external resources, and since a lot of articles link to the same external resources, PND-BEACON files are the German Wikipedia’s way of organizing the various data. “In short a BEACON file contains a 1-to-1 (or 1-to-n) mapping from identifiers to links. Each link consists of at least an URL with optionally a link title and additional information such as the number of resources that are available behind a link.” Learn more from the English description of the project.
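To make the identifier-to-link mapping concrete, here is a minimal sketch in Python of how a BEACON-style file could be read. It follows the description quoted above (comment-style header lines, then pipe-delimited data lines mapping an identifier to a link with optional extra fields); the identifiers and URLs in the sample are invented for illustration, not taken from the actual PND-BEACON files.

```python
# Minimal sketch of reading a BEACON-style mapping file.
# Header lines begin with "#"; each data line maps an identifier
# to a link, with optional extra fields separated by "|".
# The identifiers and URLs below are hypothetical.

sample = """\
#FORMAT: BEACON
#DESCRIPTION: example mapping (hypothetical)
118540238|12|http://example.org/articles/person-118540238
118571249|3|http://example.org/articles/person-118571249
"""

def parse_beacon(text):
    headers, mapping = {}, {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):
            # Header line: "#KEY: value"
            key, _, value = line[1:].partition(":")
            headers[key.strip()] = value.strip()
        else:
            # Data line: identifier first, target URL last,
            # optional fields (e.g. a resource count) in between.
            fields = line.split("|")
            identifier, url = fields[0], fields[-1]
            mapping[identifier] = url
    return headers, mapping

headers, mapping = parse_beacon(sample)
print(headers["FORMAT"])
print(mapping["118540238"])
```

Real BEACON files add more header fields (such as a `#PREFIX` used to expand identifiers into full URLs), so this sketch covers only the basic mapping idea.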
Last time on the CC blog I shared my thoughts about evaluating CC’s contribution to collaboration and sharing. In that post I made the point that this impact is distinctly challenging to estimate. My wholehearted belief that that analysis was the pinnacle of the hardships ahead explains why, when I first came to engage with CC’s contribution to the field of art, I felt lighthearted. After all, most of the characteristics that made sharing and collaboration such a tough domain to gauge are not properties of art. So, I can begin by reporting that it was definitely light-minded to be lighthearted; the contribution to art is a completely independent Pandora’s box.
I hope at least this last point will be rendered clearer by reading this post, but my aim here is actually to describe my initial attempts to tackle this distinct quandary. As with my former posts, by unabashedly exposing my very modest attempts we, here at CC, hope to elicit a response and to engage you all in this important project.
Down to Business: CC’s Contribution to Art
Art encompasses activities that are traditionally divided into distinct genres. However, online creation has challenged the boundaries of those genres as it has provided an environment which made it easy for creators to put their creative efforts into works that cannot be conveniently categorized under one genre or even two, but rather reflect a hodgepodge of genres. Sometimes these acts of creativity coalesce into new genres, and sometimes they remain unique instances. The measurement of the contribution of CC needs to take account of all of these cases, and cannot be content with estimating the contribution to each traditional genre.
New genres as novel types of artistic endeavors have an independent value of their own which ought to be noted and measured separately. There are several reasons for this:
1. The evaluation of the novelty of these new works is altogether different from that of works in traditional genres.
2. These works usually involve different types of creators than traditional works do (e.g., on the lay-professional scale) and therefore represent a different type of contribution to art.
3. Passive consumers and future contributors will necessarily interact differently with new types of works than with traditional ones, which means that their perspective requires a distinct analysis.
4. The contribution of these enterprises to CC’s other value fields (e.g., to collaboration and sharing) is different and should be distinguished and measured with this difference in mind.
5. From a pragmatic perspective, the estimation of new artistic enterprises obviously requires new metrics.
6. Lastly, and most importantly, CC very plausibly contributes in a distinct way to new enterprises as opposed to existing ones. For example, because these new works are created in much more of a copyright limbo, CC’s ability to contribute specifically to their effective production and consumption, as well as more broadly to the way the enterprise is framed within the IP realm, is unique.
Now, having said all that, the contribution of art to welfare is in itself very hard to estimate, even before delving into the effective measurement of sub-genres. As a result, not many economists have tried to come up with analytical frameworks that would gauge art and its contribution. In fact, there persists a form of prima facie acceptance that art is dually valuable: for the outputs it produces, and as a human enterprise. The trouble with evaluation has to do with both: not all of art’s outputs are market outputs, and even when they are, they usually embody non-monetary value in addition to their monetary value; and the abstract contribution of “art as human enterprise” is an even tougher nut to crack.
However, although CC likewise takes it as given that art is valuable, for the purpose of its value analysis it must subscribe to some theoretical framework that analyzes the contribution of art. Absent such a framework, it will be impossible to assess any form of incremental contribution. As for the models that could potentially be applied, some writers have analyzed the quality of artistic products as hinging strongly on how innovative they are. In other words, a valuable or good artwork is one that is avant-garde in terms of technique or artistic expression (see David W. Galenson’s Analyzing Artistic Innovation). From a slightly different perspective, some ascribe an artwork’s contribution to the extent to which it promotes innovation in other fields, on the basis that art is unique in cultivating creativity, originality and inventiveness (for example, Xavier Castañer and Lorenzo Campos’s The Determinants of Artistic Innovation: Bringing in the Role of Organizations, 26 Journal of Cultural Economics 29-52 (2002)).
If we are ready to accept this last paradigm, we can extend to CC’s contribution to art the full breadth of theories that analyze the capacity of innovation to enhance welfare, or the value of innovation in art.
Yet, putting aside the multiple benefits of accepting these paradigms, there are several difficulties that have to do with their imperfect correspondence to art. To demonstrate: not even the underlying Schumpeterian concept of creative destruction applies to art, as art tends to incorporate all prior expression within it as it evolves. Therefore, any analysis that discusses the contribution of art in innovation terms would require substantial theoretical accommodation.
The innovation paradigms of the second category (those that consider art as in itself a contributor to innovation) mind less the level or nature of the artistic outputs themselves, and mostly emphasize the very existence of novel outputs as inherently beneficial. In other words, they would still need to be complemented with other theories recognizing the direct importance of the artistic enterprise.
This is why, in addition to developing novelty measures and to understanding how CC contributes institutionally to innovation, the project continues under the assumption that, all else being equal: having more art is better; having more art contributors is better; having more consumption of art is better; having better art is better; and extended quality in creation and consumption is better. This assumption plays out alongside the presumption that more variability in art is better, a parameter directly related to innovation in art. CC therefore sets out to measure its impact on those values so as to provide the necessary fodder for the analysis of its contribution. Examples follow.
Quantity includes all the measures that are based on counting, among them the following:
1. Tracking the number of CC artworks being produced. Obviously, our work would not end once we came up with this number, because an analysis would have to ensue, which may be extremely complicated. This is because it does not follow that, all other things being equal, more artworks invariably improve welfare; for example, the greater clamor that more art might produce may mean less welfare. (Note that this pertains only to the detriments of overcrowding and not to other claims that touch upon quality, which needs to be accounted for too.)
Well, the only thing I can say about that is that it is these moments which make me grateful for taking this one step at a time.
2. The number of CC artists. Again, as with the number of works, this datum does not reveal the entire story. An example of a claim that would be influential in the analysis is that artistic production is optimal when it is the exclusive realm of a thin stratum of artists (the benefits of the alternatives). Since CC operates under the contrary conviction that more engagement in artistic pursuits is better, and thus tries to increase it without discretion, it needs to show that the outcome it promotes is superior in terms of its contribution to welfare.
The latter claim suggests that this parameter should be broken down by artist profile. To the extent possible, it would be beneficial to distinguish between added lay and expert CC artists, between heavy and light contributors, and between CC artists who create only CC works and those who also use legal frameworks other than CC.
3. The number of new types of CC artworks that are being generated.
4. The use of assistive applications for CC works: (1) art editing applications (technique); (2) art distribution applications (distribution); (3) search applications for CC art; (4) curation and exhibition activity for CC works.
Obviously, to allow an analysis that considers CC's dynamic contribution, it is also necessary to gather data on temporal trends.
Internal & external quality parameters
1. The progression of the technique employed in CC works, per art genre and per function, such as creating the new contribution and fusing together existing artistic resources for a new creation.
2. The progression of the inherent quality of the artistic expression of CC works. This is a very complex attribute to measure, because it requires the perspective of time, or at least the ability to estimate the overall cultural weight of the work, which in turn requires adjustment across multiple time frames.
1. Value as a resource / availability for use: the progression of the outward impression created by the artwork, broken down by (1) lay artists' impression and (2) expert artists' impression. This quality measure concerns the ability of others to extract benefits from the artwork, and can be estimated using use as a proxy: the extent to which the work serves as a resource for other works.
2. Consumption readiness / ease of access. This parameter measures the accessibility of the work for passive consumption. This again requires analysis tying the data back to the measure of quality: it is possible, for example, that lower-quality art is in general more accessible than art of better quality.
Quality measurement, extra challenges
Don’t tell me you thought that was it? Up until this point I’ve been calmly suggesting quality measures without offering a clue as to how to create the actual quality scale for each. So how does one begin measuring quality in art? Well, thankfully we are not the first to approach this question. Cultural economists have dealt with it, particularly in relation to the proper government subsidy for non-market goods, which cultural products often are. (See, e.g., Eric Thompson et al., Valuing the Arts: A Contingent Valuation Approach, 26 Journal of Cultural Economics, and Douglas S. Noonan, Contingent Valuation and Cultural Resources: A Meta-Analytic Review of the Literature, 27 Journal of Cultural Economics 159-176 (2003).)
What these scholars proposed was to go from household to household and use a method called contingent valuation to assess the extent to which people in general value a particular cultural service. The contingent valuation method (CVM) employs survey techniques to gather stated-preference information, and derives from it a monetary value known as WTP, the willingness to pay.
So these scholars begin with price as an arguably satisfactory proxy for the quality of an art product when a market for it exists. Yet when exploring CC’s predominant fields of activity, we see almost no outputs with a dollar value. Therefore, although CC can safely rely on CVM as an established technique in cultural economics, it remains debatable whether CVM can capture the full value generated by cultural goods, and within them, by art. For one, art is classed as an experiential or addictive good, for which demand is cumulative and hence dynamically unstable, whereas in WTP surveys people are asked to evaluate it even if they do not consume it at all, as though it were a commodity like a street lamp. A solution might be to turn to expert appraisal, and indeed, when we reach the stage of detailing these metrics, we expect to rely on the parameters experts use to appraise different forms of art.
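To make the CVM mechanics concrete, here is a minimal sketch of the core aggregation step: stated WTP answers from a survey sample are averaged and scaled up to the population to estimate a total monetary value. The function name, responses, and population size are all illustrative assumptions, not data from any actual CVM study.

```python
# Hypothetical sketch of the core CVM aggregation step: stated
# willingness-to-pay (WTP) answers are averaged over the sample and
# scaled to the population. All figures here are illustrative.

def estimate_total_wtp(stated_wtp, population_size):
    """Return (mean WTP per household, aggregate WTP estimate)."""
    if not stated_wtp:
        raise ValueError("need at least one survey response")
    mean_wtp = sum(stated_wtp) / len(stated_wtp)
    return mean_wtp, mean_wtp * population_size

# Example: eight surveyed households state what they would pay per year
# to keep a free cultural resource available (zeros are valid answers).
responses = [0.0, 5.0, 10.0, 2.5, 0.0, 20.0, 7.5, 5.0]
mean_wtp, total = estimate_total_wtp(responses, population_size=100_000)
print(mean_wtp)  # 6.25
print(total)     # 625000.0
```

Real CVM studies add considerable machinery on top of this (bid elicitation formats, corrections for protest zeros and hypothetical bias), which is precisely where the critiques discussed below come in.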
Two, there is a very strong claim that art has intrinsic value, as a public good, that the individual cannot appraise by estimating potential consumption. (David Throsby thus differentiates between economic and cultural value; see David Throsby, Determining the Value of Cultural Goods: How Much (or How Little) Does Contingent Valuation Tell Us?, 27 Journal of Cultural Economics 275-285 (2003).)
This issue cannot be solved using traditional economic tools, which may mean these should be set aside here. Instead, we ought to identify measurable characteristics of cultural goods that give rise to their cultural value: for example, “their aesthetic properties, their spiritual significance, their role as purveyors of symbolic meaning, their historic importance, their significance in influencing artistic trends, their authenticity, their integrity, their uniqueness,” and so on. This is partly why, in order to correctly quantify the contribution to welfare in all its facets, we must content ourselves, at least to some extent, with simplified measures pertaining to the quantity of production, to engagement, and to the richness of the field, as we are beginning to do here. This comes in addition to those parts of the artistic enterprise that can be economically evaluated using methods such as CVM.
CC Art Variability Measures, Internal, External
1. (Direct measures) The novelty level of CC works, with conceptual and experimental novelty measured separately: (1) for each new genre; (2) within every existing genre.
2. (Indirect measures) The number of new relevant applications used for CC works: (1) art editing applications (technique); (2) art distribution applications (distribution); (3) search applications for CC art; (4) curation and exhibition activity for CC works.
Control Measures (confounders)
In order to measure the pure impact of CC, it is necessary to clear out influences unrelated to CC that may muddy our measures. The following metrics are directed at this purpose:
1. Changes in the production of non-CC art. This parameter will be used to gauge changes in artistic activity that may be reflected in CC art too but have nothing to do with any activity led by CC. While collecting this data it is important to distinguish between non-CC art licensed under an open framework and non-CC art relying on proprietary frameworks, because part of the growth of comparable frameworks might be attributable to CC’s activity under the third pillar of contribution, which could further complicate the analysis.
2. Extension of the consumption of non-CC art. The aim here is to isolate CC’s impact with respect to consumption.
3. Expansion of art markets.
4. Extension in the number of artists in general (measuring entrance to this labor market that is unrelated to CC).
5. Evolution in general technical platforms for art creation, distribution, and consumption.
6. Government grants for art (non-CC; separation is easy here, since the government will usually define the license to be used).
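The basic idea behind these control measures can be sketched in a few lines: use the growth of non-CC art production as a baseline series and subtract it from the observed growth in CC works to strip out a general market trend. This is an illustrative simplification, not an official CC methodology, and every number below is made up.

```python
# Illustrative sketch (not an official CC methodology): treating the
# growth of non-CC art as a control series and netting it out of the
# observed growth in CC works. A real analysis would control for many
# more confounders (platforms, grants, market expansion, etc.).

def growth_rate(before, after):
    """Period-over-period growth rate."""
    return (after - before) / before

def adjusted_cc_growth(cc_before, cc_after, noncc_before, noncc_after):
    """CC growth net of the baseline trend seen in non-CC art."""
    return growth_rate(cc_before, cc_after) - growth_rate(noncc_before, noncc_after)

# Example: CC works grew 50% while non-CC works grew 10% over the same
# period, leaving roughly 40 percentage points of growth that the
# non-CC baseline does not explain.
net = adjusted_cc_growth(1000, 1500, 20000, 22000)
print(round(net, 2))  # 0.4
```

This difference-style adjustment only removes trends shared by both series; confounders that affect CC art alone would still require the separate metrics listed above.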
That’s all folks.
Digital Garage, long-time friend and supporter of CC, has just donated $100,000 to our annual campaign! According to Joi Ito, Digital Garage Co-Founder and Board Member, and CC CEO: “Digital Garage considers Creative Commons to be a key piece of infrastructure for our global society. As a cutting-edge business that invests in internet companies and incubators that help facilitate this global society, it’s imperative that Creative Commons remains as strong as possible.”
Join Digital Garage in making sure Creative Commons stays strong by donating today.
CC Talks With: Jeff Mao and Bob McIntire from the Maine Department of Education: Open Education and Policy
Maine has been a leader in adopting educational technology in support of its students. In 2002, through the Maine Learning Technology Initiative (MLTI), the state began providing laptops to all students in grades 7-8 in a one-to-one laptop program. In 2009, Maine expanded the project to high school students. The one-to-one laptops paved the way for open education initiatives like Vital Signs, empowering students to conduct their own field research in collaboration with local scientists, and make that research available online. Recently, Maine has been engaged in some interesting and innovative projects around OER as a result of federal grant funds. For this installment of our series on open education and policy, we spoke with Jeff Mao and Bob McIntire from the Maine Department of Education. Jeff is Learning Technology Policy Director at MLTI, and Bob works for the Department’s Adult & Community Education team.
One part of the $787 billion American Recovery and Reinvestment Act (ARRA) was dedicated to creating technology-rich classrooms. This funding was distributed through the existing No Child Left Behind Title IID program. With its one-to-one student laptop program, Maine was already ahead of the game with regard to technology in the classroom, so it decided to focus the ARRA funding on OER projects. “We wanted to create something that had a longer shelf life,” said Bob. Maine’s grants were broken into two initiatives: research to identify and annotate high-quality OER, and the creation of professional development models using OER.
Curate metadata, don’t stockpile resources
Maine is a “non-adoption” state, which means that teachers at the local level determine the educational resources they wish to use in their classrooms. Most other states adopt educational materials at the state level. For instance, for a class like 9th grade world history, states will approve multiple textbook titles from multiple publishers, and schools will be able to choose from among the state approved list. Since it’s up to local teachers to determine which educational resources are good for their teaching, part of the Maine OER grants is devoted to researching the rough process that teachers step through when evaluating content. MLTI has been working on a type of educational registry. This registry will be a website that can house the metadata teachers collect around the resources they wish to use. This website–still in development–will help teachers to be able to find, catalog, categorize, and add other informative data to quality resources. Perhaps as important, it will allow teachers to share with others what they did with the content, whether the material worked (or bombed), and other sorts of useful descriptive information. Right now the team is using the social bookmarking service delicious to add metadata to high quality OERs that they find online. This project is coordinated by the Maine Support Network, a professional development and technical assistance provider, and all the resources are linked through one delicious site at http://www.delicious.com/syntiromsn.
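The registry described above is essentially a store of teacher-curated metadata about resources, rather than the resources themselves. As a rough illustration of the kind of record it might hold, here is a hypothetical sketch; the field names and schema are my own assumptions, not MLTI's actual design.

```python
# Hypothetical sketch of a registry entry: metadata about an OER plus
# teachers' notes on how it worked in practice. Field names are
# illustrative assumptions, not the actual MLTI schema.
from dataclasses import dataclass, field

@dataclass
class OERRecord:
    url: str
    title: str
    grade_levels: list
    subjects: list = field(default_factory=list)
    tags: list = field(default_factory=list)
    teacher_notes: list = field(default_factory=list)  # what worked, what bombed

    def add_note(self, teacher, note):
        """Attach one teacher's experience report to this resource."""
        self.teacher_notes.append({"teacher": teacher, "note": note})

record = OERRecord(
    url="http://example.org/polynomials-lesson",  # hypothetical resource
    title="Intro to Polynomials",
    grade_levels=["9"],
    subjects=["Algebra"],
    tags=["interactive", "CC BY"],
)
record.add_note("J. Smith", "Worked well for mixed-ability groups.")
print(len(record.teacher_notes))  # 1
```

The key design point is that the resource stays wherever it lives online; only the URL, descriptive metadata, and accumulated classroom experience are curated centrally, which is what makes sharing across non-adoption districts cheap.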
Weaning teachers off of printed textbooks
Jeff talked about a way to restructure the traditional textbook adoption cycle that would result in an end product of 100% OER. Currently, the Maine textbook adoption process goes something like this: after six years of using the same textbook, teachers realize their turn is coming up to place an order for a new one. In the springtime, they call publishers and ask for demo copies of new books to potentially use the following fall. Teachers peruse the books sent to them and settle for the one that is the least flawed. They use the book for five and a half years, after which the process repeats itself. Jeff hopes this inefficient process can be changed. He suggests that rather than waiting until the final year to seek out new, pre-packaged educational materials, teachers could spend the interim years seeking out individual learning objects to replace every piece of their static textbooks.
Such a process could work to improve some of the content that teachers don’t like (and don’t use) in their traditional textbooks. And, through this iterative, piecemeal process, they can share their illustrative discoveries (and dead ends too) with other teachers. The Department itself could pitch in providing the tools, software, and other infrastructure to help teachers keep track of which resources have been reviewed, replaced, or modified. Jeff thinks that enabling teachers to operate in a constant revision mode is a better way to structure the acquisition of teaching and learning materials, rather than reviewing textbooks only once every five or six years.
As most open educational resources are digital, Jeff said there’s an increasing need to be able to deal with strictly digital materials. Digital materials can be leveraged better: Maine students and teachers already have the laptops to access and manipulate the content (which can’t be done with physical books); digital materials can help integrate other best-of-breed technology and interactive pedagogy into lessons; and digital materials help set up the conditions to support embedded assessment mechanisms.
Share your process as OER; everything is miscellaneous
Maine hopes its work on OER can be used by other states and communities, considering the research and resources will be produced using federal dollars. The team will publish its process and offer the resources it creates online as OER. Jeff said, “the more we can demonstrate this process is effective, the better it speaks to the efficacy of OER.” Publishing information about resources and processes should come naturally. “If a teacher expends six hours finding a great OER for teaching students polynomials,” said Jeff, “it just needs to be done once.” At the same time, with the diversity of resources available online, and with clear rights statements through the use of Creative Commons, variations on the sets of resources can be nearly infinite. Teachers can have their own educational “iMixes,” just as iTunes users create playlists of their favorite music.
The future classroom
As Maine continues its work on OER research and professional development, Jeff and Bob offer a vision of a classroom where students gather in small groups, talking, exploring and building projects and investigating ideas together. There is no lecturing, and open educational resources integrate with classroom instruction seamlessly. As most kids are naturally inclined to try to find information online, teachers can guide students in using high quality, adaptable OER. Jeff also suggests that we should be investing time and effort into more direct support for students, building or extending the tools being built for teachers, and proactively including students in the resource evaluation and review process.
The success of Maine and others’ OER projects is not assured. Dwindling budgets will remain an ongoing challenge, and while there’s been some recognition of OER in policy initiatives such as the National Education Technology Plan, Jeff and Bob question whether current budget woes will derail national and state efforts for change. Teachers are increasingly overburdened, and the development and support for a hands-on process like Maine’s requires ongoing teacher participation, feedback, and practice.
In the long run, Jeff thinks that OER will challenge the educational content industry in much the same way that the music industry was challenged by, and eventually succumbed to, Apple’s “buy-whatever-you-want” model of music distribution, where users could break apart the album format and simply purchase the songs they wished. Jeff predicts that the textbook industry will be forced to break apart its offerings too, selling individual chapters or lessons where before it offered only packaged content to a captive education audience. And Jeff says the benefits apply to publishers too: “If they sell you Chapter 1 and it’s really good,” he said, “maybe you’ll want to buy the whole book.”