Last week the Wikimedia Foundation announced it is adopting an open access policy for research works created using foundation funds. According to their blog post, the new open access policy “will ensure that all research the Wikimedia Foundation supports through grants, equipment, or research collaboration is made widely accessible and reusable. Research, data, and code developed through these collaborations will be made available in Open Access venues and under a free license, in keeping with the Wikimedia Foundation’s mission to support free knowledge.”
The details of the open access policy can be found on the Wikimedia Foundation website. There will be an expectation that researchers receiving funds from the foundation will provide “unrestricted access to and reuse of all their research output…”. Published materials, proposals, and supporting materials will be covered under the open access policy. The policy states that media files must be made available under the Creative Commons Attribution-ShareAlike 3.0 license (the version currently used by Wikipedia), or any other free license. In addition, the policy requires that data be made available under an Open Definition-conformant license (with the CC0 Public Domain Dedication preferred), and that any source code be licensed under the GNU General Public License version 2.0 or any other Open Source Initiative-approved license.
The Wikimedia Foundation’s open access policy joins those of other institutions, including governments, philanthropic foundations, universities, and intergovernmental organizations, that have adopted policies to increase access to important and useful information and data for the public good. Thanks to Wikimedia for its continued leadership in support of free knowledge for all.
Philanthropic foundations fund the creation of scholarly research, education and training materials, and rich data with the public good in mind. Creative Commons has long advocated for foundations to add open license requirements to their grants. Releasing grant-funded content under permissive open licenses means that materials may be more easily shared and re-used by the public, and combined with other resources that are also published under open licenses.
Yesterday the Bill & Melinda Gates Foundation announced it is adopting an open access policy for grant-funded research. The policy “enables the unrestricted access and reuse of all peer-reviewed published research funded, in whole or in part, by the foundation, including any underlying data sets.” Grant funded research and data must be published under the Creative Commons Attribution 4.0 license (CC BY). The policy applies to all foundation program areas and takes effect January 1, 2015.
Here are more details from the Foundation’s Open Access Policy:
- Publications Are Discoverable and Accessible Online. Publications will be deposited in a specified repository(s) with proper tagging of metadata.
- Publication Will Be On “Open Access” Terms. All publications shall be published under the Creative Commons Attribution 4.0 International License (CC BY 4.0) or an equivalent license. This permits all users of the publication to copy and redistribute the material in any medium or format and to transform and build upon the material, for any purpose (including commercial), without further permission or fees being required.
- Foundation Will Pay Necessary Fees. The foundation will pay reasonable fees required by a publisher to effect publication on these terms.
- Publications Will Be Accessible and Open Immediately. All publications shall be available immediately upon their publication, without any embargo period. An embargo period is the period during which the publisher will require a subscription or the payment of a fee to gain access to the publication. We are, however, providing a transition period of up to two years from the effective date of the policy (or until January 1, 2017). During the transition period, the foundation will allow publications in journals that provide up to a 12-month embargo period.
- Data Underlying Published Research Results Will Be Accessible and Open Immediately. The foundation will require that data underlying the published research results be immediately accessible and open. This too is subject to the transition period and a 12-month embargo may be applied.
Trevor Mundel, President of Global Health at the foundation, said that Gates “put[s] a high priority not only on the research necessary to deliver the next important drug or vaccine, but also on the collection and sharing of data so other scientists and health experts can benefit from this knowledge.”
Congratulations to the Bill & Melinda Gates Foundation on adopting a default open licensing policy for its grant-funded research. This terrific announcement follows a similar move by the William and Flora Hewlett Foundation, which recently extended its CC BY licensing policy from its Open Educational Resources grants to apply foundation-wide to all project-based grant funds.
Regarding deposit and sharing of data, the Gates Foundation might consider permitting grantees to utilize the CC0 Public Domain Dedication, which allows authors to dedicate data to the public domain by waiving all rights to the data worldwide under copyright law. CC0 is widely used to provide barrier-free re-use to data.
We’ve updated the information we’ve been tracking on foundation intellectual property policies to reflect the new agreement from Gates, and continue to urge other philanthropic foundations to adopt open policies for grant-funded research and projects.
On Monday California Governor Jerry Brown signed into law AB 609, the California Taxpayer Access to Publicly Funded Research Act. The law requires that research articles created with funds from the California Department of Public Health be made publicly available in an online repository no later than 12 months after publication in a peer-reviewed journal. AB 609 is described as the first state-level law requiring free access to publicly funded research. It is similar to the federal National Institutes of Health Public Access Policy. The bill had been making its way through the California legislature since being introduced by Assemblyman Brian Nestande in February 2013. Nestande’s office announced the passage yesterday.
The law applies to grantees who receive research funds from the Department of Public Health, and those grantees are responsible for ensuring that any publishing or copyright agreements concerning manuscripts submitted to journals fully comply with AB 609. For an article accepted for publication in a peer-reviewed journal, the grantee must ensure that an electronic version of the peer-reviewed manuscript is available to the department and on an appropriate publicly accessible database approved by the department within 12 months of publication in the journal.
Congratulations to California, the leadership of Assemblyman Nestande, and the coalition of open access supporters who worked hard to make this law a reality.
Today marks the launch of the Open Access Button, a browser bookmark tool that allows users to report when they hit paywalled access to academic articles and discover open access versions of that research. The button was created by university students David Carroll and Joseph McArthur, and announced at the Berlin 11 Student and Early Stage Researcher Satellite Conference.
From the press release:
The Open Access Button is a browser-based tool that lets users track when they are denied access to research, then search for alternative access to the article. Each time a user encounters a paywall, he simply clicks the button in his bookmark bar, fills out an optional dialogue box, and his experience is added to a map alongside other users. Then, the user receives a link to search for free access to the article using resources such as Google Scholar. The Open Access Button initiative hopes to create a worldwide map showing the impact of denied access to research.
The creators have also indicated that they plan to release the data collected by the Open Access Button under CC0. Congratulations on the release of this useful tool.
We’re psyched to be a part of OKFestival: Open Knowledge in Action. The OKFestival takes place September 17-22, 2012 in Helsinki, Finland, and features “a series of hands-on workshops, talks, hackathons, meetings and sprints” exploring a variety of areas including open development, open cultural heritage, and gender and diversity in openness. You can buy tickets to the festival for any number of days until September 16 at http://okfestival.org/early-bird-okfest-tickets/. The OKFestival website has all the details, including the preliminary schedule.
We are particularly interested in and helped to shape the Open Research and Education topic stream, where we are leading an “Open Peer Learning” workshop on Wednesday (Sept 19) from 11:30am to 3:30pm. For the workshop the School of Open (co-led by Creative Commons and P2PU) is combining forces with the OKFN’s School of Data to explore, test and develop learning challenges around open tools and practices in data, research, and education. Participation in the workshop is free (you don’t even have to buy a festival ticket), but space is limited, so RSVP at: http://peerlearningworkshop.eventbrite.com/
The workshop will be held in an awesome space reserved for four HACK workshops.
For those of you able to come to Helsinki, look out for our CC staff reps, Jessica Coates and Timothy Vollmer, along with many of our European affiliates who will be holding a regional meeting on Day four of the fest.
For the rest of you, you can still participate in helping to build initiatives like the School of Open from wherever you are by visiting http://schoolofopen.org/ and signing up for the mailing lists there.
In November we wrote that the White House Office of Science and Technology Policy (OSTP) was soliciting comments on two related Requests for Information (RFI). One asked for feedback on how the federal government should manage public access to scholarly publications resulting from federal investments, and the other wanted input on public access to the digital data funded by federal tax dollars.
Creative Commons submitted a response to both RFIs. Below is a brief summary of the main points. Several other groups and individuals have submitted responses to OSTP, and all the comments will eventually be made available on the OSTP website.
- The public funds tens of billions of dollars in research each year. The federal government can support scientific innovation, productivity, and the efficient use of the taxpayer dollars it expends by instituting an open licensing policy.
- Scholarly articles created as a result of federally funded research should be released under full open access. Full open access policies provide the public with immediate, free-of-cost online access to federally funded research, with no restriction other than attribution to the source.
- The standard means for granting permission to the public aligned with full open access is through a Creative Commons Attribution (CC BY) license.
- If the federal government wants to maximize the impact of digital data resulting from federally funded scientific research, it should provide explicit, easy-to-understand information about the rights available to the public.
- The federal government should establish policies that ensure the public has cost-free, unimpeded access to the digital data resulting from federally funded scientific research. Access to this data should be made available as soon as possible, with due consideration to confidentiality and privacy issues, as well as the researchers’ need to receive credit and benefit from the work.
- The federal government can grant these permissions to the public by supporting policies whereby 1) data is made available by dedicating it to the public domain or 2) data is made available through a liberal license where at most downstream data users must give credit to the source of the data. CC offers tools such as the CC0 waiver and CC BY license in support of these goals.
Last month, CC participated in the annual congress of SERCI, the Society for Economic Research on Copyright Issues, which took place in Bilbao, Spain. The congress is intended to let researchers discuss their ongoing work with their peers and to build academic alliances for future research. We were able to participate only after a rigorous process in which our research outputs were refereed.
To give you an idea of the people we were fortunate to meet at SERCI, and how interesting and critically important their research is: attending was Nancy Gallini (University of British Columbia), who discussed the antitrust implications of copyright bundles, such as the ones arguably created by collecting societies. Also participating was Michael Yuan (Roger Williams University), who presented a paper written with Koji Domon (Waseda University) comparing an “indefinitely renewable copyright” system with the current one. Christian Handke (Erasmus University Rotterdam) spoke about copyright and its “Effects on Different Types of Innovation,” and Jin-Hyuk Kim (University of Cambridge) discussed his work on copyright levies. These participants were just part of a very long list of prominent researchers from all over the world, and the person orchestrating it all was Richard Watt of the University of Canterbury.
As you can probably tell from the titles of the papers, we were delighted to find at the congress academics deeply involved in research directly intended to shape global, international, and national copyright policy. That involvement, together with the quality of their work, is why they have the ear of policy makers, and it is why they are right up our alley!
As an example of how far these academics’ involvement in policy-making circles reaches, at SERCI we met economists who work for governmental authorities, including Benjamin Mitra-Kahn (UK Intellectual Property Office), Dimiter Gantchev (WIPO), and Raphael Solomon (Copyright Board of Canada). Benjamin spoke about the Hargreaves report, a review of intellectual property and growth initiated in November 2010 by the UK Prime Minister, David Cameron, and conducted by Prof. Ian Hargreaves. Dimiter Gantchev discussed the ongoing WIPO discussions about global copyright registries.
I believe I can objectively report that participants’ interest in Creative Commons was very high. Our own topic for discussion was, essentially, ourselves: our presentation covered our ongoing project on CC’s economic contribution (see especially the first, fourth, and fifth posts about the project, and of course the paper itself). Several good results came from our participation:
First, we were able to generate a lot of interest among this global community of researchers, and boy, did we cherish the attention! For instance, people asked how CC is affecting the copyright environment of its different communities, how exactly the process of applying a license works, how CC analyzes its users’ incentives, and so on.
Everybody who was there now knows what we do and how we support our different communities through enhanced sharing and transactional benefits. This newly acquired knowledge about CC is bound to be shared with researchers at the participants’ home institutions, percolating through the community of scholars and increasing our renown as an organization and as a platform.
The critically important implication is that when these scholars voice their opinions in policy-making circles, it is now highly probable that they will offer CC as a potential solution to problems that hinder activity within our target communities: creators, scientists, educators, governments, NGOs, individual data contributors, and others.
Second, we received substantial advice on the project we came to present. For instance, we discussed with the other participants our decision to look at our contribution in each CC community separately and only then at cross-influences, our understanding of our users’ incentives and our ability to substantially reduce transaction costs, and our suggested formulation of how the sharing and collaboration we promote benefits welfare and individuals.
Third, we formed bonds with some new studious friends, and CC now has more colleagues within the research community. This means we can count on collaborations for the CC-oriented research we are conducting, and we can also expect others to initiate and conduct independent research on CC themes.
To make a long story short, SERCI was everything we expected and more, and we are already looking forward to implementing what we’ve learned, starting our cooperation with the scholars we met, and making plans for our next such event!
Note: The SERCI congress program linked above is not yet up to date. SERCI tells us it will be updated in the coming days to include the names of all the participants.
Last time on the CC blog I shared my thoughts about evaluating CC’s contribution to collaboration and sharing. I made the point there that this impact is distinctly challenging to estimate. My wholehearted belief that that analysis was the pinnacle of prospective hardships explains why, when I first came to engage with CC’s contribution to the field of art, I felt lighthearted. After all, most of the characteristics that made sharing and collaboration such a tough domain to gauge are not properties of art. So, I can begin by reporting that it was definitely light-minded to be lighthearted; the contribution to art is a completely independent Pandora’s box.
I hope at least this last point will become clearer by reading this post, but my aim here is actually to describe my initial attempts to tackle this distinct quandary. As with my previous posts, by unabashedly exposing my very modest attempts, we at CC hope to elicit a response and to engage you all in this important project.
Down to Business: CC’s Contribution to Art
Art encompasses activities that are traditionally divided into distinct genres. However, online creation has challenged the boundaries of those genres as it has provided an environment which made it easy for creators to put their creative efforts into works that cannot be conveniently categorized under one genre or even two, but rather reflect a hodgepodge of genres. Sometimes these acts of creativity coalesce into new genres, and sometimes they remain unique instances. The measurement of the contribution of CC needs to take account of all of these cases, and cannot be content with estimating the contribution to each traditional genre.
New genres as novel types of artistic endeavors have an independent value of their own which ought to be noted and measured separately. There are several reasons for this:
1. The evaluation of the novelty of these new works is altogether different than that of works of traditional genres.
2. These works usually involve different types of creators than traditional works (e.g., on the lay-professional scale) and therefore represent a different type of contribution to art.
3. Passive consumers and future contributors will necessarily interact differently with new types of works than with traditional ones, which means their perspective requires a distinct analysis.
4. The contribution of these enterprises to other CC value fields (e.g., to collaboration and sharing) is different and should be distinguished and measured accordingly.
5. From a pragmatic perspective, the estimation of new artistic enterprises obviously requires new metrics.
6. Lastly, and most importantly, CC very plausibly contributes in a distinct way to new enterprises as opposed to existing ones. For example, because those new works are created in much more of a copyright limbo, CC’s ability to contribute specifically to their effective production and consumption, as well as more broadly to how the enterprise is framed within the IP realm, is unique.
Having said all that, the contribution of art to welfare is in itself very hard to estimate, even before delving into the effective measurement of sub-genres. As a result, not many economists have tried to come up with analytical frameworks to gauge art and its contribution. In fact, there persists a form of prima facie acceptance that art is doubly valuable: for the outputs it produces, and as a human enterprise. The trouble with evaluation touches both: not all of art’s outputs are market outputs, and even when they are, they usually embody non-monetary value in addition to their monetary one; the abstract contribution of “art as human enterprise” is an even tougher nut to crack.
However, although CC likewise accepts that art is valuable, for the purpose of its value analysis it must subscribe to some theoretical framework that analyzes the contribution of art. Absent such a framework, it would be impossible to assess any form of incremental contribution. As for possible models, some writers have analyzed the quality of artistic products as hinging strongly on how innovative they are. In other words, a valuable or good artwork is one that is avant-garde in terms of technique or artistic expression (see David W. Galenson’s Analyzing Artistic Innovation). From a slightly different perspective, some ascribe an artwork’s contribution to the extent to which it promotes innovation in other fields, on the basis that art is unique in cultivating creativity, originality, and inventiveness (for example, Xavier Castañer and Lorenzo Campos’s The Determinants of Artistic Innovation: Bringing in the Role of Organizations, 26 Journal of Cultural Economics 29-52 (2002)).
If we are ready to accept this last paradigm, we can extend to CC’s contribution to art the full breadth of theories that analyze the capacity of innovation to enhance welfare, or the value of innovation in art.
Yet, putting aside the multiple benefits of accepting these paradigms, several difficulties arise from the imperfect correspondence of these frameworks to art. To demonstrate, even the underlying Schumpeterian concept of creative destruction does not apply to art, as art tends to incorporate all prior expression within it as it evolves. Therefore, any analysis that discusses the contribution of art in innovation terms would require substantial theoretical accommodation.
The innovation paradigms of the second category (those considering art itself a contributor to innovation) are less concerned with the level or nature of the artistic outputs themselves, and mostly emphasize the very existence of novel outputs as inherently beneficial. In other words, they would still need to be complemented with other theories recognizing the direct importance of the artistic enterprise.
This is why, in addition to developing novelty measures and to understanding how CC contributes institutionally to innovation, the project continues under the assumption that, all else being equal, more art is better, more art contributors are better, more consumption of art is better, better art is better, and greater quality in creativity and consumption is better. This assumption plays out alongside the presumption that more variability in art is better, a parameter directly related to innovation in art. CC therefore sets out to measure its impact on those values so as to provide the necessary fodder for the analysis of its contribution. Examples follow.
Quantity includes all the measures that are based on counting, among which are the following:
1. Tracking the number of CC artworks being produced. Obviously, our work would not end once we come up with this number; an analysis would have to ensue, which may be extremely complicated. This is because, even with all other things being equal, more artworks are not invariably a welfare improvement; for example, the additional clamor that more art produces may mean less welfare (note that this pertains only to the detriments of overcrowding, not to other claims that touch upon quality, which needs to be accounted for too).
Well, the only thing I can say about that is that it is these moments which make me grateful for taking this one step at a time.
2. The number of CC artists. Again, as with the number of works, this datum does not reveal the entire story. One claim that would be influential in the analysis is that artistic production is optimal when it is the exclusive realm of a thin stratum of artists. Since CC operates under the contrary conviction that more engagement in artistic pursuits is better, and thus tries to increase it without discretion, it needs to show that the outcome it promotes is superior in terms of the contribution to welfare.
The latter claim suggests that this parameter should be broken down by artist profile. To the extent possible, it would be beneficial to distinguish between the added number of lay and expert CC artists, between heavy and light contributors, and between CC artists who create only CC works and those who also use legal frameworks other than CC.
3. The number of new types of CC artworks that are being generated.
4. The use of assistive applications for CC works: (1) art editing applications (technique), (2) art distribution applications (distribution), (3) search applications for CC art, and (4) curation and exhibition activity for CC works.
Obviously, to allow an analysis that considers CC’s dynamic contribution, it is also necessary to gather data on temporal trends.
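As a minimal sketch of what such temporal tracking might look like (all counts below are invented for illustration, not real CC data), yearly totals of licensed works can be turned into year-over-year growth rates:

```python
# Hypothetical yearly counts of CC-licensed artworks (illustrative only,
# not real CC data).
counts = {2008: 12000, 2009: 15600, 2010: 21000, 2011: 29400}

def yearly_growth(counts):
    """Return year-over-year growth rates from a {year: count} mapping."""
    years = sorted(counts)
    return {
        year: (counts[year] - counts[prev]) / counts[prev]
        for prev, year in zip(years, years[1:])
    }

growth = yearly_growth(counts)
for year, rate in sorted(growth.items()):
    print(f"{year}: {rate:+.1%}")
```

The same shape of computation would apply to any of the counting measures above (works, artists, new genres, assistive applications), which is what makes gathering the raw temporal series the essential first step.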
Internal & external quality parameters
1. The progression of the technique employed in CC works, per art genre and per function, such as the creation of the new contribution and the fusing together of existing artistic resources into the new creation.
2. The progression of the inherent quality of the artistic expression of CC works. This is a very complex attribute to measure, because it requires the perspective of time, or at least the ability to estimate the overall cultural weight of the work, which in turn requires multi-term adjustment.
1. Value as a resource / use availability: the progress in the outward impression created by the artwork, divided into (1) lay-artist impression and (2) expert-artist impression. This quality measure concerns the ability of others to extract benefits from the artwork, and can be estimated using use as a proxy: the extent to which the work is used as a resource for other works.
2. Consumption readiness / ease of access. This parameter measures the accessibility of the work for passive consumption. This again requires analysis tying the data back to the measure of quality: it is possible, for example, that degraded or lower-quality art is in general more accessible than art of better quality.
Quality measurement, extra challenges
Don’t tell me you thought that was it? Up to this point I have calmly been suggesting quality measures without offering a clue as to how to create the actual quality scale for each. So how do we begin measuring quality in art? Thankfully, we are not the first to approach this question. Cultural economists have dealt with this issue many times, particularly in relation to the proper government subsidy for non-market goods such as cultural products. (See, e.g., Eric Thompson et al.’s Valuing the Arts: A Contingent Valuation Approach, 26 Journal of Cultural Economics, and Douglas S. Noonan’s Contingent Valuation and Cultural Resources: A Meta-Analytic Review of the Literature, 27 Journal of Cultural Economics 159-176 (2003).)
What these scholars proposed was to go from household to household and use a method called contingent valuation to assess the extent to which people in general value a particular cultural service. The contingent valuation method (CVM) employs surveys to gather stated-preference information, and from those responses it derives a monetary value called the WTP, the willingness to pay.
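A back-of-the-envelope sketch of how stated-preference responses translate into an aggregate value may help; the survey figures and population size here are invented purely for illustration:

```python
# Hypothetical stated willingness-to-pay (WTP) responses from a contingent
# valuation survey, in dollars per household per year (invented figures).
wtp_responses = [0, 5, 5, 10, 12, 20, 0, 8, 15, 25]
n_households = 1_000_000  # hypothetical population of households

# Mean stated WTP, scaled to the population: the standard crude aggregation.
mean_wtp = sum(wtp_responses) / len(wtp_responses)
aggregate_value = mean_wtp * n_households

print(f"Mean WTP per household: ${mean_wtp:.2f}")
print(f"Aggregate annual value estimate: ${aggregate_value:,.0f}")
```

Real CVM studies add careful survey design and bias corrections on top of this arithmetic, but the core move, from stated preferences to a population-level monetary figure, is exactly this simple.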
So these scholars begin with price, an arguably satisfactory proxy for the quality of an art product when there is a market for it. Yet when exploring CC’s predominant fields of activity we see almost no outputs with a dollar value. Therefore, although CC can safely rely on CVM as an established technique in cultural economics, it remains debatable whether CVM can capture the full value generated by cultural goods, including art. For one, art is classed as an experiential or addictive good, for which demand is cumulative and hence dynamically unstable, whereas WTP asks people to evaluate it even if they do not consume it at all, as though it were a commodity like a street lamp. A solution might be to turn to expert appraisal, and indeed, when we reach the stage of detailing these metrics, we expect to rely on parameters used by experts to appraise different forms of art.
Two, there is a very strong claim that art has intrinsic value, as a public good, that is unappraisable by the individual by way of potential consumption estimation. (David Throsby thus differentiates between economic and cultural value; see his Determining the Value of Cultural Goods: How Much (or How Little) Does Contingent Valuation Tell Us?, 27 Journal of Cultural Economics 275-285 (2003).)
This issue cannot be solved using traditional economic tools, which may mean they should be set aside here. Instead, we ought to identify measurable characteristics of cultural goods that give rise to their cultural value: for example, “their aesthetic properties, their spiritual significance, their role as purveyors of symbolic meaning, their historic importance, their significance in influencing artistic trends, their authenticity, their integrity, their uniqueness,” and so on. This is partly why, in order to quantify the contribution to welfare in all its facets, we must content ourselves, at least to some extent, with simplified measures pertaining to the quantity of production, to engagement, and to the richness of the field, as we are beginning to do here, in addition to those parts of the artistic enterprise that can be economically evaluated using methods such as CVM.
CC Art Variability Measures, Internal, External
1. (direct measures) Novelty level, conceptual and experimental separately measured, of CC works. (1) for each new genre (2) within every existing genre.
2. (indirect measures) The number of new relevant applications which are used for CC works: (1) art editing applications (Technique) (2) art distribution applications (Distribution) (3) search applications (for CC art) (4) Curation activity, exhibition (CC work).
Control Measures (confounders)
In order to measure the pure impact of CC, it is necessary to clear out influences unrelated to CC that may muddy our measures. The following metrics are directed toward this purpose:
1. Changes in the production of non-CC art. This parameter will be used to gauge changes in artistic activity which may reflect on CC art too but have nothing to do with any activity led by CC. While collecting this data, it is important to distinguish between non-CC art licensed under an open framework and non-CC art relying on proprietary frameworks, because part of the growth of comparable frameworks might be attributable to CC’s activity under the 3rd pillar of contribution, which might further complicate the analysis.
2. Growth in the consumption of non-CC art. The aim here is to isolate CC’s impact with respect to consumption.
3. Expansion of art markets.
4. Growth in the overall number of artists (measuring unrelated entrance to the specific labor market).
5. Evolution of general technical platforms for art creation, distribution, and consumption.
6. Government grants for art (non-CC; easy separation: the government will usually define the license to be used).
That’s all folks.
You have probably already noticed that through this series of posts we are proceeding along a trend from general high-level questions to the more practical ones of measurement and evaluation. So, it shouldn’t surprise you that our next nuts-and-bolts step is to start touring the different fields in which CC is active and analyzing its separate contribution to each.
Keep in mind, though, one caveat: even once we are done with the field-by-field exploration, we will still need to think of the “overflow” contribution of CC. In other words, we will still have to measure its multidisciplinary contribution – i.e., the contribution that is made to more than one field at once and the contribution which fashions new fields.
In part, anticipating the future estimation of this “overflow” contribution is the reason why I decided to begin this run by describing our preliminary thoughts about CC’s contribution to collaboration and sharing. Now, because this is so obvious, I probably don’t need to mention it, but I will: collaboration and sharing is not your traditional field of operation, and so it might have been infinitely easier to begin with art or one of its sub-genres, or even with OER, basic science, or traditional instances of user-generated content. This is because the latter are considered true-to-life fields of human enterprise, and as such have (some) ready-made measures for evaluation. Collaboration and sharing, on the other hand, are considered methods of operation and not fields in and of themselves. This means that, as a method, their independent contribution to welfare is almost never considered. And so, not only is there nobody to learn from when it comes to evaluating CC’s enhancement of sharing and collaboration, but the merits of this contribution are almost never acknowledged, not even in the abstract way in which we have become accustomed to considering CC’s contribution.
Still, abstractly, we all understand that collaboration and sharing have considerable independent benefits! This is why their encouragement is a CC goal.
And to break it down a little, hand-wavingly: as methods for creation, collaboration and sharing forge new ties and promote communities by firming up existing ones; they expand creation and groups of creators; they allow creation to evolve through optimal reliance on the shared creativity of the group, and consumers to freely take in those works, in increasing numbers and in greater capacity. To summarize, these are methods that clearly extend the accumulated value of the single works manyfold. One way to think of the extended contribution of these methods is as an energizing force that promotes creativity as a whole, empowering each work created through a collaborative process and allowing it to contribute in a way that goes far beyond its direct value.
End of hymn to collaboration and sharing.
Ok, so I hope you agree that referring to sharing and collaboration as a separate area is not merely the right thing to do because they are an independent realm of contribution, but also the practical thing to do for the purposes of gauging CC’s contribution. As mentioned in the second paragraph of this post, CC’s activity creates innovative enterprises across fields and, as time goes by, even generates novel ones. If we don’t recognize the energy that allows that to happen – collaboration & sharing – we will have no way of accounting for this budding activity in our evaluation. After all, these processes are at different stages, and they do not yet have sound gauges to estimate their contribution, even once they fully materialize. On the other hand, if we recognize that sharing and collaboration is a method with its own measures, assessing its effectiveness in different circumstances, then at least we shall have a way of referring to this obviously beneficial activity. In other words, measuring the expansion of collaborative energy is key to our ability to foresee and measure completely new creative enterprises, which cannot be accounted for by looking at the trends the different fields are undergoing.
So now that we are all convinced, I am going to try and get to it.
For the sake of maintaining order, I will repeat what we are trying to do: under the collaboration & sharing rubric, what is evaluated is the extent to which CC promotes creative communities and collaborative social capacity. Of course, one constant concern while considering the proper metrics is to avoid double-counting: since social collaboration is pertinent to each field, the value that stems from collaborative energy should be separated from the specific contribution to individual cases of creativity. An important across-the-board distinction is between vertical and horizontal collaboration, which has to do with time and intention. Horizontal collaboration refers to mutual, close-to-concurrent creation of the work, where the participants in the creative act all intend to create a joint output. Vertical collaboration, on the other hand, covers cases where the collaboration amounts to reliance on creative resources that were produced in separate processes for the creation of a new work. The importance of distinguishing between the two modes is that they are expected to create different types of works, involve different types of collaborators, and generate different amounts of collaborative energy. This all means that they differ in their contribution.
Collaboration & sharing, as enhanced by CC’s 3 pillars of contribution
Tool-by-tool, use-by-use, or the transactional contributions:
- Vertical contribution: (a) from the perspective of the original creator, the availability and choice of CC tools facilitate downstream uses and grant the creator the necessary certainty with respect to future uses; (b) from the perspective of downstream creators and users, the tools allow the produced work itself to be used as a resource very simply and in a way that can be relied upon.
- Horizontal contribution is assisted by reliance on tools that coordinate usage according to the active participants’ expectations.
The operation of CC as an institution:
- Reassures collaborating actors that the licenses which are being relied upon are interoperable and that efforts of extended interoperability and standardization will be ongoing.
- Reassures collaborating actors that the license choice will be continuously supported and will only gain traction (stability).
- Stabilizes, guarantees, and clarifies the licenses’ legal meaning and ensures that all actors’ (a) reliance interests and (b) expectation interests are protected.
- Stabilizes, guarantees, and clarifies the licenses’ social meaning (for partaking actors and future actors) and ensures that all actors’ (a) reliance interests are protected, (b) expectation interests are protected, and (c) reputational interests are promoted.
- Reassures collaborating actors of the existence and proliferation of the CC supporting tools. For example, the search tools for CC works.
- Allows for collaboration to happen between actors of distinct geographical locations and across jurisdictions.
The 3rd pillar’s direct contribution to collaboration:
- CC weighs in on the normative discussion to highlight the merit of sharing and collaborative enterprises and their importance to the general welfare, countering contrary efforts by other institutions.
- Just for the record: the vast positive externalities which the 3rd pillar produces do not elude us. Evidently, the benefits produced here carry over to every activity pertaining to collaboration. Figuring out how to discern the value ultimately induced by CC alone is a challenge which awaits us.
Measuring the Contribution to Collaboration – Quality, Quantity, Variability
As argued earlier, the general importance of social collaboration is found in its ability to charge the existing fields of creative activity with the required energy that would ensure that their measures of quality, quantity and variability improve.
When it comes to quantity, more collaboration translates into the following: (1) more participants in single creative processes, (2) more simultaneous cooperation in a single creative process, and (3) more intake of shared works. From the internal quality perspective, enhanced collaboration means that the cultivation of the creative spark emitted by each collaborator is rendered more efficacious. From the external quality perspective, a collaborative work created in an environment that appreciates collaboration will be more useful to the consumers of the work, because they will see it as a potential resource. And when it comes to the potential contribution to variability, that translates into new collaborative efforts across fields and within fields, as well as completely novel activities and field-generative ones.
Proposed Measures (including confounders)
So now I am about to propose a set of metrics, aimed towards measuring CC’s contribution to collaboration under the three pillars, and by quantity, quality and variability. Whatever you do with it, don’t treat this list as exhaustive. I am merely trying to demonstrate our general direction, and to maybe instigate some reaction (for example, from YOU):
- Number of CC’d collaborative projects of all types. (account for cross-field cooperation)
- Number of entities involved in each CC’d collaborative project (a) Separately: People, organizations, groups (b) Numbers, percentages
- Type of collaborators involved in each CC’d collaborative project: (a) lay/professional; (b) professional: by type, numbers, involvement level (size), geographic distribution (actual location of contributors and of users)
- Level of cooperation or the depth and breadth of the tree-like infrastructure – i.e. measure the number of reuses or reincarnations of a given CC resource.
- Novelty level of the CC’d enterprise, measured on a newness scale
- Consumption of each CC’d work: passive use (a) Accessibility measures (b) Consumption levels
- Efficiency increase in the use of the CC’d work (productive use: use as a resource)
- New collaborative applications; addition of new auxiliary tools for CC’d collaboration (and increased use thereof)
- New collaborative enterprises identification tools; search tools, etc. (and increased use thereof)
A breakdown by CC tool is a refinement not listed here, but it is clearly relevant to each of these metrics.
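To make the “depth and breadth of the tree-like infrastructure” metric a little more concrete, here is a minimal sketch of how it could be computed, assuming reuse relations have already been harvested as parent/derivative pairs. All work names and numbers here are invented for illustration; this is a sketch, not a definitive implementation.

```python
from collections import defaultdict

# Hypothetical reuse records: (parent_work, derived_work) pairs, e.g.
# harvested from license and attribution metadata.
reuses = [
    ("photo-1", "remix-a"), ("photo-1", "remix-b"),
    ("remix-a", "remix-c"), ("remix-c", "remix-d"),
]

def reuse_depth_and_breadth(root, edges):
    """Return (depth, breadth) of the reuse tree rooted at `root`:
    depth = the longest chain of successive reuses,
    breadth = the largest number of works in any single generation."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)
    depth, breadth = 0, 1
    level = [root]
    while True:
        next_level = [c for node in level for c in children[node]]
        if not next_level:
            return depth, breadth
        depth += 1
        breadth = max(breadth, len(next_level))
        level = next_level

print(reuse_depth_and_breadth("photo-1", reuses))  # → (3, 2)
```

A level-by-level walk like this keeps the two numbers separate: a long thin chain (high depth) and a one-generation burst of derivatives (high breadth) arguably reflect different kinds of collaborative energy.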
So far so good. But even a comprehensive list of these metrics will not be the end of our troubles, because we need to control for non-CC effects on collaboration (confounders). For example, parameters like the general IP environment, legal and social, and the activity of other actors operating in the same space as CC should be carefully discerned. The way to go about it would be to use metrics that gauge external influence and thus control for impacts external to CC. Here is an initial list:
- Collaborative projects based on other platforms – across disciplines
- Creative projects that are not collaborative – across disciplines
- IP Lawsuits based on authorship claims
- Legal regime changes that pertain to collaboration
- Technical platforms for collaboration (dynamic changes)
- (other) Legal platforms for collaboration (dynamic changes)
- Government grants for collaborative enterprises (easy separation: government will usually define the license to be used)
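As a rough illustration of how such control series could be used, here is a minimal difference-in-differences style sketch that subtracts the growth of a non-CC baseline from the growth of a CC series. All counts are invented, and a real analysis would of course need a far more careful design (comparable groups, more periods, significance testing):

```python
# Hypothetical yearly counts: collaborative projects using CC licenses
# vs. a non-CC control group (all numbers invented for illustration).
cc_counts      = {2009: 120, 2010: 180, 2011: 270}
control_counts = {2009: 400, 2010: 440, 2011: 480}

def growth_rate(series, y0, y1):
    """Relative growth of a count series between two years."""
    return (series[y1] - series[y0]) / series[y0]

def excess_growth(treated, control, y0, y1):
    """Difference-in-differences style estimate: growth of the CC series
    beyond the growth of the non-CC baseline over the same period."""
    return growth_rate(treated, y0, y1) - growth_rate(control, y0, y1)

print(round(excess_growth(cc_counts, control_counts, 2009, 2011), 4))  # 1.05
```

The point of the subtraction is exactly the one made above: whatever part of the growth also shows up in the non-CC baseline is attributed to the general environment, not to CC.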
In my previous post I spent quite a few words trying to explain where I believe CC should and shouldn’t venture when looking for the proper metrics that will efficaciously represent its contribution to welfare. The bottom line was that the ultimate decision seems to be to look to the direct contribution of CC to quality, quantity, and variability measures. I now intend to elaborate a little on this approach.
The first thing that is important to mention is that Quantity, Quality and Variability ought to be measured across the different fields in which CC operates, for example, in art, in OER, in UGC. Second, that QQ&V should be measured under the different value pillars (transactional, institutional, norm) and third, as they pertain to both productive and consumptive use. By productive use, I mean the sense by which CC complements the quality, the quantity and the variability of active, creative, collaborative endeavors and by consumptive use I refer to the sense in which CC promotes passive use of existing creative enterprises, by expanding access to them and by increasing the efficiency of consumption.
The contribution to quantity is probably the easiest to explain. It means just one thing by way of method: counting. How many new works are being created thanks to CC’s activity (productive)? How many additional passive uses are there (consumptive)? And how many distinct new creators, collaborators, and consumers are added? These can then be naturally specified by pillar of contribution (again: transactional, institutional, norm), by field of activity, across fields, by use type, and by user type.
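A toy sketch of this counting exercise, assuming new works have already been tagged by field and pillar. The records and labels below are invented for illustration, not an established CC taxonomy:

```python
from collections import Counter

# Hypothetical records of new CC'd works, each tagged with a field of
# activity and the pillar of contribution it is attributed to.
works = [
    {"field": "art", "pillar": "transactional"},
    {"field": "art", "pillar": "institutional"},
    {"field": "OER", "pillar": "transactional"},
    {"field": "art", "pillar": "transactional"},
]

# Specify the raw count along each dimension.
by_field  = Counter(w["field"] for w in works)
by_pillar = Counter(w["pillar"] for w in works)
print(by_field)   # Counter({'art': 3, 'OER': 1})
print(by_pillar)  # Counter({'transactional': 3, 'institutional': 1})
```

Trivial as it looks, this is the whole method for the quantity measure: every refinement mentioned in the text (use type, user type, cross-field) is just another tag on the records and another `Counter` over it.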
Quality has both an internal and an external meaning. By the internal quality of a work we mean the work’s level of excellence in terms of its own field, which in itself is a complex measure that judges the value of the work itself. The external quality measure refers to the level of the work’s contribution to the promotion of a collaborative environment, and is therefore tied both to the productive process of its creation and to its consumptive uses. In terms of purely consumptive uses, quality refers to the advantage which consumers are able to extract from the work itself.
The variability parameter is set to measure internal and external novelty as it is induced by CC. Internal variability means the creation of new types of works within a field, whereas external variability pertains to the dynamics of the creation of new fields of activity. The aspect of variability is very much related to the innovation literature that often analyzes the status and dynamics of growth in terms of the accumulation of new products.
But our job doesn’t end here, because it is not just the changes that CC induces in the measures of quality, quantity, and variability that ought to be calculated; in fact, crucial to the value assessment is the consideration of the rate of change, since the rapidity of value accumulation is in itself a substantial aspect of the contribution level.
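One simple way to operationalize the rate of change is to compute period-over-period growth rates, and a compound annual growth rate, for any of the count metrics. A minimal sketch with invented numbers:

```python
# Hypothetical metric series (invented numbers): new CC'd works per year.
new_works = [1000, 1300, 1690, 2197]

def yearly_change_rates(series):
    """Period-over-period growth rates for a metric series."""
    return [(b - a) / a for a, b in zip(series, series[1:])]

def cagr(series):
    """Compound annual growth rate over the whole series."""
    years = len(series) - 1
    return (series[-1] / series[0]) ** (1 / years) - 1

print(yearly_change_rates(new_works))  # each step is 30% growth
print(round(cagr(new_works), 3))       # 0.3
```

Tracking these rates alongside the raw counts is what lets us say not only that value is accumulating, but how rapidly.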
Now, the next immediate step ought to be defining metrics, per field, for each of these three attributes, quantity, quality and variability. But, as we’ve become accustomed to, there are still several complications that need to be dealt with:
1. CC is operating in numerous fields. In order to be optimally effective, it must rely on a cost/benefit analysis which will suggest how best to divide its own resources.
2. CC is a comprehensive framework which creates value spillover effects across fields.
3. CC’s fields of operation are not clear-cut fields, in the sense that some works are hard to categorize. For example, basic science and OER are far from being distinct fields, and of course user-generated content comes in all “flavors”. And since CC sets out to promote these interdisciplinary collaborations, they are essential and weighty parts of the value it creates.
4. Often, CC’s contribution will be in creating altogether new fields of activity. It is important not to lose track of those by putting too much emphasis on inter-field benefits.
Now it seems to me that the pitfalls introduced by #2-#4 may be bridged by considering the contribution to collaboration, or to the mode of creation and consumption that is more heavily based on sharing. Or at least that’s the belief I choose to stick to. Firmly. Yet pitfall #1 really seems to require a second-order estimation of the contribution to welfare of the actual field, or a predefined preference ordering which relies on other underpinnings. At any rate, it exceeds the scope of the estimation project.
So, for example, we can think of the contribution to collaboration using these attributes. When it comes to quantity, what counts is more participants in each CC’d creative enterprise, in comparison to non-CC’d enterprises, as well as wider distribution of the CC’d work when it comes to the consumptive measures. From the internal quality perspective, what is judged is the level of the cooperation, according to the promotion of the creative spark through the shared mode of creation. From the external quality perspective, a high-quality collaborative work would be replicated more than others and create collaborative energy which will carry over to other enterprises. When it comes to the potential contribution to variability, what is judged is the extent to which more types of collaborative efforts are being fashioned, both within and across fields.
CC’s contribution to art can serve us as yet another example. If we think in quantity terms, we can measure the number of new CC creations, the number of new CC artists, the number of new types of artists, and the extent of distribution of CC creations, from the perspective of consumptive use. When it comes to quality, the level of the relative internal excellence of the CC’d artwork can be measured, as well as its external impact on other creative sites. Quality measures for consumption will encompass its effectiveness, in terms of the impression that it is capable of making on consumers. Variability, in art, will be set to measure the relative innovativeness of the CC artistic enterprises, both within existing genres and in the cultivation of new ones.
And I could go on and on to CC open education enterprises, to CC’d basic science enterprises, as well as to distinct cases of CC’d user-generated content, but the point is probably clear by now. The basic idea is that these measurements are all the output we need, because having calculated them, we ought to be able to go on to scrutinizing CC’s contribution on any desired level. For example, once we know how art is affected by CC, through the measures of quantity, quality, and variability, anybody could translate that into how art’s contribution to welfare is boosted.
Does that all make sense to you? Let us know, we’ll appreciate it.