Dispatches from Wikimania: Values for Shaping AI Towards a Better Internet

Isolated Araneiform Topography, from UAHiRISE Collection on Flickr. Public Domain Mark.

AI is deeply connected to networked digital technologies — from the bazillions of works harvested from the internet to train AI, to the many ways AI shapes our online experience: generative content, recommendation algorithms, simultaneous translation. Creative Commons engaged participants at Wikimania on August 15, 2023 to shape how AI fits into the people-powered policy agenda of the Movement for a Better Internet.

The session at Wikimania was one of a series of community consultations hosted by Creative Commons in 2023. 

The goal of this session was to brainstorm and prioritize the challenges that AI brings to the public interest commons, and to imagine ways we can meet those challenges. To better understand participant perspectives, we used Pol.is, a “real-time survey system that helps identify the different ways a large group of people think about a divisive or complicated issue.” This system is a powerful way to aggregate and understand people’s opinions through written expression and voting.

Nate Angell and I both joined the conference virtually, two talking heads on a screen, while the majority of approximately 30 participants joined in-person in Singapore. After introducing the Movement for a Better Internet and asking folks to briefly introduce themselves, we immediately started our first Pol.is with the question: “What are your concerns about AI?” If you’re curious, you can pause here, and try out Pol.is for yourself. 

In Pol.is, participants first voted on a set of ten seed statements we had written based on previous community conversations, then added their own concern statements, and finally voted on the concern statements written by their peers in the room. On each statement, participants could choose “Agree,” “Disagree,” or “Unsure.” Overall, 31 people voted and 532 votes were cast (an average of 17.16 votes per person).

96% of participants agreed that “Verification of accuracy, truthfulness and provenance of AI-produced content is difficult.” This statement drove the most consensus among all participants in the group. Consensus indicates that people from different opinion groups have a common position, or in other words, people who do not usually agree with each other agree on this topic. The other two most consensus-driving concerns were: “Large-scale use of AI may have a negative impact on the environment” and “I suspect a push for greater copyright control would eventually be appropriated and exploited by big companies. E.g. Apple and privacy.”  

The most divisive statement was: “AI is developing too fast and its impact is unclear.” A divisive statement is one where opinions differ the most, not one with the most disagreement — widespread disagreement is itself a form of consensus. The other three most divisive statements were also the most unclear, with more than 30% voting “Unsure”: “AI can negatively impact the education of students,” “AI can use an artist’s work without explicit permission or knowledge,” and “AI and the companies behind them steal human labor without credit and without pay.”
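The distinction between consensus and divisiveness can be made concrete with a toy calculation. Pol.is itself clusters voters into opinion groups and measures agreement across those groups, so the sketch below is not its actual algorithm — just a minimal illustration, with hypothetical vote data, of why a lopsided vote (in either direction) counts as consensus while an even split counts as divisive:

```python
from collections import Counter

def vote_stats(votes):
    """votes: list of 'agree' | 'disagree' | 'unsure' for one statement."""
    counts = Counter(votes)
    total = len(votes)
    agree = counts["agree"] / total
    disagree = counts["disagree"] / total
    # Toy divisiveness score: close to 1 when agree/disagree are balanced,
    # close to 0 when the vote is lopsided either way.
    divisiveness = 1 - abs(agree - disagree)
    return {"agree": round(agree, 2),
            "disagree": round(disagree, 2),
            "divisiveness": round(divisiveness, 2)}

# Hypothetical data: a near-unanimous statement vs. an evenly split one.
consensus = vote_stats(["agree"] * 29 + ["disagree"] * 1)
divisive = vote_stats(["agree"] * 15 + ["disagree"] * 15)
```

Here `consensus` scores low on divisiveness despite (or rather because of) its 97% agreement, while the 50/50 `divisive` statement scores the maximum.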

Back in our workshop room, we viewed the data report live, which was somewhat difficult due to limitations in text size. Participants in the room elaborated on their concerns, highlighting why they agreed or disagreed on particular points.

In the second half of the workshop, we asked participants to imagine ways we can meet one particular challenge. We focused our discussion on the only statement with 100% agreement: “AI makes it easier to create disinformation at scale.” 

Participants were asked to write down their ideas in a shared document, and stand up to share their thoughts in front of the audience. The three major buckets for innovation in this space were education, technical advancement, and cultural advocacy. In education, participants brought up the need for critical thinking education to reinforce the ability to identify reliable sources and AI tools education to allow more people to understand how misinformation is created. Technical projects included developing AI to tackle disinformation, building a framework for evaluating AI tools during development, and creating better monitoring systems for misinformation. Participants also highlighted the need for cultural advocacy, from building the culture of citations and human-generated reference work to policy advocacy to maintain the openness of the commons. 

Creative Commons will continue community consultations with the Open Future Foundation in the next month. Sign up and learn more here.

Posted 07 February 2024
