Discussion: The Knowledge Commons (April 2019)

May 28, 2019

Kelsey Breseman, Data Together

Our theme for April 2019 was Knowledge Commons. The group discussed the governance of commons more broadly last year. This time around, we chose to focus on how knowledge, information, and digital data might be treated and governed as common pool resources. Key themes that emerged in our discussion:

  • Defining a knowledge commons through communities’ practice
  • The cost of sharing digital informational resources - is it really zero?
  • The problem of underutilizing knowledge commons: incentives and participation
  • Strategies for knowledge “commoning”: sustainability, technical vs social solutions, and trust and accountability

Our discussion centers on the “commons”: as Ostrom defines it, a complex ecosystem; a resource shared by a group of people and subject to social dilemmas (Ostrom, 3). Naturally, our discussion opens with the question of definitions.


Aerial photo of a circular park surrounded by pavement

Photo by June Dalton on Unsplash


What is a commons? How do we distinguish a knowledge commons?

ERIC: Ostrom talks a lot about natural resource commons– things like fisheries, or forests, or pastures. But we wanted to more directly engage with that as it relates to the digital realm: data, knowledge, technical infrastructures.

ROB: In defining a knowledge/digital commons, there seems to be a lot more variety than you might initially assume. There are much broader situations and baseline needs and concerns to address in different ways for different constituencies. Some patterns transfer from one project to another; others fail to do so. A software commons is unique; Data Together might share more with other knowledge repositories.

ERIC: Liz, can you share any examples of knowledge commons from your own work?

LIZ: Well, there’s Wikipedia. But I spend more of my time in a community called Public Lab, which works on not only sharing information, but also thinks about how to produce actionable knowledge together.

MICHELLE: Yeah, from my work at Protocol Labs, it’s often less valuable to define something as commons or not, and more valuable to explore how communities manage resources with externalities.

LIZ: And in our decentralized data archiving project, we wondered about how communities define each other in order to set goals for access to data sharing. It’s reasonable to approach that through Ostrom’s 8 and see where we get with respect to community self-regulation and structures to exclude and include.

The “Ostrom’s 8” that Liz refers to are eight design principles of robust, long-enduring, common-pool resource institutions which Ostrom identifies (Ostrom, 7):

  • Clearly defined boundaries should be in place.
  • Rules in use are well matched to local needs and conditions.
  • Individuals affected by these rules can usually participate in modifying the rules.
  • The right of community members to devise their own rules is respected by external authorities.
  • A system for self-monitoring members’ behavior has been established.
  • A graduated system of sanctions is available.
  • Community members have access to low-cost conflict-resolution mechanisms.
  • Nested enterprises—that is, appropriation, provision, monitoring and sanctioning, conflict resolution, and other governance activities—are organized in a nested structure with multiple layers of activities.

The cost of sharing

One major differentiator between a physical commons and an information-based commons is a change in scarcity.

BRENDAN: Almost all the readings pointed to this near zero cost of copying.

MATT: The “near zero cost of copying” is an accepted idea– this is something I cover in my intro classes. However, after EDGI [and Data Rescue, an effort to massively archive public environmental data for ongoing public access], I am less convinced that the cost of copying is close to zero. It feels like it’s close to zero when you download something. But the infrastructural costs are significant, and focusing on the “zero cost” makes this infrastructure (and its ownership and maintenance) invisible– at our peril.

ROB: Part of the issue is that there are different versions of copying – copies from here to there, which is still relatively cheap, and then there’s copying in the sense of, now I own this. Those are used interchangeably so often.

There is a freedom involved in copying– near-instantaneous transportation of information resources is relatively simple. But there is also a notion that because copying is low-cost, information will naturally spread and therefore exist and be owned in many places. That is not a safe assumption to depend on for ongoing access.

LIZ: I was struck by a particular quote:

“In an era of rapid change, participants will move from operational situations into collective-choice situations– sometimes without self-conscious awareness that they have switched arenas.” (Ostrom, 51)

This reminded me of what happened in the wake of Data Rescue. We’re now aware that we need to control the servers, and that we are managing knowledge that lives on hardware. I look forward to us becoming more conscious of how we form the conditions of our self-governance, and how they apply to community-based data stewardship.

Challenges of the knowledge commons: underuse

While the challenges of a physical commons center on overuse (classically, overgrazing, depletion of natural resources, or lack of upkeep in well-used common areas), the more common challenge for a digital knowledge commons is underuse. This can come in the form of non-contribution (a digital library with too few resources to be useful) or in the form of non-use (a waste of the resources committed to contribution and maintenance).

KEVIN: What about the tragedy of the anticommons? With natural resources, the problem is overuse, but the problem with the knowledge commons is that people underuse resources or don’t add to them. What incentives could we propose so that people would join and contribute?

ROB: Traditionally it’s been fame, power. Liz was talking earlier about what brings people together (survival). Or validation from close peers– this is super sticky.

BRENDAN: In our group here we have a lot of really colorful examples of constructing a commons around environmental data. But what about social networks/media– are they a commons?

MICHELLE: In the social media example, it’s personal control of data that has gotten people mobilized. But it’s so difficult to connect other data commons to regular life. I spent years in the federal government trying to get people to submit metadata for open data. Will making the tech easier be the thing that makes that happen?

MATT: It has to be easier than what already exists. It doesn’t have to be better, it just has to be easier.

Strategies for sustainable knowledge commoning & hope for the commons

MICHELLE: What tools do folks make that people don’t hate to use? Where do we have the ability to change rules around what the community wants and needs?

Few people have the ability to make or change the rules that affect them. This is particularly true in the tech space. How do we change that?

Promoting small networks with more autonomy could slow information flow, but it could be a good thing in that it could promote healthy communities that deliberate in a productive way.

ROB: Picking up on the question of speed: sustainability is a continual process. What does sustainability mean in an environment [such as a digital commons] where the pace of change is extremely high?

MATT: It does seem that fast is not especially sustainable; systems that change quickly don’t tend to stay around for very long. I went back and looked at what the English commons actually were [the commons Hardin is referring to in his original paper on the “Tragedy of the Commons”].

First of all, there’s a whole bunch of different commons. Hardin’s assumptions are really just repeating something from the 1830s, a polemic that was part of the enclosure movement: that human interest in accruing capital automatically leads to the destruction of commons (unless you have enclosure of the commons).

Instead, the commons turn out to have been pretty successful for 1000 years because people didn’t operate as self-interested machines.

They didn’t always work out; there are different kinds of problems, and it’s not a utopia, but so what?

BRENDAN: Humans are irrational. We can’t interpret the models in terms of a rational human. Hardin was responding to this model where humans were “homo economicus”, perfectly self-interested. But Ostrom refutes that quite simply because we already know now, in modern economics, that people do not always behave in self-interested ways.

Particular challenge to digital commons: creating space for broad participation

One recurring condition for successful commons in Ostrom’s work is homogeneity within the community of users. How do we balance that with the reach of web resources and the diversity of skills and actors involved?

MICHELLE: I have a question for the group: do you have any insight into the idea that technology grows so complicated that it requires a minimum level of competence or capability to engage with it?

Will we get to a point where everyone can contribute?

ROB: I’m not optimistic. We always jump to legislative recourse– not just community standards. There’s no clear notion of bigger levers to pull.

How do we express things in a way that we can include the least tech savvy among us without carving out separate repositories where they get oversimplified or incomplete access to the information?

A knowledge deficit is not the same as a skills deficit. Can we teach the minimum skills?

LIZ: “Expert” cultures tend to self-select, self-promote, and stay homogeneous. This is something I’d like to explore more. I was reading about how programming languages are generated from certain cultures, genders, neurotypes– and it doesn’t have to be that way.

MATT: The trouble with “doesn’t have to be that way” is that the world is big, and it’s hard to keep it from organizing in on itself. I would like us to shape provisional signposts on how to organize ourselves. From these readings, it seems like you can engineer the success of a commons, to some extent.

How do we as technologists create systems for good commons? Technical vs social solutions

LIZ: How do we set up community structures?

We could carve out sections of society where this is robust– not just where humans respect each other as peers, but also where all professions respect each other as peers.

This is a hard problem to encapsulate in a code of conduct, but people are working on it.

MICHELLE: At Protocol Labs, we work on protocol-level changes to the way information moves. Can we “automate our best selves”? Is there any way we could build something into the technology?

BRENDAN: Yes! We can be self-critical about how commons form. Can we form a group non-homogeneously? Can we use this as a signpost for goal-setting?

This reminds me of the work of the Python community, which aggressively moved to weed out the technological superiority complex and ejected certain members on the basis that they were not welcoming to those with differing technical means.

We don’t want to go all the way to the other end of the spectrum and connect folks with no common interests, but our opportunity is to put in effort at bringing diverse folks to the table before starting.

Our group includes people interested in commons principles and people interested in technologies– let’s continue to be deliberate about the mix of people we collaborate with as a method for breaking out of our habitual patterns of collaboration.

ROB: I come immediately back to Michelle’s question about how to bring out our best selves at the technology level. We’ve talked before about Twitter– one of Twitter’s biggest problems is that you feel like you have to always have a hot take. If you don’t respond quickly and violently on Twitter, then you’re not doing it right. But there are features elsewhere that ask you “are you sure?”, like Gmail not sending an email for 30 seconds so that you can undo it. Can we do this at more levels / all levels in the technology stack?
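As an editorial aside, the pattern Rob describes (a deliberate delay that gives you a chance to reconsider before an action becomes final) is simple to sketch. The class, method names, and 30-second window below are illustrative assumptions, not any particular product’s implementation:

```python
import threading

class DelayedSender:
    """Minimal sketch of an "undo window": an action is queued and only
    executed if it isn't cancelled within the delay."""

    def __init__(self, delay_seconds=30):
        self.delay_seconds = delay_seconds
        self._pending = {}  # message_id -> Timer

    def send_later(self, message_id, send_fn):
        # Schedule the real send for after the undo window elapses.
        timer = threading.Timer(self.delay_seconds, self._do_send,
                                args=(message_id, send_fn))
        self._pending[message_id] = timer
        timer.start()

    def undo(self, message_id):
        # Cancel the send if the user changes their mind in time.
        timer = self._pending.pop(message_id, None)
        if timer is not None:
            timer.cancel()
            return True
        return False

    def _do_send(self, message_id, send_fn):
        self._pending.pop(message_id, None)
        send_fn()

# Hypothetical usage:
#   sender = DelayedSender(delay_seconds=30)
#   sender.send_later("msg-1", lambda: print("email sent"))
#   sender.undo("msg-1")  # called within 30 seconds, so nothing is sent
```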

MICHELLE: Another way to look at it is that there’s an idea in conversations about decentralization that everyone should be able to be in their own space. But no! We need to be able to govern the decentralized space.

Accountability & trust

ROB: This connects to the Dulong de Rosnay piece: a counterpoint to anonymity in online spaces is reputation. It means you can’t have throwaway accounts; you need a durable identity. You can’t be a bad actor under a different name.

LIZ: I have experimented with sidestepping reputation as a prime value in a commons context. Reputation can have some of the same undesirable impacts (incumbencies, authority, over-representation). Often, we’re moving at the speed of “did you test it for yourself, and did it work?” Empiricism can be slow, but in a knowledge commons, understanding “pre-authenticated knowledge” from another system is just moving it into a new context.

ROB: I agree. We often don’t even have that basic level– in code reviews, it’s very common for people to approve without actually testing it themselves & finding out: well, does it work?

BRENDAN: It’s true, and I’m guilty. How do we confirm, at a technical level? But a technical system can introduce challenges. We’re now talking about whether you can make a system that will come in and interrupt you. We don’t want to build it as technology vs. people– we want something that resembles participation but doesn’t end up as coercion. “Did you do the reading?” As a person I can just ask you, and you’ll say no.

SASHA: In this context, we can think of continuous integration testing as a trust layer (a base agreement). That’s a case where a robot can ensure that something was tested to the level of: well, does it work?
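Sasha’s point lends itself to a small sketch: a check that actually runs the test suite and blocks a change unless it passes, so the answer to “well, does it work?” comes from a machine rather than a reviewer’s goodwill. The test command below is an assumption; substitute whatever your project uses:

```python
import subprocess
import sys

def tests_pass(command=("pytest", "-q")):
    """Run the project's test suite and report whether it passed.
    The command here is an assumption; swap in your own test runner."""
    result = subprocess.run(command)
    return result.returncode == 0

if __name__ == "__main__":
    # In a CI setting, a nonzero exit code blocks the merge: the base
    # agreement ("it was tested, and it works") is enforced automatically.
    sys.exit(0 if tests_pass() else 1)
```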

Inclusively capturing reputation in digital spaces

MICHELLE: I’ve been wondering about ways to kindly connect real-world stuff to digital stuff / our online lives. We (as a community) tried to build a reputation system connected to GitHub, but didn’t have ways to include business functions that aren’t captured in GitHub. Let’s expand that to a broader context: how do you represent carbon sequestration on a blockchain, and how do you prove it? Has anyone seen an example of something that does this well?

BRENDAN: I don’t think there is one. Studying photography brought this to me– you can’t take a picture of the future. You can barely capture the present. Anything that exists in the digital world is just a partial, twisted version of the real world.

ROB: Brendan and I had a conversation a few weeks ago about where trust is involved in a system. Blockchain seems predicated on the idea that we can prove trust, but then you have to choose what you trust. I’ve heard it said, “it’s attesting, but it’s not proving”– that’s an important concept here when moving between realms of digital and analog. These are important seams in the systems, where we should all be paying the most attention.

BRENDAN: “Trust comes from people, not cryptography.” That’s my anti-tragedy of the commons line. I don’t think there’s a causal connection from physical to digital, but there is going from digital to physical. When someone has shown who they are online, I can then trust them in the real world.

Living in trust

MATT: My question isn’t quite fully formed. I was thinking about the empiricism experiment that Liz performed. I don’t quite understand it, because it seems like it tries to run without a trust layer, which is counter to the way I want to live.

At home, I keep our bikes in the bike shed with no lock on the door. Every once in a while a bike gets stolen. But it’s worth it to me, because I get to live in a world where I believe that it’s okay to leave my shed unlocked.

The commons works because people don’t work as self-interested machines– because people operate as members of community.

KEVIN: Ostrom and Hess talked about how building a knowledge commons requires “great amounts of energy and time from individuals or small groups”, and I feel that’s who we are now.

They define a homogeneous group as “people working towards a common goal”. I think us, wanting to change the world for the better, working together towards that, is what makes us “homogeneous”.

I’m always so grateful and honored to be with this group. And even though we obviously don’t have all the answers, we have some great questions. That’s a great place to start!


Readings


Data Together is a community of people building a better future for data. We engage in a monthly Reading Group on themes relevant to information and ethics. Our conversations take place in a group of about a dozen people whose backgrounds include building decentralized web protocols and tools, archiving environmental data, academic research of ethical frameworks, community creation for citizen science, and more, with a lot of overlap.

This reading group is something your own collective can do too! We encourage you to draw on our notes document template, which includes the specific themes and discussion questions we prepare for each month’s readings (the readings are listed at the end of the post). We hope that you’ll link your own discussion notes back to our posts, with any top-level points you’d like to share with everyone else pursuing the month’s theme.

This blog post is derived from the conversation, but is not a replica of it; we rearrange and paraphrase throughout. See the recorded call for the full discussion!