Discussion: Content Moderation and Consent (April 2020)

Jan 12, 2021

Kelsey Breseman, Data Together

This topic covers factors that impact the content we see. How do platforms balance freedom of expression against consent to avoid offensive content, navigate algorithmic versus human moderation and curation, and incentivize different types of interaction? What are the downstream effects of these choices?

Readings

  • Bijan Stephen for The Verge: “Something Awful’s Founder Thinks YouTube Sucks at Moderation” (2019) – strategies for moderation from Something Awful’s founder
  • Casey Newton for The Verge: “Bodies in Seats” (2019) – about outsourced content moderation & its impacts on the humans who have to view & judge the content. Would recommend:
    • From the beginning to “But for the first time, three former moderators for Facebook in North America agreed to break their nondisclosure agreements and discuss working conditions at the site on the record.”
    • NOT RECOMMENDED: the middle of the article for this group—it’s good reporting but needs a content warning for graphic descriptions (and the intro gets the points across)
  • Jussi Pasanen: “Human centred design considered harmful” – how good design principles applied to business sense can be harmful for humans, especially in the context of a livable planet
  • Choose one of the following (their lengths vary):
    • Clint Pumphrey for HowStuffWorks: “How Do Advertisers Show Me Custom Ads?” (2012) – cookies and retargeting; note the tone
    • Cade Metz for the New York Times: “How Facebook’s Ad System Works” (2017) – targeting factors that Facebook uses for ads, the insertion of ads into a content stream, some treatment of the Russia issue
    • Cole Nemeth for Sprout Social: “How the Twitter Algorithm Works in 2020” (2020) before “How to turn off the Twitter algorithm” – a really short one just highlighting the factors involved
    • Will Oremus for Slate: “Twitter’s New Order” (2017) – much more in depth (not just how but why and future directions) but pretty long
    • Josh Constine for TechCrunch: “How Facebook News Feed Works” (2016) before “An Updated List Of News Feed Algorithm Changes”

Optional bonus readings

  • Naomi Wu’s experience with media manipulation & being “content moderated” off of several funding platforms she had used to make a living: part 1 and part 2 – this is very interesting and topical but too long to include in the required reading
  • Flame Warriors, if you’d like a lighter take, is a tongue-in-cheek characterization of the various types of people moderators encounter
  • The end of the Casey Newton “Bodies in Seats” article, from “Last week, I visited the Tampa site with a photographer.” to the end – an interesting additional perspective on trying to figure out how this kind of moderation should be done

Top-down view into metal pins lit in red/purple

Photo by Paul Esch-Laurent on Unsplash


Experiences of community solely through the internet

KELSEY: I want to open by asking if everybody could share one story of a sense of community you experienced through a purely digital platform.

I can start with my story. I spent a summer working at a website called Instructables. It’s a tutorial site where people share how to make different things. It’s an explicitly inclusive and friendly community. One of the things that they’re very careful to have in their makerspace/lab area is a test kitchen. The founder was talking to me about this: people who make robots, they know they’re makers. But making in a kitchen is also making; making with a sewing machine is also making. I admire that their community goes out of their way to invite all of those kinds of making in. I also really like their comment policy, which is literally “be nice”. As a user, you can flag a comment not just as spam or inappropriate, but also as “not nice” to have it reviewed by a moderator.

The summer I spent there, making projects, was the first time I ever felt like I had friends who I met only through their online presence. You get windows into the lives of the most prolific makers because they post step-by-step photo instructions of different stuff that they’re making, and I was one of them.

KEVIN: I can share next. EDGI is the first time I’ve had community through a digital platform. EDGI is completely distributed. For a long time, I only interacted with EDGI people through Zoom, Slack, or GitHub. When you have consistent interactions through all these mediums, something amazing happens: a year later, when I was able to attend an in-person meeting, it didn’t feel like a first meeting at all. It felt like we had known each other for so long already, but we were like, wait, I’ve never actually given you a hug before! EDGI is still going strong, and continues to be my community.

LOURDES: For me it is the Midwest 90s Emo Facebook group that I am part of. A few of us, the main posters in the group, have branched off and created a smaller private group, and we’ve become really good friends. Lately we’ve been having Zoom calls with like 20 of us. I’ve met a few people in different cities now. It’s cool knowing I have friends across the country.

DAWN: I was really into modding the Elder Scrolls game Morrowind. I was an active member of a forum called Tamriel Rebuilt, for people who wanted to rebuild a whole continent within the game, a quixotic project that was maybe not possible within the constraints of the game engine at the time.

I had been active on IRC and in forums before, but that was one of the first places where I really got to know other people, at a time when there weren’t a lot of these remote collaboration tools. I felt like I knew these people well, because we posted in forums together, and you got to know who they were from that.

GREG: During Hurricane Irma, I was watching the storm as it emerged and headed right at us. I saw a whole bunch of people talking about how the storm was coming, but in different forums, and so I set up a Slack for people who wanted to prepare. Within a couple of days there were almost 1000 people in it, and they were doing all kinds of things.

I felt responsible for this space that I’d set up, and so I was working 18 hour days just to manage the things that all these people were doing. But a lot of the ideas were actually not very good ideas. They were all really well intentioned, but a lot of times, people just came up with something and then said, let’s get to work. It was just like, Who did you talk to? Who said that was a good idea?

I had a really challenging experience trying to organize this community on Slack. As fun as it was while it was coming together, I had very few tools at my disposal for sorting through noise and for evaluating the quality of the different things that were going on. It was only through connecting with other organizers experienced in this kind of community curation and network organizing that we got some measure of control, by doing things like requiring every channel to have a document that says what’s happening in that channel. We had to figure out how to make it work.

I walked away wanting to help others learn from our mistakes so that they can make more interesting mistakes.

The work of maintainership

KELSEY: I really thought it was interesting, Greg, that you brought up the word responsibility so quickly. When I picked articles, I focused on digital spaces and power, control, responsibility, and what we owe to each other as participants.

Do you still feel responsible to your community?

GREG: Absolutely. At the time, I felt responsible for the network that I’d convened, but also responsible for ensuring that the network was accountable to members of the community. So I was spending a lot of my time going out and finding people who would never join our Slack, talking to them, and then bringing that perspective back into the platform.

More recently, with Liz Barry, I facilitated a set of conversations discussing the phenomenon of emergent, spontaneous, civic hacking disaster responders. I just published a piece yesterday, Introducing the Principles of Equitable Disaster Response, that shares principles that we articulated for network-centric crisis response. It’s helpful to have something on hand to say, this is why we’re going to go to the extra effort to hear from somebody before we put this out into the world.

There wasn’t really a framework for how to locate this digital virtual community in its real-world context, and I felt responsible for bridging that gap.

DAWN: There’s also a lot of “moderation” work that never gets counted as moderation work but is actually just maintaining social ties. What has to be formally understood as moderation work?

The human cost of content moderation

LOURDES: Has anyone heard the podcast Last Podcast on the Left? It’s a comedy podcast about murders and paranormal stuff and conspiracy theories. I got really into it. I joined the group on Facebook and started posting a lot, and all of a sudden, I was a moderator.

It was a whole world; the mods had our own group, and a chat. The mods purposely chose people with different perspectives, so in the chat there were these far-right dudes who were really into guns, and me, a far-left lady of color; I was totally on them all the time. But we had this sense of camaraderie, too, and we even had a group name for ourselves. It was a couple of hours of work a day, and thousands of people were in this group that we modded.

The reading about the Facebook moderators, that’s a whole other level. But some of the stuff people posted just made me question, what is up with humanity? And why are we, unpaid group mods, doing this?

DAWN: That was a lot of labor, Lourdes. I know. It takes a lot of time and a lot of emotional energy.

I was thinking also about the Facebook article, and even the Something Awful one. In the back of my mind was that really great article by James Bridle about auto-generated content targeting children on YouTube, which is deeply creepy and disturbing. And Lisa Nakamura wrote an article thinking this through, which said: we see these postings of all of this hateful, misogynist, racist, and deeply disturbing graphic content as aberrations. But what if we took them seriously as an expected outcome? Let’s not treat them as an unintended consequence of what these platforms do, but read them as one of their products. That should help us think about how these spaces are built and maintained.

I don’t actually know if I think that any of these models are sustainable. I think what Facebook and YouTube and some of these big platforms are doing is basically throwing as much cheap labor and human dignity at the problem as they can, hoping that, at the cost of all of these people’s emotional wellbeing, they’ll get good enough at automating from what they learn to reach a point where no one has to look at this content. I just don’t think that’s going to work if you build systems that magnify or create it, where virality is encouraged.

And so it’s not just that one video gets posted once; it gets posted hundreds of thousands of times as videos mix and flow through multiple channels. I felt like that was a piece that wasn’t explicitly addressed, but was brought up in those articles.

I like how Lisa Nakamura talks about it as a way to think about the design intentions.

KELSEY: In junior high, everybody would share those videos, those GIFs that would just go on being peaceful and uneventful for a long time until suddenly a scary face popped out. There was something so delightful about it to every 12- and 13-year-old that it would go viral, even though it was awful.

Whenever my three-year-old niece got access to a phone for long enough, you’d look over and she’d be in that part of YouTube.

Something about how disturbing or unusual that content is: there’s got to be something biological about how attractive it is to somebody very young.

I wonder if that comes back to the question of responsibility to moderate. The existence of Something Awful is interesting by itself, right? Or 4chan; there is an audience that’s explicitly looking for the sorts of things that I, as an adult, would like to be moderated out of my feed.

Bad content for profit

LOURDES: When I was in middle school, the internet felt completely unregulated. You’d find really, really messed up stuff. I’m thinking of one website, this was in 2001, 2002, that had graphic images from the war in Iraq and a lot of really messed up stuff that wasn’t moderated.

I was a 12 year old stumbling upon it, and it was pretty easy to get to. So this isn’t a new issue. What’s new is the capitalism that perpetuates it and is perpetuated by it. Now, people make money off of this content. And between these companies contracting with Facebook and exploiting their workers, there’s a whole industry made from this messed up content.

DAWN: At the end of the article about Cognizant [Bodies in Seats], there was a line about how it’s a system where the people who work there thought, this is my first step into being a knowledge worker working at a tech company. But it’s really Facebook controlling costs and risk by outsourcing the messy work of finding and training human beings, and laying them all off when the contract ends.

In one way, I agree, it’s nothing new. But it does feel like it’s such an acceleration of existing patterns. That model of outsourcing risk is an old pattern. But when combined with the scale that some of these platforms are operating at, and the fine tuning that’s happening on them for certain behaviors or types of attention—that whole article on Twitter’s introduction of their new algorithm—they’re all compounding into this acute state.

LOURDES: There’s a really good book called Words That Wound, about hate speech, especially in a university setting. It’s a response to the libertarian idea that all speech should be free. It argues that words can be violent, that they have a physical impact, and that, especially when it’s a racist or homophobic slur or diatribe, they can actually affect someone’s livelihood.

Social media as a civic forum

KELSEY: I’m curious how platforms play into this discussion of violence, public forums, and freedom of speech. Facebook and Twitter feel to a lot of people like they’re supposed to be public forums, even though they’re very literally not. Do you think that if Facebook and Twitter didn’t appear to be such big civic forums, we would request one from the government, a digital space to discuss things?

DAWN: There are various governments that have attempted to create those sorts of platforms, but they tend to function as a way to capture consent, or to contain resistance to ideas.

Canada is a very consultation-heavy country. We get consulted a lot through consultation platforms. Other countries have models like that too, like Decidim in Barcelona, at a different scale. Greece had a way of doing these online consultations as well, but not necessarily in the model of a public forum that Twitter and Facebook now operate at.

But in their original intent, that’s not what Twitter or Facebook were trying to be.

One of the articles traces that shift in Twitter. The history of Facebook is also a history of shifting; the way Zuckerberg reformulated the company’s mission multiple times reflects his changing thinking about its goal.

I don’t know whether people would demand it. Probably not; it’s a different model of how identities get performed or articulated at the national level, and that national identity can be a fiction anyway. But I think there have been very powerful regional platforms formed from the bottom up, by people wanting that kind of space.

There are also people who want to nationalize some of these larger platforms, or break them apart, and activist stakeholders who basically want to buy out Twitter and make it a platform co-op. I think there are some interesting ideas circulating around converting those spaces.

GREG: In its messaging on this stuff, Facebook conflates the concept of freedom of speech in multiple directions. First of all, freedom of speech just means that the government can’t arrest you for what you say. The First Amendment does not guarantee anyone the right to post anything to Facebook; Facebook can choose whatever policies it wants. So for them to invoke “free speech” is not really coherent. And then they hide behind “free speech” to defend their position on allowing deceptive political advertisements.

I have some friends who’ve worked at Facebook for a very long time, since the mid-aughts. I’ve argued with them about this stuff for a long time. Those arguments have become really touchy over the last five years, but it’s worth at least understanding where they’re coming from. From their perspective: A) yes, humans are flawed, and so bad things are going to happen on Facebook, but B), the marketplace of ideas is inherently good. So although there are bad things that happen, on net, people being able to say whatever they want is good. It’s all going to work out in the end, and, well, who gets to decide what’s true?

From my perspective, “who gets to decide what’s true?” is a really interesting question, and it would be worth actually asking it. Let’s have that conversation, as opposed to presenting it as a question too hard to answer, and therefore the end of the conversation.

I don’t necessarily have ready answers as to how they should decide what constitutes truth. It is hard. But I don’t think it’s as irreconcilable as they make it out to be. I think their perspective is bad, and I would like to see other opportunities to demonstrate that there really is a coherent alternative to the civil libertarian “there’s nothing we can do” approach.

Do y’all have experience with decentralized alternatives like Mastodon? Because I tried. It wasn’t so much fun.

LOURDES: I did try, yeah!

KELSEY: Greg, what was not fun for you?

GREG: It wasn’t actually clear how to do it. After half an hour, I managed to join a node, but I couldn’t figure out how to see people on other nodes or how to communicate with them.

I believe deeply in the concept of federation, and decentralized networks and so on, but I couldn’t figure it out!

It seemed promising if you brought a community there and made it work for that specific community, but it was hard for me to imagine it catching hold more broadly.

DAWN: I think there have been two waves of moves to Mastodon, and they opened up some questions around it.

Gab switched to using it last July, so the alt-right/far right had a space; everyone who got deplatformed from Twitter went there to build a space where they could be white supremacists. Gab had a different underlying tech stack, and they switched to running Mastodon servers, but with more selective rules on how to federate them. That spawned a whole crisis, or maybe just a question, within the Mastodon community: do we want to federate with these servers?

They came as a community to that space. And so I think that became rather self-contained and functional fairly quickly; it was an in-group that wanted in-group connections.

There was another one more recently, at the beginning of this year. There was a lot of censorship going on in India, and a bunch of journalists collectively made a move to Mastodon at the same time, trying to think about censorship resistance and what federated and decentralized alternatives offer.

I think that opens up some of these questions around how moderation and content work. In these federated models, there isn’t necessarily a universal view. On Twitter, ostensibly I can access all the content as long as I’m logged into the platform, unless someone has blocked me. But that is something you can play with in something like Mastodon. You can choose how porous the boundary in and out of your community is.

This issue has come up as well with Scuttlebutt [SSB]. There were some pubs where people were blocking certain content, but because of how content is syndicated, people were worried that they were still syndicating content (without seeing it) from pubs that were anarcho-capitalist/libertarian veering into more right-wing territory. There are a lot of interesting questions around moderation that these federated and peer-to-peer models open up.

KELSEY: I was trying to think of other semi-decentralized platforms for creating community, and I feel like that’s been Slack for a lot of people. I’m in a space called We All JS, which is an explicitly identity-inclusive Slack community for people who write JavaScript. They have a bunch of special moderation bots and rules where if you say “you guys”, for example, Slackbot will respond and correct your language to something more gender neutral. Slack explicitly enabled that kind of custom interaction.
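(As a concrete illustration of the kind of keyword-triggered nudge Kelsey describes, here is a minimal sketch assuming Slack’s Bolt for Python framework. The bot name, reply wording, and environment variables are illustrative assumptions; We All JS’s actual setup may instead use Slackbot’s built-in custom responses.)

```python
import os
import re

from slack_bolt import App  # Slack's Bolt framework for Python

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

# Match "you guys" anywhere in a message, case-insensitively.
@app.message(re.compile(r"\byou guys\b", re.IGNORECASE))
def suggest_inclusive_language(message, say):
    # Reply in a thread so the nudge stays low-key rather than calling
    # the author out in front of the whole channel.
    say(
        text="Friendly nudge: consider a gender-neutral alternative "
             "like “you all”, “folks”, or “y’all”.",
        thread_ts=message["ts"],
    )

if __name__ == "__main__":
    app.start(port=3000)  # listen for Slack events over HTTP
```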

I wonder if that’s actually more decentralized than Mastodon because you don’t have to be a deeply committed decentralized technologist to use it.

I’m curious what platforms you all are on with your families or communities, and why.

How platforms shape content

DAWN: I collaborate with a lot of people who are very value-driven in how they work with technology. These questions about how far down our own stack to go come up a lot. In one organization, our main space is Matrix; I use Riot [now called Element], a Matrix chat client. It’s always a matter of walking the tradeoff between proprietary and open, decentralized alternatives. I have a lot of Signal groups, but also WhatsApp ones. And I’m just not on some platforms, like Facebook.

With all the mutual aid going on right now, it’s been on my mind a lot more. I started a pod in my building, and we’re using WhatsApp, because you have to go where the people are.

I think a lot of that model is one of harm reduction, not Puritanism. If you say, we’ll only talk if you first install x new tools and air gap your machine to generate your key, you’re going to lose a lot of people. You have to say, okay, let’s get working, but let’s start thinking actively about what risks we face as individuals, or as a small group or community, and how to reduce those risks. I think that’s an active tension; it’s not something to be resolved. It’s something to negotiate.

KELSEY: I had some interesting conversations with my mom around what gets communicated over different spaces. She was talking about how on Facebook, there’s a lot of stuff that she just doesn’t bother posting anymore: once you get past a certain number of friends, you don’t want to post anything very real or raw.

I’ve been thinking about how we self-moderate on these platforms and how that interplays with the moderation that’s built into the platform, or how these platforms nudge us to behave in certain ways towards each other.

DAWN: My Instagram is just friends, so I can let loose. Facebook is everyone in my life, so I keep it pretty restrained unless I’m in my private groups. We mold different platforms to different aspects of how we want to present ourselves to the outside world, to our audience. There’s all this performativity in internet life.

KELSEY: Can we flip that around? Are there ways that we could take power over these spaces?

DAWN: I think revisiting what moderation means is quite interesting. In SSB, you generate a keypair, and it’s tied to a device. So people have multiple identities on SSB, one per device: for me, it’d be something like dcwalk-laptop and dcwalk-mobile. When one person can have many identities on one platform, based on context, that really opens up some of these questions of moderation.
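(For readers unfamiliar with SSB: an identity there is just an Ed25519 keypair generated on the device, with the feed ID derived from the public key. Here is a minimal sketch assuming the PyNaCl library; the function name is illustrative, not part of any actual SSB client.)

```python
import base64

from nacl.signing import SigningKey  # PyNaCl's Ed25519 implementation

def new_ssb_identity():
    """Generate a device-local, SSB-style identity.

    There is no central registry: each device creates its own keypair,
    and the feed ID is simply the base64-encoded public key.
    """
    key = SigningKey.generate()
    public_b64 = base64.b64encode(key.verify_key.encode()).decode("ascii")
    feed_id = f"@{public_b64}.ed25519"  # SSB feed identifier format
    return key, feed_id

# A laptop and a phone each get an independent identity for the same person.
laptop_key, laptop_id = new_ssb_identity()
mobile_key, mobile_id = new_ssb_identity()
print(laptop_id, mobile_id)
```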

Secure Scuttlebutt has a really lovely set of community-defined principles. They talk about “near moderation” and “interdependent abundance”. I think they’re trying to open up some of these topics from a very different angle; the different lens comes from different architectural choices, but it can speak back in cool ways. Even just visibility can be a mechanism for taking power back. When people are doing pod-style or neighborhood-level collaboration, they have flags that don’t surface on a single platform.

The model that Facebook is perpetuating is this universal space. Scholars have very convincingly critiqued designing for a universal; that model doesn’t work. Ideas like federation probably better represent a more pluralistic approach to connecting people all over the world.

Understanding federation

LOURDES: So what does federation mean?

GREG: Federation, as I understand it, means you have a lot of different semi-autonomous systems that are a part of one system.

One distinction that I think is really important is the difference between federation and confederation. A confederation is more like allies who aren’t necessarily formally bound to each other, whereas a federation is a set of members bound by a specific set of rules. The United States is a federation of states. The states have their own governing scopes and processes, but the federal government sets some standards and does some things on their behalf, collectively.

When we talk about federated networks, it’s not always clear to me whether there is some sort of central organizing entity, or whether a federated network just means that there are different nodes that can communicate with each other using common protocols but are not actually bound through some sort of central coordinating entity.

KELSEY: That’s a really valuable point. In a decentralized technology context, it’s not clear whether “federated” is meant in the sense of having some powers delegated to a unifying authority. I don’t hear that discussed much.

GREG: I’m really interested in how Americans seem to have lost the capacity to understand the concept of federation. The whole American idea is that you have local, state, and federal, and we don’t really understand that culturally anymore.

LOURDES: Do you think it has to do with how we see subjectivity and shared values? In my mind, a federation would be held together by shared values, but in the US, we’re so into subjectivity and the idea that there’s no universal morality.

The importance of boundaries

GREG: It’s really interesting that you home in on values there. I study the commons, governing the commons, and the whole field of common pool resource management that’s largely associated with Elinor Ostrom and the Bloomington school. Traditionally, these fields study watersheds, fisheries, and other natural resource systems, but in the last ten years a subfield has emerged that focuses on knowledge commons and digital resources as commons.

From that lens, the open source field has gotten a lot of things wrong. As an organizer, I was totally sold on the “here comes everybody”, Yochai Benkler, The Wealth of Networks approach. But once I started doing these projects, I realized it doesn’t work, and I was trying to figure out why. That’s when I started reading Ostrom’s work.

Elinor Ostrom laid out a set of principles for the design of institutions that can sustainably share resources, and principle number one is to set boundaries. This was really tricky for open source and open knowledge communities to wrap their heads around. When I talked about Ostrom’s work with real anarchist hacker types, they were immediately turned off by the notion of boundaries. Their whole idea is that a network is more valuable the more people are in it, so why would you set up boundaries?

I think the key is to understand that the values set the boundaries. The values help a community determine what’s in, and therefore what must be out: even though this is an open network, if you’re not in line with these values, then you’re out, and we have mechanisms to remove you.

I worked on a project called SustainOSS, through which we developed a set of principles applying some of the Commons principles to open source software.

KELSEY: I’m resonating a lot with that. I’m a somewhat lapsed maintainer of an open source project called Tessel. We had this very embracing approach to maintainership. Anyone could show up and say, teach me how to use GitHub, teach me code and hardware, so that I can be a contributor. Our approach was a radical yes to that. I would definitely not recommend that to your average open source project, but it suited our project’s values. It also helped open up the definition of “open source” for me.

Different open source projects mean different things by “open source”, and that’s okay. It has to be okay to say that it is not the maintainer’s job to, A) make your feature, B) accept your pull request, or C) listen to you talk about what you do and don’t like about the thing they gave you for free. Open source can just mean I let you see it; open source can mean I let you suggest things but not contribute; it can mean I let you contribute; it can even mean I help you contribute. But that definition is up to the maintainer.

That understanding is frequently not reached. I think part of that has to do with open source still being kind of a new field. But part of it also has to do with people coming in with too much idealism. And I say that with a cringe in my body. But boundaries matter a lot. If you can’t make any, you don’t have anything meaningful inside of them.

LOURDES: I know some of you have read You Are Not a Gadget, by Jaron Lanier. He outlines the idea that the internet was a sort of anarchist vision that then became co-opted by capitalism and by our ideas of what a computer interface should look like. The fact that we store our information in a file folder, for example, is the metaphor chosen by one small group of people in deciding what a computer interface should look like.

I think having no boundaries or set of principles going in allows other principles to take over. I think that’s how a lot of the technology we’ve built has become co-opted by systems of surveillance, advertising, and capital.

DAWN: I think that the way that the internet unfolded speaks to having an intentional flexibility that got recast in perhaps unintended ways. I do think that there was an under-definition of certain values that allowed them to be empty buckets that could be filled with a lot of discordant things.

That’s a problem that I’m interested in. These projects that are very value-driven still use these flexible terms like “decentralization”. But it’s how you pair that with other values that actually gives trajectory to the types of social change that you’re pursuing.

Harmful design

KELSEY: The one article we haven’t really touched at all is “Human centred design considered harmful”, by Jussi Pasanen. I’m seeing the broader pattern of things that begin with really genuine intentions turning into these crazy things: when people get involved, it gets messy. When money gets involved, it gets warped in this weird way.

GREG: I find the user-centered design discourse maddening. It can be useful as a tactic, but it involves this process of simplification. I understand the objective of that process, but when you’re dealing with things like infrastructure and complex systems, that simplification is a process of erasure, and it’s eventually going to come back to bite you in the ass. You’re probably going to fail, or if you succeed, you’re going to end up becoming a part of somebody else’s problem.

DAWN: There have been some really good critiques of user-centered design. I think a lot of design scholars would point to an operationalizing tendency, which turns the focus to a product and then treats an individual’s engagement with that product as the site to design for. Don Norman, who wrote The Design of Everyday Things, is a seminal figure in this type of design thinking. He took a theory from psychology, affordances, which was about a relational approach, and narrowed it down to the way an individual perceives affordances in an object.

There are a lot of really cool ways in which people are trying to challenge that tendency. Think of, say, Sasha Costanza-Chock’s work, Design Justice, and that very participatory approach.

LOURDES: I just received Design Justice, and I think their argument is right on point. Design really needs to happen from the bottom up instead of from the top down, through capital and surveillance.

Pasanen’s argument about anthropocentrism also resonated with me: human-centered design is anthropocentric and ignores everything else in the world. It reminded me of a critique of the whole mainstream environmental movement as anthropocentric, the approach that says we need to take care of the wild and of the wilderness because we need to survive.

An environmental justice framing is not anthropocentric; it’s more society-centric. I think what weaves these together is this idea of a collective: moving towards collective-oriented design and thought in decisions, as opposed to individual and alienated decisions that will destroy the world.

DAWN: There’s a lot of people challenging these framings, but I don’t think there are reproducible ways through yet. I don’t know if we’ve escaped the gravity well that is getting trapped in designing a specific product. And when you have to design a thing, you just fall back on those patterns because you need to move forward.

KELSEY: That’s definitely something we’ve talked about before in these discussions, that sense of a need for momentum. We need it to stay motivated, but making progress for its own sake can be the most harmful thing we do. On the other hand, I get really frustrated. We can say that it’s good theory all we want, but how do you broadly deploy decentralization or bottom-up power without starting at the top?

DAWN: Trans-local, non-hierarchical distribution. That’s it. That’s the way. They don’t scale, because scale is part of the problem.


Data Together is a community of people imagining a better future for data. We engage in a monthly Reading Group on themes relevant to information and ethics. Participants’ backgrounds range from decentralized web protocols and data archiving to ethical frameworks and citizen science.

This reading group is something your own collective can do too! We encourage you to draw on our notes for this month’s topic. Our notes list readings, call out themes, and suggest discussion questions.

This blog post is derived from our conversation, but is not a replica of it; we rearrange and paraphrase throughout. You can view the recorded call here.