Facebook, The Rodents, and The Common Knowledge Machine

The dark magicians at Facebook have cast a hex on the rationalist community.

Their hex is not the only hex, and it is not necessarily the hex of most consequence. It is, however, one of the more obvious, legible, and objectively harmful instances of witchcraft. This post is about how several people put up actual money to try to counteract the forces of evil and failed to make a dent. We will look at the problem they tried to solve, what was attempted, why it didn't work, and what the persistent nature of this problem says about the rationalist community.

The Problem

Facebook is in the uncomfortable position of selling a product that makes its users worse off. A series of increasingly rigorous studies has shown fairly conclusively that Facebook is bad for you. And that's just on traditional mental health lines. Facebook and other social media are also implicated in the hell that is our current political landscape. Between filter bubbles, psychological warfare against users, dark patterns, and the opportunity for third parties to turn this apparatus of control toward political outcomes, what we have here is probably the #1 mindkiller in America. These things are more than just media hype. I've bought political ads on Facebook myself, and the taste of targeting magic I got with even a small spend leaves me with no doubt that deep pockets can buy incredible powers.

But this post isn't really about Facebook, it's about rationalism. All of this is background to the fact that rationalists, especially ones who live in the Bay and Seattle Areas, use Facebook as their primary communication tool. This is a problem because, zooming out to "raising the sanity waterline", a core rationalist mission, hosting your communication on Facebook is a bit like the local health club holding its meetings at McDonald's.

Then there are the utilitarian concerns. Facebook is not actually a reliable messaging service for the purpose rationalists put it to: a dependable forum for exchanging messages with the expectation that other people will see them. Zvi has already broken the problem down wonderfully, but the short version is that Facebook maximizes engagement by making it artificially difficult to know when you should stop engaging with the forum. That means obfuscating conversational flow and enforcing selective attention on which messages you see.

All of this is to say that Facebook being the stable equilibrium for local rationalist communities to discuss things on the Internet is a real problem. Plenty of people have written about such problems before; Scott Alexander's Meditations on Moloch and Eliezer Yudkowsky's Inadequate Equilibria are two particularly notable examples. The real subject of this post is an attempt to implement a solution advocated by both authors: why the attempt failed, how it could be done better next time, and why the failure should concern us.

Actuator: An Attempted Solution

Enter Actuator. Actuator is an attempt to implement a concept described by both Scott Alexander and Eliezer Yudkowsky as a potential solution to persistent coordination problems. The idea is fairly simple: you post a notice that you want a certain thing to happen, and other people can sign up with their email to indicate they want it too. Once signups reach a certain threshold, the service sends out a mass mail letting everyone know they've hit enough people who agree, with details on how to organize. It's clever, already shown to work by sites like IndieGoGo and Kickstarter, and fairly simple to implement.
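To make the mechanism concrete, here is a minimal sketch of the threshold-trigger logic in Python. This is not Actuator's actual code; the names (Proposal, sign_up, notify_all) are mine, and print stands in for the mass mail a real service would send:

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    """A notice that someone wants a certain thing to happen."""
    title: str
    threshold: int                      # signups needed before anyone is mailed
    signups: list = field(default_factory=list)
    triggered: bool = False

    def sign_up(self, email: str) -> None:
        """Record a signup; fire the mass mail once the threshold is hit."""
        self.signups.append(email)
        if not self.triggered and len(self.signups) >= self.threshold:
            self.triggered = True
            self.notify_all()

    def notify_all(self) -> None:
        # Stand-in for the mass mail; a real service would send email here.
        for email in self.signups:
            print(f"To {email}: '{self.title}' reached {self.threshold} signups.")

# Usage: nobody hears anything until the threshold is crossed.
exodus = Proposal("Coordinated Facebook Exodus", threshold=3)
for addr in ["a@example.com", "b@example.com", "c@example.com"]:
    exodus.sign_up(addr)
```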

Obviously, if this existed it could be used to get people to commit to switching away from Facebook. Discord user Diffractor#2490 felt the idea was worth a try and put up some of his own money to start a pool that would pay a bounty to anyone who could build the software. The pool ended up at $200. The plan, roughly, was to fund the bounty, get the software built, and then use it to coordinate the move off Facebook.

What actually happened: an anonymous person wrote the software, collected the money, and showed it to some people, but nothing ever came of it.

Why It Failed

So why didn't it work? Following some leads on Tumblr, I tracked down the author of the software to ask his opinion, and I ended up with the following picture.

Bad Incentives

The author admits that his motivation for writing it was the bounty. Once he had developed the raw features, without any particular polish, he showed up to collect the money. With cash in hand and no more money forthcoming, other concerns distracted him and he stopped working on it. Judging by the GitLab repository, only one person has worked on it seriously since.

The lesson here is that if you want a bounty to achieve a particular outcome, like "a website that works, looks good, and has people using it", you should split the bounty into milestones. Imagine if, instead of raising $200 and handing it all to the first person to produce raw features, they'd raised $500 and split it up: the first $200 to whoever builds the functionality with reasonable tools (you could probably even specify particular tools) and code quality; the next $200, awarded by contest, to whoever makes a style for it that actually looks good; and the last $100 to whoever writes decent documentation, setup instructions and the like. This could all be the same person, but it doesn't have to be. Once it's done, you have a good chance people might actually use it. With more money you could also pay a bounty to the first instance that keeps n active users for x weeks, or whatever other scheme. The point is that the primary failure mode here was failing to consider the full chain of events needed to bring the concept to fruition, and to build that chain into the bounty payouts.
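As a concrete illustration, here is what encoding such a milestone schedule might look like. Everything here is hypothetical: the stage descriptions, amounts, and function names are mine, not anything from the actual bounty:

```python
# Hypothetical milestone schedule for a $500 pool; amounts mirror the
# split suggested above. Nothing here reflects the actual Actuator bounty.
milestones = [
    ("Core features built with reasonable tools and code quality", 200),
    ("A style that actually looks good (contest winner takes it)", 200),
    ("Decent documentation: setup instructions and the like", 100),
]

paid = {}

def pay_out(stage: str, claimant: str, verified: bool) -> None:
    """Release a stage's payout only after the work is verified."""
    amount = dict(milestones)[stage]
    if not verified:
        raise ValueError(f"'{stage}' not verified; the ${amount} stays in the pool.")
    paid[stage] = (claimant, amount)

# Sanity check: the stages sum to the whole pool.
assert sum(amount for _, amount in milestones) == 500
```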

Many People Enjoy Using Facebook

In spite of what I wrote above, the author feels there's a deeper problem with the whole idea: nobody wants it. He suspects it's a product that sounds cool but that no one actually wants to use. I disagree, but I think a related problem helped sink the Facebook idea: plenty of people absolutely endorse their use of Facebook. Heck, some of the people involved here work there. Not enough effort was put into determining whether this was a problem with enough latent pain to fuel a solution. It doesn't matter how bad a problem is if nobody wants to solve it.

Meta-Coordination Issues

From a systems perspective, for people to interact with Actuator they first need to hear about Actuator. Getting people to buy into the plan and concept is itself a thorny coordination and Common Knowledge problem. This didn't seem to be factored into the strategy, so almost nobody ended up using Actuator. Some kind of marketing campaign was probably necessary to bring the project to life.

Not Clear How The Move Would Happen

A lot of necessary questions about where the community would move and how the transition would actually work just plain weren't answered.

Worse still, the one entry for the Facebook exodus was truly low effort:

Title: Coordinated Facebook Exodus
Description: I agree to leave facebook if ten million other people agree to leave with me.
Threshold: 10000000
Verified signups: 7

For whatever reason nobody felt compelled to make a better one, probably because nobody else seemed to be putting in effort either.

My Idea: Use The Costly Signaling, Luke

Moving beyond the poor implementation details, let's say you had a perfectly working version of Actuator. It's still not clear that people will follow through on their promise when the mail gets sent. I have an obvious rules patch for this: mix a little Beeminder into the equation. Have people pledge money that they will do the thing they've signed up for; if they don't, Actuator charges them. If they do it and provide proof, they keep their money. And if a certain amount of time passes without enough signups, nobody gets charged. This solves the biggest problem the system has: you don't really know how serious people actually are. If people sign up for a coordinated deletion of their Facebook accounts, and each person puts $20 behind it, you can be fairly sure that when the threshold gets hit, accounts will actually be deleted.
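Here is a minimal sketch of how that deposit scheme could bolt onto the threshold mechanism, again with hypothetical names (CommitmentPool, submit_proof, settle). A real version would authorize and capture charges through a payment processor rather than returning strings:

```python
from datetime import datetime

class CommitmentPool:
    """Beeminder-style deposits on top of Actuator. Illustrative only."""

    def __init__(self, threshold: int, stake: int, deadline: datetime):
        self.threshold = threshold   # signups needed to activate the pledge
        self.stake = stake           # dollars each signer puts at risk
        self.deadline = deadline     # after this, unproven pledges get charged
        self.signers: dict[str, bool] = {}   # email -> proof submitted?

    def sign_up(self, email: str) -> None:
        self.signers[email] = False  # deposit authorized, no proof yet

    def submit_proof(self, email: str) -> None:
        self.signers[email] = True   # e.g. a screenshot of the deleted account

    def settle(self, now: datetime) -> dict[str, str]:
        """Decide each signer's fate once settlement is attempted."""
        if len(self.signers) < self.threshold:
            # Threshold never hit: nobody gets charged.
            return {email: "refunded" for email in self.signers}
        if now < self.deadline:
            return {}                # too early to settle
        return {email: ("kept stake" if proved else f"charged ${self.stake}")
                for email, proved in self.signers.items()}
```

The key design choice is that settlement is conditional on the threshold: stakes only become live commitments once enough people have joined, so early signers risk nothing on a coordination attempt that fizzles.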

But What About The Mouse?

An important question left unanswered is what this project's failure says about the community. As Somnilogical points out, this is a great idea that was barely tried. More important, perhaps, is how little anguish there actually is about the use of Facebook. Zvi wrote a series of posts about it, and the Seattle community at least tried coordinating a move to Discord (with some success!). But the fact that this clearly suboptimal state was left alone by basically every actor involved, including Eliezer Yudkowsky, who prefers drafting his posts on Facebook, is absurd. It says that the people who populate the rationalist community aren't serious about ideas like raising the sanity waterline, or about putting in hard work to get better outcomes on things that are obvious contributors to mental illness. A full analysis of this phenomenon and its causes is best left for another time, but a good starting point is Bendini's postmortem of the Bay Area community.

Summary & Lessons Learned

The rationalist community's use of Facebook is an inadequate equilibrium. A group of people put real money into making a simple tool meant to enable transitions from inadequate equilibria to better, 'adequate' ones. The tool was built in the most basic sense, but none of the necessary polishing or marketing work was done, because the bounty only stipulated raw tool creation. Better planning and project management could have avoided this outcome. The fact that the outcome was collectively allowed to stand implies serious issues with the foundational makeup of the rationalist community.
