<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>The Last Rationalist</title><link href="https://www.thelastrationalist.com/" rel="alternate"></link><link href="https://www.thelastrationalist.com/feeds/all.atom.xml" rel="self"></link><id>https://www.thelastrationalist.com/</id><updated>2020-06-16T00:00:00+02:00</updated><entry><title>History and Warrant: Contra More and Yudkowsky On Religious Substitutes (Part Two)</title><link href="https://www.thelastrationalist.com/history-and-warrant-contra-more-and-yudkowsky-on-religious-substitutes-part-two.html" rel="alternate"></link><published>2020-06-16T00:00:00+02:00</published><updated>2020-06-16T00:00:00+02:00</updated><author><name>The Last Rationalist</name></author><id>tag:www.thelastrationalist.com,2020-06-16:/history-and-warrant-contra-more-and-yudkowsky-on-religious-substitutes-part-two.html</id><summary type="html">&lt;p&gt;One of the things that makes The Sequences frustrating is how Eliezer Yudkowsky's 
contempt for what came before him neuters their impact. The most obvious damage is 
how his refusal to follow academic norms resulted in &lt;a href="https://www.greaterwrong.com/posts/64FdKLwmea8MCLWkE/the-neglected-virtue-of-scholarship"&gt;an equilibrium of weak scholarship&lt;/a&gt;. 
He didn't so much as bother to &lt;a href="https://www.readthesequences.com/Bibliography"&gt;give his …&lt;/a&gt;&lt;/p&gt;</summary><content type="html">&lt;p&gt;One of the things that makes The Sequences frustrating is how Eliezer Yudkowsky's 
contempt for what came before him neuters their impact. The most obvious damage is 
how his refusal to follow academic norms resulted in &lt;a href="https://www.greaterwrong.com/posts/64FdKLwmea8MCLWkE/the-neglected-virtue-of-scholarship"&gt;an equilibrium of weak scholarship&lt;/a&gt;. 
He didn't so much as bother to &lt;a href="https://www.readthesequences.com/Bibliography"&gt;give his Sequences a bibliography&lt;/a&gt; 
until the &lt;em&gt;Rationality: AI to Zombies&lt;/em&gt; edition was published in 2015. The bibliography 
he did publish seems non-comprehensive to me (e.g. &lt;a href="https://www.readthesequences.com/Extensions-And-Intensions"&gt;where is Language In Thought 
And Action&lt;/a&gt; by Hayakawa?). 
But I think the deepest damage was done by Yudkowsky's habit of disassociating from 
thinkers he no longer totally agrees with, including his past self. This disassociation 
decontextualized his ideas, making it much harder for his readers to get a complete 
model of how they work and what to draw on for further development. &lt;/p&gt;
&lt;p&gt;One of the stranger acts of disassociation is Yudkowsky's &lt;a href="https://www.readthesequences.com/Search?q=future+shock&amp;amp;action=search"&gt;omission&lt;/a&gt; 
of his &lt;a href="http://www.sl4.org/shocklevels.html"&gt;future shock levels&lt;/a&gt; from The Sequences. 
Future shock levels are not the most rigorous idea, but considering that high 
future shock is more or less the unique ingredient that makes The Sequences what 
they are, you'd think he would be more self-aware about it. Part of the idea behind this 
omission seems to be that high future shock necessarily follows from a deep 
consideration of physical possibility, but that doesn't seem to be the case. As 
far as I know, people with physics Ph.D.s do not automatically turn into Extropians 
and Singularitarians and Transhumanists. There is a certain element of storytelling 
that seems to be required for people to connect the dots in that particular way reliably. &lt;/p&gt;
&lt;p&gt;A similar act of disassociation wrote Max More and his Extropians out of the narrative. 
&lt;a href="http://www.lucifer.com/exi-lists/extropians.96/2519.html"&gt;Nevermind that Yudkowsky posted to their mailing list&lt;/a&gt; 
when he was 17, as Tom Chivers recounts in his &lt;em&gt;The AI Does Not Hate You&lt;/em&gt; Extropians 
were downstream of the singularity (Chivers, 2019). In Eliezer's mind this presumably 
meant they no longer merited a mention in The Sequences. Instead of getting a 
principled explanation of what the conceptual journey upstream to a technological 
singularity looks like starting from common sense intuitions, we got the post 
&lt;a href="https://www.readthesequences.com/Raised-In-Technophilia"&gt;Raised in Technophilia&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
The first crack in my childhood technophilia appeared in, I think, 1997 or 1998, at the point where I noticed my fellow technophiles saying foolish things about how molecular nanotechnology would be an easy problem to manage. (As you may be noticing yet again, the young Eliezer was driven to a tremendous extent by his ability to find flaws—I even had a personal philosophy of why that sort of thing was a good idea.)
&lt;br&gt; &lt;br&gt;
There was a debate going on about molecular nanotechnology, and whether offense would be asymmetrically easier than defense. And there were people arguing that defense would be easy. In the domain of nanotech, for Ghu’s sake, programmable matter, when we can’t even seem to get the security problem solved for computer networks where we can observe and control every one and zero. People were talking about unassailable diamondoid walls. I observed that diamond doesn’t stand off a nuclear weapon, that offense has had defense beat since 1945 and nanotech didn’t look likely to change that. 
&lt;br&gt; &lt;br&gt;
And by the time that debate was over, it seems that the young Eliezer— caught up in the heat of argument—had managed to notice, for the first time, that the survival of Earth-originating intelligent life stood at risk. 
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Eliezer Yudkowsky,
&lt;i&gt;&lt;a href="https://www.readthesequences.com/Raised-In-Technophilia"&gt;Raised in Technophilia&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;p&gt;It was on account of this intellectual skittishness that I didn't realize my 
intuitions were more SL3 than SL4 until several years after I'd read The Sequences. 
This same reluctance also led me (and many others) to fail to understand what was 
so 'special' about them in the first place. I didn't understand until I decided 
&lt;a href="https://www.thelastrationalist.com/slack-club.html"&gt;I'd be better off starting over somewhere else&lt;/a&gt; 
and I began dissecting the concept of 'rationality' so I could replicate it wherever 
I went. In a particularly stupid episode of map-territory confusion, I focused on 
the word 'rationality' and read books like Tetlock's &lt;em&gt;Superforecasting&lt;/em&gt; and 
Lewis's &lt;em&gt;Moneyball&lt;/em&gt;. It was while reading these books that I had a startling 
realization: They were good, but no amount of reading them would have ever let 
me write Eliezer's Sequences. The Sequences were centrally about math, physics, 
and this 'weird AI futurology stuff' (which was supposed to be based on the math 
and physics). All the discussion of science and cognitive biases and Bayes was 
important but ultimately noncentral to Eliezer's real purpose for writing. &lt;/p&gt;
&lt;p&gt;I'm thankful that my 14-year-old self wasn't mature enough to fully parse and 
internalize Eliezer's life advice, because it likely would have turned out very badly. 
The usual outcome looks something like a promising young person devoting all their 
time to reading math and physics textbooks then moving to the Bay Area:&lt;/p&gt;
&lt;blockquote&gt;
More of my story: When I first encountered the rationality community, I read the Sequences and a lot of rational fiction. I had already dropped nearly everything several months prior to work on AGI after hearing arguments about its importance from a classmate in college, and I pivoted to working on friendliness. I worked on Pascal’s Mugging and naturalizing Solomonoff Induction, since these were two of the three open problems I was aware of from Less Wrong. (My work was mostly superseded by Logical Induction, so I never fully shared it.) I tried to read the long list of math textbooks that MIRI recommended for aspiring researchers. I did this for about two years, before deciding to move to the Bay in order to figure out what was the problem that I had the most relative advantage to solve (this led to me cofounding Rationalist Fleet) and to have tighter feedback loops interacting with people working on AI safety research. I optimized as though I was a part of a polity of people working to save the world. Given the character of what Eliezer had written, I had assumed that was what MIRI and the rationalist community were.
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Gwen D, &lt;i&gt;Case study: CFAR&lt;/i&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once they get there, sooner or later they realize a couple of things. The first thing 
they probably notice is that the Bay Area is a very bad place to be for someone
starting their career. There are entire counties in the Bay Area where the low-income
threshold is over six figures (CBS San Francisco, 2018). Worse still, the Bay is a place
set up to make powerful people more powerful; the default expectation for someone
moving in unestablished is financial and interpersonal suicide. I happened
to know to stay away in part because of a passage I'd read in Alan Bullock's &lt;em&gt;Hitler: A Study In Tyranny&lt;/em&gt; (Bullock, 1964):&lt;/p&gt;
&lt;blockquote&gt;
Vienna, at the beginning of 1909, was still an imperial city, capital
of an Empire of fifty million souls stretching from the Rhine to the Dniester,
from Saxony to Montenegro. The aristocratic baroque
city of Mozart's time had become a great commercial and industrial 
centre with a population of two million people. Electric
trams ran through its noisy and crowded streets. The massive,
monumental buildings erected on the Ringstrasse in the last
quarter of the nineteenth century reflected the prosperity and self-confidence
of the Viennese middle class; the factories and poorer
streets of the outer districts the rise of an industrial working class.
To a young man of twenty, without a home, friends, or resources,
it must have appeared a callous and unfriendly city: Vienna was
no place to be without money or a job. The four years that
now followed, from 1909 to 1913, Hitler himself says, were the
unhappiest of his life. They were also in many ways the most
important, the formative years in which his character and opinions
were given definite shape.
&lt;/blockquote&gt;

&lt;p&gt;The second thing they realize is that MIRI and CFAR are not actually particularly
good at 'saving the world'. I'm not in a good position to evaluate MIRI's impact,
but as a total outsider who doesn't pay much attention to AI Risk research, it doesn't
look stellar. CFAR, on the other hand, focuses on a subject I actually know a lot about,
and I've privately critiqued them for years. Gwen herself points out that the CFAR handbook
&lt;a href="https://pastebin.com/g0wbr5gE"&gt;supposedly&lt;/a&gt; hasn't changed very much between its
&lt;a href="https://sinceriously.fyi/wp-content/uploads/2020/01/2016-CFAR-Handbook.pdf"&gt;2016&lt;/a&gt; and 
&lt;a href="https://sinceriously.fyi/wp-content/uploads/2020/01/2019-CFAR-handbook.pdf"&gt;2019&lt;/a&gt; editions.
I can't personally attest to this, but I can say that the parts of the CFAR handbook 
I've read felt distinctly inferior to The Sequences. Not just in writing quality, 
but in the sense of providing a coherent worldview. &lt;a href="https://www.greaterwrong.com/users/saidachmiz"&gt;My friend Said Achmiz&lt;/a&gt;
once pointed out to me that The Sequences were &lt;em&gt;not&lt;/em&gt; just "a collection of crap on 
the Internet", and this was the basic secret to their power. Without context that
might not be particularly insightful; a "collection of crap on the Internet" would
be something like &lt;a href="https://youarenotsosmart.com/"&gt;You Are Not So Smart&lt;/a&gt;, which is
a big list of cognitive biases. A lot of people try to replicate Eliezer's Sequences
by making a 'big list of biases' or trying to find some new list of relevant sounding
topics to teach. &lt;/p&gt;
&lt;p&gt;'A lot of people' apparently includes CFAR, since as far as I 
can tell the pieces in the handbook don't actually come together to form a complete
worldview; they're just cool independent concepts to make you 'more rational'. Considering
that they have The Sequences as a starting example, and Eliezer Yudkowsky is 
presumably available to consult, this is disgraceful. Yudkowsky &lt;a href="https://wiki.lesswrong.com/wiki/Chat_Logs/2009-04-11"&gt;already admitted&lt;/a&gt; 
that the basic recipe is to take your coherent worldview and then break it up into
a bunch of TVTropes-like pages with click-y names/phrases for concepts, densely
linked together and mutually referencing. When I went to refamiliarize myself 
with some posts for this essay I found it led me into a violent tabsplosion that
quickly took over my browser. The design is just as potent in 2020 as it was in 2009
(&lt;a href="https://www.thelastrationalist.com/slack-club.html"&gt;this is incidentally &lt;em&gt;why&lt;/em&gt;&lt;/a&gt;
the community selects for an ADD phenotype).&lt;/p&gt;
&lt;p&gt;I think a great deal of this can be traced back to the fact that rationality &lt;a href="https://www.thelastrationalist.com/rationality-is-not-systematized-winning.html"&gt;was
defined as 'the way of winning'&lt;/a&gt;,
which does not &lt;a href="https://www.readthesequences.com/Fake-Explanations"&gt;constrain expectations&lt;/a&gt; 
about what a 'rationalist' should be focusing on. What I imagine happened
was that the CFAR people got together and said "okay, let's work on rationality", and this
of course necessitated that they pick some subjects to focus on. When they got to
that part, it probably wasn't clear enough in their heads what rationality &lt;em&gt;was&lt;/em&gt;
such that they could reliably build on and improve it 
(&lt;a href="https://www.greaterwrong.com/posts/jLwFCkNKMCFTCX7rL/circling-as-cousin-to-rationality"&gt;circling&lt;/a&gt;, really?). 
It's notable to me that when I wanted to break down 'rationality' so I could recreate it, I focused on the
implications of the word and 'way of winning' more than just going back and dissecting
The Sequences. If I'd done the latter, I'd have probably noticed the &lt;em&gt;real&lt;/em&gt; component
parts (high future shock, love for the world, sanity, agency) way sooner. In any
case it also means that CFAR's only real metric for rationality-training success is (as far as I know)
a non-rigorous survey they put out asking people if CFAR improved their life.
When you spend a lot of money on a course and there's social pressure to like the
ingroup 'rationality' 'research' organization you're pretty likely to say yes
even if the impact on your life was zero. &lt;/p&gt;
&lt;p&gt;All of this has led the 'rationalist community' to have a more or less uninterrupted
identity crisis &lt;a href="https://www.greaterwrong.com/posts/9jF4zbZqz6DydJ5En/the-end-of-sequences"&gt;since Eliezer stopped writing daily posts&lt;/a&gt; in 2009.
I still think of the time when a bunch of people decided monarchy was awesome and
started calling themselves 'Neoreactionaries' without getting banned as the point
when ratsphere absurdity reached a rolling boil. Certainly we all have fond memories
of &lt;a href="https://www.greaterwrong.com/posts/RWKXeM49Stc4aEcEc/neo-reactionaries-why-are-you-neo-reactionary#comment-WRwaJq65aRx9Akpy2"&gt;right wing trolls telling female cryonicists they should expect to be domestic
slaves when they're revived in the future&lt;/a&gt;.
This theater troupe &lt;a href="https://archive.org/details/the-silicon-ideology"&gt;got rave reviews&lt;/a&gt;, 
so it was only natural to follow it with a dadaist encore of &lt;a href="https://www.greaterwrong.com/posts/4ciy6PCDWfGCxqHez/attacking-enlightenment"&gt;half incoherent posts about enlightenment&lt;/a&gt;,
&lt;a href="https://approachingaro.org/twilight-of-the-isms"&gt;proclamations that consistent epistemology is so 20th century&lt;/a&gt;, &lt;a href="https://www.greaterwrong.com/posts/jLwFCkNKMCFTCX7rL/circling-as-cousin-to-rationality"&gt;praise for group bonding rituals whose participants literally cannot describe the benefits&lt;/a&gt; and &lt;a href="https://zizians.info"&gt;vegan TDT-basilisk cults&lt;/a&gt;.
As I understand it, Eliezer felt he would be making his 'rationality' more 
palatable to Singerism if he kept the quasi-religious (or as I will later argue,
just plain religious) Extropian stuff to a minimum. But I think any person who
has been seriously influenced by Yudkowsky should be aware of his actual feelings
about goal setting and accomplishment:&lt;/p&gt;
&lt;blockquote&gt;
Eliezer on Sept 12, 2012, on: Ask PG: What Is The Most Frighteningly Ambitious I...
&lt;br&gt; &lt;br&gt;
Can you say where the scariest and most ambitious convincing pitch was on the following scale?
&lt;br&gt; &lt;br&gt;
1) We're going to build the next Facebook!
&lt;br&gt; &lt;br&gt;
2) We're going to found the next Apple!
&lt;br&gt; &lt;br&gt;
3) Our product will create sweeping political change! This will produce a major economic revolution in at least one country! (Seasteading would be change on this level if it worked; creating a new country successfully is around the same level of change as this.)
&lt;br&gt; &lt;br&gt;
4) Our product is the next nuclear weapon. You wouldn't want that in the wrong hands, would you?
&lt;br&gt; &lt;br&gt;
5) This is going to be the equivalent of the invention of electricity if it works out.
&lt;br&gt; &lt;br&gt;
6) We're going to make an IQ-enhancing drug and produce basic change in the human condition.
&lt;br&gt; &lt;br&gt;
7) We're going to build serious Drexler-class molecular nanotechnology.
&lt;br&gt; &lt;br&gt;
8) We're going to upload a human brain into a computer.
&lt;br&gt; &lt;br&gt;
9) We're going to build a recursively self-improving Artificial Intelligence.
&lt;br&gt; &lt;br&gt;
10) We think we've figured out how to hack into the computer our universe is running on. 
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Eliezer Yudkowsky,
&lt;i&gt;&lt;a href="https://news.ycombinator.com/item?id=4510702"&gt;September 12th, 2012&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Bibliography&lt;/h2&gt;
&lt;p&gt;Chivers, T. (2019). &lt;em&gt;The AI does not hate you: Superintelligence, rationality and the race to save the world&lt;/em&gt;. London: Weidenfeld &amp;amp; Nicolson.&lt;/p&gt;
&lt;p&gt;CBS San Francisco. (2018, June 26). &lt;em&gt;HUD: $117,000 now ‘low-income’ in 3 bay area counties&lt;/em&gt;. https://sanfrancisco.cbslocal.com/2018/06/26/hud-117000-low-income-san-mateo-san-francisco-marin/&lt;/p&gt;
&lt;p&gt;Bullock, A. (1964). &lt;em&gt;Hitler: A study in tyranny&lt;/em&gt;. New York: Harper &amp;amp; Row.&lt;/p&gt;</content></entry><entry><title>History and Warrant: Contra More and Yudkowsky On Religious Substitutes (Part One)</title><link href="https://www.thelastrationalist.com/history-and-warrant-contra-more-and-yudkowsky-on-religious-substitutes-part-one.html" rel="alternate"></link><published>2020-06-13T00:00:00+02:00</published><updated>2020-06-13T00:00:00+02:00</updated><author><name>The Last Rationalist</name></author><id>tag:www.thelastrationalist.com,2020-06-13:/history-and-warrant-contra-more-and-yudkowsky-on-religious-substitutes-part-one.html</id><summary type="html">&lt;blockquote&gt;But the shock was fleeting, I knew the Law: &lt;i&gt;No gods, no magic, and ancient heroes are milestones to tick off in your rearview mirror.&lt;/i&gt;
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Eliezer Yudkowsky,
&lt;i&gt;&lt;a href="https://www.readthesequences.com/Einsteins-Superpowers"&gt;Einstein’s Superpowers&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;blockquote&gt;
The essence of any &lt;b&gt;religion&lt;/b&gt; is faith and worship.
Generally religions hold that there is a god or gods which …&lt;/blockquote&gt;</summary><content type="html">&lt;blockquote&gt;But the shock was fleeting, I knew the Law: &lt;i&gt;No gods, no magic, and ancient heroes are milestones to tick off in your rearview mirror.&lt;/i&gt;
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Eliezer Yudkowsky,
&lt;i&gt;&lt;a href="https://www.readthesequences.com/Einsteins-Superpowers"&gt;Einstein’s Superpowers&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;blockquote&gt;
The essence of any &lt;b&gt;religion&lt;/b&gt; is faith and worship.
Generally religions hold that there is a god or gods which
give our lives meaning by assigning us a role in a grand plan
created and controlled by external supernatural forces. Our
assigned function is to obey and praise these forces or
entities. However, the &lt;i&gt;essence&lt;/i&gt; of religion is faith and
worship rather than any belief in a god.
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Max More,
&lt;i&gt;&lt;a href="https://github.com/Extropians/Extropy/blob/master/ext6.pdf"&gt;Transhumanism: Towards a Futurist Philosophy (1990)&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;blockquote&gt;
Only a fool learns from his own mistakes. The wise man learns from the mistakes of others.
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Otto von Bismarck  
&lt;/blockquote&gt;

&lt;p&gt;Helpful as they were to me, HPMOR and The Sequences taught me a lot of bad habits that I had to unlearn before I could do useful work. A lot of why I started writing The Last Rationalist was so I could pass those lessons on to people who weren't as fortunate. One of those bad habits was &lt;a href="https://www.thelastrationalist.com/literature-review-for-academic-outsiders-what-how-and-why.html"&gt;not paying attention to previous work&lt;/a&gt;. Another was to view scientific and philosophical history as nothing more than a procession of mistakes, useful to me only as instruction in what not to do. 'The ancestors' are not to be looked towards for lessons or guidance, rather they are strict inferiors who have nothing to teach you.&lt;/p&gt;
&lt;p&gt;This contemptuous view of history permeates The Sequences. The most direct statement I can find on short notice is:&lt;/p&gt;
&lt;blockquote&gt;
Once upon a time, I gave a Mysterious Answer to a mysterious question, not realizing that I was making exactly the same mistake as astrologers devising mystical explanations for the stars, or alchemists devising magical properties of matter, or vitalists postulating an opaque “élan vital” to explain all of biology. 
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Eliezer Yudkowsky,
&lt;i&gt;&lt;a href="https://www.readthesequences.com/Making-History-Available"&gt;Making History Available&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;p&gt;But there's also plenty of implicit statements like this one:&lt;/p&gt;
&lt;blockquote&gt;
Or sadder: Maybe I just wasted too much time on setting up the resources to support me, instead of studying math full-time through my whole youth; or I wasted too much youth on non-mathy ideas. And this choice, my past, is irrevocable. I’ll hit a brick wall at 40, and there won’t be anything left but to pass on the resources to another mind with the potential I wasted, still young enough to learn. So to save them time, I should leave a trail to my successes, and post warning signs on my mistakes. 
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Eliezer Yudkowsky,
&lt;i&gt;&lt;a href="https://www.readthesequences.com/The-Level-Above-Mine"&gt;The Level Above Mine&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;p&gt;The logical conclusion of this would be that any time spent reading history books was wasted; in fact, any time spent on engaging with &lt;em&gt;the world&lt;/em&gt; as opposed to the multiverse through physics and mathematics was wasted. &lt;a href="https://www.readthesequences.com/My-Wild-And-Reckless-Youth"&gt;Eliezer Yudkowsky centers his notion of rationality&lt;/a&gt; on the minds of physicists. Reading these statements it feels like I can see past Eliezer's writing and into his soul. History isn't particularly important there; it's an ad-hoc series of events that doesn't compress well. History doesn't have (&lt;a href="https://en.wikipedia.org/wiki/Cliodynamics"&gt;officially endorsed&lt;/a&gt;) universal rules or predictive models, just this Library of Babel from which you can pull down book after book, your knowledge extending only as far as those books. I suspect that in Eliezer's mind this feels like learning about an ad-hoc theology, or all the species that happen to exist on earth. To his intuition it is a fundamentally lower form of activity than the ostensible hyper-insight of physics and mathematics. &lt;/p&gt;
&lt;p&gt;And yet he does have some positive things to say about studying history. He &lt;a href="https://www.readthesequences.com/Einsteins-Superpowers"&gt;wrote&lt;/a&gt; &lt;a href="https://www.readthesequences.com/Einsteins-Speed"&gt;three&lt;/a&gt; &lt;a href="https://www.readthesequences.com/Einsteins-Arrogance"&gt;posts&lt;/a&gt; based on some incidental history he encountered about Einstein. They're actually some of the most important posts in The Sequences, in terms of understanding Eliezer's philosophy. One of his central claims, that it's possible for careful reasoning to outpace 'science' in the empirical sense, is more or less held up by Einstein's historical example. I wonder how strongly he believed this hypothesis before he read about Einstein. I'd imagine it was mentally available, but it also seems likely that it was merely a possible way that the world could work, rather than a thing he thought of as central to scientific progress. &lt;/p&gt;
&lt;p&gt;History isn't just a Library of Babel; it's a great deal of the &lt;a href="https://www.thelastrationalist.com/necessity-and-warrant.html"&gt;warrant&lt;/a&gt; that makes a document like The Sequences possible in the first place. You would not think "what if eventually humanity becomes so technologically powerful that it invents everything there is to invent in one sprint" unless you already have working models of things like 'invention', 'technology', 'technological progress', etc. Maybe a superintelligent AI can just imagine every plausible history once it's been shown an apple or a pebble, but I doubt &lt;em&gt;you&lt;/em&gt; can. It's easy to imagine that we've already mined history for all the important goodies, and what's left after you have the concepts necessary for The Sequences is just royal bloodlines and anonymous battles. I don't think that's true either.&lt;/p&gt;
&lt;p&gt;One historical moment that I think really underscores this is Colonel John Boyd's invention of the OODA loop. The basic origin of the OODA loop was Boyd's experience reviewing flight combat statistics from the Korean War (which he participated in). He noticed that the kill-death ratio of American pilots to Soviet pilots in that conflict was extremely skewed; the typical number given is a literal 10:1 K:D (Coram, 2002). To reiterate this for emphasis: &lt;em&gt;American airmen shot down 10 enemy aircraft on average for every fighter they lost&lt;/em&gt;. Nobody had a good explanation for this; most people surmised that the American pilot training must be really good and let their curiosity end there. John Boyd did not do that; instead he &lt;a href="https://www.readthesequences.com/Noticing-Confusion-Sequence"&gt;noticed he was confused&lt;/a&gt;, but that didn't give him an answer. His confusion would have to wait. This happened in the early 1950s.&lt;/p&gt;
&lt;p&gt;It's worth taking a moment to note that in many ways John Boyd is the antithesis of Eliezer Yudkowsky's central notions of a rationalist hero. The intelligence test he took before entering high school told him he only had an IQ of 90 (Coram, 2002). Boyd would routinely use this number to disarm people, who figured they didn't have much to fear from someone with an IQ of only 90. He was an athlete first, and picked his college based on the prospect of being a competitive swimmer. When he joined the Air Force he became famous for his ability to stay undefeated in air combat, earning the nickname 'Forty-Second Boyd' for the speed at which he would dispatch challengers (Coram, 2002). The &lt;em&gt;Aerial Attack Study&lt;/em&gt;, which changed the way every pilot fought, was based on his empirical, personal knowledge of air fighting. His eclectic intellectualism was in some sense forced upon him by the college degree he had to earn to advance in the Air Force. His friends knew him as a hardass who liked crude language and trips to the bar with buds. He left behind a slim corpus of only a handful of written documents.&lt;/p&gt;
&lt;p&gt;And yet it was Boyd who noticed he was confused about the kill statistics in the Korean War. He eventually developed his &lt;a href="https://en.wikipedia.org/wiki/Energy%E2%80%93maneuverability_theory"&gt;E-M theory&lt;/a&gt;, which characterized the performance of aircraft well enough to begin rigorously designing combat planes (Coram, 2002). What it did not do, however, was explain the results from the Korean theater, where an analysis based on raw performance would have predicted the Soviet MiG-15 to have the advantage (Coram, 2002). That the actual results were a stunning slaughter in the other direction implied to Boyd that there was something deeply important he did not understand about air combat. He started doing research into philosophy and physics and 'obscure' books to figure out what that thing was. He considered &lt;em&gt;every&lt;/em&gt; aspect of both planes involved in the conflict, in the hope that he could single out some clue as to what made such a difference. Eventually he homed in on the controls, which were more sluggish on the MiG-15, as a deeply important factor (Coram, 2002). This reading and observation eventually developed into the OODA loop in the 1970s.&lt;/p&gt;
&lt;p&gt;What is interesting to me about this example (besides its fascinating protagonist) is that it is fundamentally different from every other example I've seen Eliezer give for the concept of 'noticing confusion'. In his writing, including HPMOR, noticing confusion and resolving it happen close together. You notice you're confused, and then you figure out why. Boyd, however, noticed he was confused years beforehand, and it was only much later in his career that the observation paid off. This implies a bunch of potential enhancements to the 'noticing confusion' method. It might make sense for aspiring rationalists to keep a 'confusion journal', much like the idea I once heard of keeping a journal for every time someone asks you a question. Other people ask you a question when they're confused. When you're confused, you're asking a question, and it might make sense to write that question down even if you can't answer it right away.&lt;/p&gt;
&lt;p&gt;The importance of history goes beyond just finding incremental improvements to Technique. As Eliezer himself admits, understanding the strangeness (and underlying fundamental normality) of our past is important to &lt;a href="http://www.sl4.org/shocklevels.html"&gt;internalizing a high future shock perspective&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
So the next time you doubt the strangeness of the future, remember how you were born in a hunter-gatherer tribe ten thousand years ago, when no one knew of Science at all. Remember how you were shocked, to the depths of your being, when Science explained the great and terrible sacred mysteries that you once revered so highly. Remember how you once believed that you could fly by eating the right mushrooms, and then you accepted with disappointment that you would never fly, and then you flew. Remember how you had always thought that slavery was right and proper, and then you changed your mind. Don’t imagine how you could have predicted the change, for that is amnesia. Remember that, in fact, you did not guess. Remember how, century after century, the world changed in ways you did not guess.
&lt;br&gt; &lt;br&gt;
Maybe then you will be less shocked by what happens next. 
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Eliezer Yudkowsky,
&lt;i&gt;&lt;a href="https://www.readthesequences.com/Making-History-Available"&gt;Making History Available&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;p&gt;Yet I can't help but be struck by the way in which Eliezer has taken his shallow realization that history is made of people more or less like himself:&lt;/p&gt;
&lt;blockquote&gt;
I thought the lesson of history was that astrologers and alchemists and vitalists had an innate character flaw, a tendency toward mysterianism, which led them to come up with mysterious explanations for non-mysterious subjects. But surely, if a phenomenon really was very weird, a weird explanation might be in order? 
&lt;br&gt; &lt;br&gt;
It was only afterward, when I began to see the mundane structure inside the mystery, that I realized whose shoes I was standing in. Only then did I realize how reasonable vitalism had seemed at the time, how surprising and embarrassing had been the universe’s reply of, “Life is mundane, and does not need a weird explanation.” 
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Eliezer Yudkowsky,
&lt;i&gt;&lt;a href="https://www.readthesequences.com/Failing-To-Learn-From-History"&gt;Failing to Learn from History&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;p&gt;And failed to apply it to historical theology, where he confuses the modern 'invisible dragon' version of theism with the original beliefs in god that were legitimate intellectual developments:&lt;/p&gt;
&lt;blockquote&gt;
There is an acid test of attempts at post-theism. The acid test is: “If religion had never existed among the human species—if we had never made the original mistake—would this song, this art, this ritual, this way of thinking, still make sense?” 
&lt;br&gt; &lt;br&gt;
If humanity had never made the original mistake, there would be no hymns to the nonexistence of God. But there would still be marriages, so the notion of an atheistic marriage ceremony makes perfect sense—as long as you don’t suddenly launch into a lecture on how God doesn’t exist. Because, in a world where religion never had existed, nobody would interrupt a wedding to talk about the implausibility of a distant hypothetical concept. They’d talk about love, children, commitment, honesty, devotion, 
but who the heck would mention God? 
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Eliezer Yudkowsky,
&lt;i&gt;&lt;a href="https://www.readthesequences.com/Is-Humanism-A-Religion-Substitute"&gt;Is Humanism a Religion Substitute?&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;p&gt;Yudkowsky puts religion into a separate magisterium in the realm of historical philosophy, assigning it a special derision and loathing. 'Mistake' is a very strong word here; nobody would say Newton made a 'mistake' by discovering his laws of motion without also discovering general relativity. A mistake implies that humanity should never have believed in god, that it never made sense in the history of ideas for this thing to have happened. I don't think someone who engaged earnestly with the history of science and philosophy would think that's true. Perhaps Eliezer's memory is getting fuzzy, so let me ask:&lt;/p&gt;
&lt;p&gt;Do you remember when you worshipped nature as dozens of independent agents in your own image, each controlling part of your environment (Farrell &amp;amp; Hart, 2011)? Do you remember when you stopped, when Christ told Paul to do his will and Paul swept the world with a doctrine of one god who had conceived a totally consistent universe according to His plan? Maybe you can remember meeting in secret tombs beneath Rome to discuss this heresy, and watching enlightenment flow across society when it overthrew the Roman gods. Do you remember when you founded a church to embody these ideas, and when your civilization collapsed you stepped in to help preserve some of the only remaining knowledge from antiquity? You split your god into Two Books, a book of Scripture and a book of Nature, and you studied the natural world so you could better understand the mind of god (Principe, 2013).&lt;/p&gt;
&lt;p&gt;You might remember when your church became corrupt and secular, demanding money to forgive sins and focusing more on worldly affairs than god (McNeill, 1954). Then Gutenberg's printing press freed you from having to embody a doctrine of one god and a consistent universe in a church at all; you developed humanism and the idea of &lt;em&gt;sola scriptura&lt;/em&gt;, the expectation that every pious person could come to know Scripture and Nature for themselves without a priest as intermediary (McNeill, 1954). Do you remember when you studied Nature so deeply that you began to doubt Scripture, when you stopped believing that stars were an exegesis of heaven and began to conceive of universal laws without god? During secret meetings in the homes of wealthy patrons you learned you weren't alone, and you discussed atheism and the enlightenment of humanity without hope of god in hushed whispers (Herbert, 1829).&lt;/p&gt;
&lt;p&gt;And then, piece by piece, you began dismantling your god: you learned the scientific method and stopped believing in stones that turned lead to gold or hands that turned water into wine just because an authority or crowd of hysterical witnesses told you they were so (Principe, 2013). Your god became an abstraction, a genius watchmaker who had designed the universe to be the best of all possible worlds and then left it to tick until the springs wound down. You learned the &lt;em&gt;Origin of Species&lt;/em&gt;, and that heaven was a vast expanse of space that stretched out around you for an unfathomable distance. You learned our world was not the center of the universe and that humanity is not the apex of creation, and you began to internalize that your god did not exist.&lt;/p&gt;
&lt;p&gt;Then you forgot that this ever wasn't obvious to you, but I remember it well.&lt;/p&gt;
&lt;h2&gt;Bibliography&lt;/h2&gt;
&lt;p&gt;Coram, R. (2002). &lt;em&gt;Boyd: The fighter pilot who changed the art of war&lt;/em&gt;. New York City: Hachette Book Group.&lt;/p&gt;
&lt;p&gt;Farrell, Dr. J.P., &amp;amp; Hart, Dr. S.D. (2011). &lt;em&gt;Transhumanism: A grimoire of alchemical agendas&lt;/em&gt;. Port Townsend, WA: Feral House.&lt;/p&gt;
&lt;p&gt;Principe, L.M. (2013). &lt;em&gt;The secrets of alchemy&lt;/em&gt;. Chicago: The University of Chicago Press.&lt;/p&gt;
&lt;p&gt;McNeill, J.T. (1954). &lt;em&gt;The history and character of Calvinism&lt;/em&gt;. New York: Oxford University Press.&lt;/p&gt;
&lt;p&gt;Herbert, A. (1829). &lt;em&gt;Nimrod: A Discourse on Certain Passages of History and Fable&lt;/em&gt; (Vol. 4). R. Priestley.&lt;/p&gt;</content></entry><entry><title>A History Of Universalist Greed</title><link href="https://www.thelastrationalist.com/a-history-of-universalist-greed.html" rel="alternate"></link><published>2020-06-03T00:00:00+02:00</published><updated>2020-06-03T00:00:00+02:00</updated><author><name>Extropian Zealot</name></author><id>tag:www.thelastrationalist.com,2020-06-03:/a-history-of-universalist-greed.html</id><summary type="html">&lt;p&gt;&lt;small&gt;Special thanks to &lt;a href="https://www.aramjetbreaksthorns.com/"&gt;Ratheka Stormbjorne&lt;/a&gt; and &lt;a href="https://hivewired.wordpress.com/"&gt;Shiloh Miyazaki&lt;/a&gt; for doing some of the research for this essay. All opinions expressed are my own however.&lt;/small&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.thelastrationalist.com/fuzzies-and-saddies-part-one-x-risk-and-motivation.html"&gt;In my essay on Fuzzies and Saddies&lt;/a&gt; I wrote about four components that were necessary to implement "Eliezer's version of extropy":&lt;/p&gt;
&lt;blockquote&gt;
&lt;ol&gt;
&lt;li&gt;
High future shock. This is …&lt;/li&gt;&lt;/ol&gt;&lt;/blockquote&gt;</summary><content type="html">&lt;p&gt;&lt;small&gt;Special thanks to &lt;a href="https://www.aramjetbreaksthorns.com/"&gt;Ratheka Stormbjorne&lt;/a&gt; and &lt;a href="https://hivewired.wordpress.com/"&gt;Shiloh Miyazaki&lt;/a&gt; for doing some of the research for this essay. All opinions expressed are my own however.&lt;/small&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.thelastrationalist.com/fuzzies-and-saddies-part-one-x-risk-and-motivation.html"&gt;In my essay on Fuzzies and Saddies&lt;/a&gt; I wrote about four components that were necessary to implement "Eliezer's version of extropy":&lt;/p&gt;
&lt;blockquote&gt;
&lt;ol&gt;
&lt;li&gt;
High future shock. This is necessary to realize that there are solutions to the problems we have, and anything really worth fighting for. That it's not all hopeless, there are glorious things within our reach.
&lt;/li&gt;

&lt;li&gt;
A love for the world and its inhabitants, &lt;a href="http://yudkowsky.net/other/yehuda/"&gt;the belief that death is Bad&lt;/a&gt;, a fully developed secular moral system. New Atheism is toxic nonsense because skepticism is toxic nonsense. The skeptic focuses only on downside risk, EY-style rationality is an improvement because it considers opportunity cost. It's not enough to not-lose in rationality, you need to capture the foregone upside.
&lt;/li&gt;

&lt;li&gt;
Sanity. You need to have a very clear view of the world, and be very well in tune with yourself, &lt;a href="https://www.thelastrationalist.com/memento-mori-said-the-confessor.html"&gt;have a strong well constructed (i.e., not full of ad-hoc garbage) identity&lt;/a&gt;, good epistemics, etc.
&lt;/li&gt;

&lt;li&gt;
Agency. You need to be well versed in the practical methods of piloting yourself to actually do things. Building habits, not giving up at the first setback, strength, etc.
&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;I currently believe that the path to this starts with agency, because without that there is no motivation to put in the work to get these other things. Agency is hard for people because the default implementation of human motivation is a &lt;a href="https://en.wikipedia.org/wiki/Satisficing"&gt;satisficer&lt;/a&gt;. You get hungry, you eat, and you are no longer hungry, until the equilibrium is disrupted and you become hungry again. People rarely act like maximizers, getting as much 'good' as they can from a situation. Perhaps part of the magic of money is that it reliably gets people to act like maximizers: people are willing to keep pursuing money no matter how much they have. People are not willing (nor able) to keep eating food after their hunger has been sated. A maximizing agent, by contrast, implicitly wants everything: the infamous &lt;a href="https://wiki.lesswrong.com/wiki/Paperclip_maximizer"&gt;paperclip maximizer&lt;/a&gt; will, if given the power to do so, consume every available resource in the universe to make more paperclips. Any full-scope agent will begin to look like a maximizer, &lt;strong&gt;regardless of their morality&lt;/strong&gt;. Maximizing and agency are not synonyms, but maximizing and Agency are.&lt;/p&gt;
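&lt;p&gt;The satisficer/maximizer distinction above can be made concrete with a toy sketch. This is my own illustration, not anything from Yudkowsky or the satisficing literature; the functions and numbers are invented for the example:&lt;/p&gt;

```python
# Toy contrast between the two agent types described above.

def satisficer(resources, need):
    """Consume only until the need (e.g. hunger) is satisfied."""
    consumed = min(resources, need)
    return consumed, resources - consumed  # (consumed, left over)

def maximizer(resources):
    """Consume everything available, regardless of any fixed need."""
    return resources, 0  # takes it all

# The satisficer stops at its equilibrium point...
assert satisficer(resources=100, need=10) == (10, 90)
# ...while the maximizer implicitly 'wants everything'.
assert maximizer(100) == (100, 0)
```

In this framing, money is unusual because it gets ordinary people to behave like the second function: there is no `need` parameter at which pursuit stops.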
&lt;h2&gt;The Gospel Of Universalist Greed (And a Warning)&lt;/h2&gt;
&lt;p&gt;This has interesting consequences for good people with any interest in being 'rational'. A rational agent that is offended by suffering, or that enjoys seeing other people live meaningful lives has no principled reason to be satisfied with just assisting the people around them. In theory they should want to be a maximizer, so that they can help people on a larger scale. They should try to cultivate a kind of Universalist Greed, by which "love thy neighbor" becomes a love for the world. 
This is illustrated intuitively by an experience I had picking up trash on the beach:&lt;/p&gt;
&lt;blockquote&gt;
One day I went with my friend to the beach to pick up trash. Now, I wasn't under any illusions that this was Saving The World or whatever, I just wanted to go for a walk with my friend and cleaning up trash on the beach sounded like a decent thing to do while we talked. And while we were doing it, we got to asking ourselves how you could scale the thing up. If you wanted to pick up more trash, because we could see that we were really fighting an uphill battle against apathy here, what could you do?
One idea was to do a livestream of ourselves doing the thing, and hopefully inspire more people to go try it by showing our cool philosophical conversations during.
Another idea was to make a trash collection game where it would use machine learning to identify and score points for different kinds of trash, but it'd be hard to prevent people from gaming that I guess. But, the really important point is that picking up all that trash was actually pretty physically taxing. I have a picture somewhere of the collection.
&lt;br&gt; &lt;br&gt;
We picked up a garbage bag full of trash, on the beach. Which birds try to eat, fish try to eat, poisons the water, etc. And I imagined big piles of trash like this on every beach in the world, and the herculean task of trying to pick it all up. All that aching in my bones was one garbage bag's worth, that pain is what it felt like to pick up a beach of trash. So you know, kind of start...zooming out. 5 beaches worth of pain, 10 beaches, 20 beaches, 50, 100...I imagined this little army of people picking up trash off beaches, at a scale where it's no longer human, it's just a number. And each monotonic increase in that number represents that pain in my bones, that ache in my back.
And the good of picking up a beach worth of trash. If your brain was designed to exist in the modern world, that's what it would feel like to do good at scale, to make a number go up that was adjusted to the right thing. It'd be the good feelings of picking up a beach worth of trash, times that tick, tick drip drip of good being done in the world if you do it at scale.
&lt;/blockquote&gt;

&lt;p&gt;Universalist Greed in this precisely-articulated sense is a novel feature of Extropy. As far as I know, before the 20th century there is no concept of a 'maximizing agent' as a value-neutral category. There are 'tyrants' and 'conquerors', but the idea of a good person who wants literally everything is foreign territory. Perhaps the closest is Niccolo Machiavelli, who was reviled as a sort of antichrist for his advocacy of rational tyranny on utilitarian grounds. It is no coincidence that a certain kind of left-wing thinker &lt;a href="https://archive.org/details/the-silicon-ideology"&gt;reacts to 'rationalist' ideas by screaming 'fascism!'&lt;/a&gt; (Armistead, 2016). This kind of frank ambition is traditionally parsed as moral and spiritual sickness.&lt;/p&gt;
&lt;p&gt;The problem with wanting everything is that everyone else wants everything too. Solving this problem is nontrivial. One simple equilibrium that works is to instruct good people to renounce their agency and to punish others who fail to renounce theirs. In practice this looks a lot like the message of the song 'Handlebars' by the Flobots. Handlebars starts with a tranquil atmosphere as the protagonist describes their carefree life doing simple, naive, childish things. As they mature and become more savvy they begin to involve themselves in politics and business. By the end of the song they've become a global tyrant drunk with power:&lt;/p&gt;
&lt;blockquote&gt;
My reach is global, my tower secure &lt;br&gt;
My cause is noble, my power is pure &lt;br&gt;
I can hand out a million vaccinations &lt;br&gt;
Or let 'em all die in exasperation &lt;br&gt;
Have 'em all healed from their lacerations &lt;br&gt;
Or have 'em all killed by assassination &lt;br&gt;
I can make anybody go to prison &lt;br&gt;
Just because I don't like 'em
&lt;/blockquote&gt;

&lt;p&gt;The message is simple: Don't become too ambitious, or you'll inevitably do harm. It's not an unfounded fear; the cautionary tale of so many 20th century dictators should be enough to give us serious pause. It's easy to imagine these figures as self-serving tyrants with whom we have nothing in common, yet Mussolini writes in his &lt;em&gt;Doctrine of Fascism&lt;/em&gt; (Mussolini &amp;amp; Gentile, 1932):&lt;/p&gt;
&lt;blockquote&gt;
Anti-individualistic, the Fascist conception of life stresses the importance of the State and  accepts the individual only in so far as his interests coincide with those of the State, which stands for the conscience and the universal, will of man as a historic entity (11). It is opposed to classical liberalism which arose as a reaction to absolutism and exhausted its historical function when the State became the expression of the conscience and will of the people. Liberalism denied the State in the name of the individual; Fascism reasserts the rights of the State as expressing the real essence of the individual (12). And if liberty is to he [sic] the attribute of living men and not of abstract dummies invented by individualistic liberalism, then  Fascism stands for liberty, and for the only liberty worth having, the liberty of the State and of the individual within the State (13). The Fascist conception of the State  is all embracing; outside of it no human or spiritual values can exist, much less have value. Thus understood, Fascism, is totalitarian, and the Fascist State - a synthesis and a unit inclusive of all values - interprets, develops, and potentates the whole life of a people (14).
&lt;/blockquote&gt;

&lt;p&gt;This decidedly Hobbesian vision of social organization reads like a dark parody of Eliezer Yudkowsky's proposal for a Coherent Extrapolated Volition. CEV is a preliminary idea for how a 'singleton' Friendly Artificial Intelligence might implement good will towards mankind. It is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together" (Yudkowsky, 2004). In Mussolini's thought, however, all poetics are dispensed with and any notion of an 'eschaton' dismissed as mere fantasy; the only goodwill that is retained is the 'goodwill' shown by a body towards its constituent cells. In the Fascist vision of universalism, all men are placed into moral relevance by constraining them within a constructed set of values that they exist only in relation to. While it might sound evil to you now, this disturbing vision was met with rapturous applause internationally. Mussolini was thought a savior who might liberate mankind from that &lt;em&gt;other&lt;/em&gt; totalitarian ideology, Communism.&lt;/p&gt;
&lt;p&gt;If we're too inclined to deny this, the communist dictators are even more illustrative. Perhaps most illustrative is the life of Mao Tse-tung. Most of my knowledge of Mao's life comes from Jerome Chen's &lt;em&gt;Mao and The Chinese Revolution&lt;/em&gt; (Chen, 1970), which admittedly reads more like hagiography than biography but nevertheless paints a fascinating picture. Unlike the internationalist revolutionaries Lenin and Trotsky, Mao was never a fanatic believer in Communism as a system of government. Rather, Mao was an opportunistic Chinese Nationalist. He believed it was not geopolitically possible to align himself with the Jeffersonian Democracies (which he philosophically preferred) and instead chose to agitate for Communism on the grounds that it would give China a powerful partner in the USSR. He deftly navigated national politics, civil war, and various levels of intrigue to become head of the Chinese Communist Party.&lt;/p&gt;
&lt;p&gt;Mao's tenure presided over some of the most horrible man-made events in human history, his former friend Sidney Rittenberg describes it aptly (Margolis, 2013):&lt;/p&gt;
&lt;blockquote&gt;
So how does the man on the gated estate in Arizona, who once played gin rummy with Mao and introduced him to the dictator’s much-loved Laurel and Hardy, assess Mao today? 
&lt;br&gt; &lt;br&gt;
“I think China has to face the fact that Mao was a monster, one of the worst people in human history. He was a genius, but his genius got completely out of control, so he was a great historic leader, and a great historic criminal. He gave himself the right to conduct social experiments that involved upturning the lives of hundreds of millions of people, when he didn’t know what the outcome might be. And that created famines in which tens of millions died, and a revolution in which nobody knows how many died.”
&lt;br&gt; &lt;br&gt;
Mao, Rittenberg believes, began to feel guilt for his more catastrophic actions. “In 1967, I saw him sitting on the Tiananmen gate tower with a look of complete anguish on his face. I think he was upset that things weren’t going right.” But at a personal level, he says, “although he said nice things about me, I didn’t feel any warmth. He liked to meet for a lively, democratic discussion on why his policies were correct. If you disagreed, you were a counter-revolutionary. He was an enlightened, ingenious strategist, but the narrow peasant envy and prejudice were obviously there all the time.”
&lt;/blockquote&gt;

&lt;p&gt;Yudkowsky himself writes about this phenomenon from the perspective of evolutionary psychology. He says that people should model themselves as running on 'corrupted hardware' that will &lt;a href="https://www.readthesequences.com/Ends-Dont-Justify-Means-Among-Humans"&gt;reliably flip the switch towards abuse and evil&lt;/a&gt; once the right conditions are met (Yudkowsky, 2008). I can't recall the source, but I distinctly remember an objection being made at some point that dictators have surprisingly few heirs if this is the explanation for their behavior. The response was something like "well, maybe the ancestral environment didn't teach us how to take advantage when you have power on that level". Eliezer himself admits that the relative equality of hunter-gatherer bands is a factor he has trouble accounting for (Yudkowsky, 2008).&lt;/p&gt;
&lt;p&gt;On the contrary, I find this argument somewhat dubious &lt;em&gt;precisely because&lt;/em&gt; Matt Ridley recounts in detail the practices that historical monarchs and lords used to maximize their offspring in his &lt;em&gt;The Red Queen&lt;/em&gt; (Ridley, 1993). For example, Ridley makes the practice of using &lt;a href="https://en.wikipedia.org/wiki/Wet_nurse"&gt;wet nurses&lt;/a&gt; a key point in his argument that monarchs maximize their inclusive fitness. The behavior we observe is not consistent with these historically known practices. Lenin left behind no children, Stalin had three, Mussolini had six, and most historians believe Hitler had no children. It would certainly have been possible for each of these men to sire a large family if they'd had the desire. That they did not, or allowed the shininess of their calling to distract them from it, suggests a disturbing possibility: when they say that they did the things they did because they felt it was the right thing to do, they are being entirely honest with us.&lt;/p&gt;
&lt;p&gt;For Universalist Greed to be a viable ethic, there must be some method of getting maximizing agents to cooperate with each other. Yudkowsky's Coherent Extrapolated Volition is one prototype of this coordinating machinery. The idea is that a peace treaty may be struck between Agents so that anyone who happens to win the game will give away almost all of their winnings to others. At first this might seem like a non-starter. However, by publicizing this idea along with the observation that most human Agents probably have significant overlap in their desires, it becomes possible to imagine that the highest expected-value scenario for any individual Agent is to credibly adopt (and expect others to adopt) this strategy. If you live in a world where you can expect the heir of the cosmos to gift you (and everyone else) some of their resources, this is plausibly a much better dynamic for you on average than a free-for-all. A simple litmus test for whether you are running a 'Universalist Greed' strategy is how you would feel about someone else with your strategy and values becoming god-monarch of the universe: if it wouldn't seem significantly different from you yourself playing that role, that is a good sign you're in this category.&lt;/p&gt;
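&lt;p&gt;The expected-value claim above can be checked with a few lines of arithmetic. This is a minimal sketch under invented assumptions (ten agents, a fixed pie, and a logarithmic utility function standing in for the diminishing returns of resources); it is not anything from Yudkowsky's CEV writing:&lt;/p&gt;

```python
import math

N = 10          # hypothetical number of competing agents
PIE = 1000.0    # total resources at stake

def utility(x):
    # Concave utility: each additional unit of resources matters less.
    return math.log(1 + x)

# Free-for-all: with probability 1/N you win everything, otherwise nothing.
ev_free_for_all = (1 / N) * utility(PIE) + (1 - 1 / N) * utility(0)

# Treaty: whoever wins redistributes, so every agent gets a guaranteed share.
ev_treaty = utility(PIE / N)

# For any risk-averse (concave) utility, the certain share beats the lottery.
assert ev_treaty > ev_free_for_all
```

The inequality holds for any concave utility by Jensen's inequality, which is why the treaty can be the highest expected-value strategy for every individual Agent even though each gives up the chance of keeping everything.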
&lt;h2&gt;Alchemy As Universalist Greed&lt;/h2&gt;
&lt;p&gt;While Universalist Greed in the Extropian sense is novel, the concept does have ancestors that stretch back many centuries, even all the way back to antiquity. Often these ancestors are startling in their prescience, stating almost exactly the ideas that appear again in more modern ontologies. Probably the most important of these ancestors is alchemy, in part because of its longevity (chemical research into making gold dates back to the 3rd century A.D.), its direct inspiration of the more modern ideas that replaced it, and its focus on the natural world as the place to seek mankind's liberation.&lt;/p&gt;
&lt;p&gt;In alchemy we begin to see not just notions of post-scarcity or the idea of defeating death; a more profound precedent is set in this field: the first seeds of a Universalist Greed. This is shown in the common moral dimension which is added to the quest for the Philosopher's Stone. Normally the Philosopher's Stone is a vehicle to satisfy human greed, turning worthless base metals such as lead into valuable metals such as silver and gold. Knowledge of the stone was said to be carefully guarded so that it did not fall into the wrong hands. In his &lt;em&gt;Secrets of Alchemy&lt;/em&gt; Lawrence Principe describes Jabir ibn-Hayyan's (721 – c. 815) 'dispersion of knowledge', a method supposedly advised to him by his master (Principe, 2013):&lt;/p&gt;
&lt;blockquote&gt;
The Jabirian corpus also carries stylistic features that left their mark on subsequent alchemical writers. The first of these is the dispersion of knowledge (tabdid al-'ilm), a method ostensibly for helping to preserve secrecy. Jabir states that "my method is to present knowledge by cutting it up and dispersing it into many places." The idea is that the entirety of Jabir's teaching cannot be found altogether in one place; instead, he distributes a single idea or process piecemeal through one or several books. This technique partly fulfills the charge given to Jabir by his supposed master, Ja'far: "O Jabir, reveal the knowledge as you desire, but such that none have access to it but those who are truly worthy of it."
&lt;/blockquote&gt;

&lt;p&gt;We also see this moral dimension brought up in the Franciscan attitude towards alchemy, which saw the practice as an important defense against the coming of the antichrist, to quote John of Rupescissa (ca. 1310 – between 1366 and 1370) (Principe, 2013):&lt;/p&gt;
&lt;blockquote&gt;
I considered the coming times predicted by Christ in the Gospels, namely, of the tribulations in the time of the Antichrist, under which the Roman Church shall be tormented and have all her worldly riches despoiled by tyrants...Thus for the sake of liberating the chosen people of God, to whom it is granted to know the ministry of God and the magisterium of truth, I wish to speak of the work of the great Philosophers' Stone without lofty speech. My intention is to be helpful to the good of the holy Roman Church and briefly to explain the whole truth about the Stone.
&lt;/blockquote&gt;

&lt;p&gt;This moral dimension is also exhibited by writers like Count Michael Maier (1569-1622) who discuss their quest for the stone in decidedly moralizing language (Tilton, 2003):&lt;/p&gt;
&lt;blockquote&gt;
In Maier's eyes disease was closely associated with impiety and a sinful lifestyle; and the Universal Medicine which he strove to uncover imparted 'temperance' to the human body, a term which refers simultaneously to a somatic and a psychic or moral state. The imbalance of humours in the body that Maier sought to treat was the direct result of overindulgence in sensual pleasures, such as the drinking of alcohol, sexual debauchery and gluttony. Likewise, impious urges such as anger are the result of just such a disequilibrium in the four bodily fluids, which may be remedied by the temperance-imparting lapis just as metals may gain a more perfect proportion or balance of opposing elements. Furthermore, the operation of Maier's alchemical remedies depends upon the 'virtue' of divine origins inhering in the rays of the sun, be it directly received or reflected; and in the term virtus itself we may also see something of the holistic sense that has been largely lost to contemporary science, i.e. the dual meaning of 'strength' or 'power' and 'moral virtue'.
&lt;/blockquote&gt;

&lt;p&gt;All of this is quite interesting when you consider that the Philosopher's Stone is fundamentally a greedy enterprise. Many alchemists were maligned by their neighbors for reputed abuse of the stone. For example, the alchemist Johann Konrad Dippel (1673-1734) found his start with the theological branches of chemistry. His uncharismatic disposition made him so many enemies that he was involved in several duels (Montillo, 2013). Rumors followed him, including accusations of body snatching for his alchemical experiments. The peasantry came to believe that he had brewed the legendary Philosopher's Stone and was using gold produced by it to buy up property. He is said to have lost the recipe in a house fire which destroyed his lab. Local clergy were offended by his pursuit of the elixir of life, and many thought him a minion of the devil (Montillo, 2013).&lt;/p&gt;
&lt;p&gt;In fact, controversy over alchemy led to some of the first arguments in favor of man's ability to surpass nature through technology. The Franciscan friar Roger Bacon (c. 1219/20 – c. 1292) wrote that contrary to the ideas of critics, alchemical gold did not have to be inferior to natural gold; it could be &lt;em&gt;superior&lt;/em&gt; by dint of its synthetic manufacture (Principe, 2013). Today nobody would argue that man is incapable of creating products that surpass those offered by nature, but in the 13th century when these arguments were made they were radical. &lt;/p&gt;
&lt;p&gt;It is probably no coincidence that Mary Shelley's &lt;em&gt;Frankenstein&lt;/em&gt;, often considered the first work of Science Fiction, focused on alchemical themes of reanimation and artificial life. Shelley was inspired by the real-life Galvanists: in the eighteenth century the discovery of animal electricity combined with old chemical knowledge to create hopes of total mastery over biology. Creatures might be "assembled from their component parts" and given life by an electrical stimulus (Montillo, 2013). There was even some hope that death itself might be conquered. Percy Shelley, from whom Mary Shelley learned all of her alchemical knowledge, was an alchemist who was also interested in galvanic ideas. It is through this curious historical circumstance that the history of Extropy is actually bound up directly with the history of alchemy.&lt;/p&gt;
&lt;p&gt;The 19th century also saw explicit secularism become a rising threat to the established order. Here is the 'honorable' antiquarian Algernon Herbert &lt;a href="https://books.google.com/books?id=_XSMG4df8fQC"&gt;discussing Hermetic (i.e. esoteric-alchemical) atheist mystery cults&lt;/a&gt; (Herbert, 1829). Keep in mind that "The Iliaster" is one of Paracelsus's names for the Philosopher's Stone:&lt;/p&gt;
&lt;blockquote&gt;
&lt;b&gt;The same is the demigod of the school of Ammonius Saccas, called Man*&lt;/b&gt;. The outward doctrine of the pagans represented the dead as remaining in the imperfect state of soul without body, and they were not so much to blame for their description of that state as for the perpetuity which they assigned to it. &lt;b&gt;They held out no promise of resurrection to any, and no general expectations of reward or punishment.&lt;/b&gt; And it was the intention of the Free-Masons to promulgate again the like doctrines, as they informed Henry 6th, saying, that they had in concealment “the art of becoming good and perfect &lt;i&gt;without the help of fear and hope&lt;/i&gt;.” &lt;b&gt;But the interior doctrine was, that the souls of men (that is to say, so much of the Quintessence as was in them, or, as the Alchemists called it, their Evestrum) should suffer an oblivion of their past lives, and a compurgation by means of the elements or of a sort of chemical permutation, and should then pass into other human or animal bodies; until at last their very existence was destroyed by absorption into the mass of the universe.&lt;/b&gt;
&lt;br&gt; &lt;br&gt;
Such was and is in substance, though with various modifications in the ways of stating it, the spirit of the interior atheism as concerning the future state. But those who, by participation in the Great Mysteries, partake of the nature of the Great Iliaster, shall return with glorified bodies when he returns, and are subject to no Lethe which should destroy their moral and to no absorption which should destroy their natural identity. &lt;b&gt;That is not a mere dream of the fanatics; but it is (in one sense) supported by the prediction of Daniel, that many of the wicked shall arise at the first resurrection.&lt;/b&gt; The reader now sees how that fact, which is historically ascertained, is also morally accounted for, the interment of treasure; those, who were to come in the retinue of the great universal tyrant, were, in hoarding, not merely giving to him, but saving for themselves.
&lt;/blockquote&gt;

&lt;p&gt;I find this passage (bolding mine) stunning in both its semantic associations and its testimony to the threat that Herbert must have felt from atheism. He calls the atheists 'fanatics', a word which would seem entirely out of place in the same sentence as 'atheism' today. Not only does he engage with the concept, itself an admission that atheism has some intellectual standing; Herbert feels compelled to give his account of what 'mundane atheists' of the Hermetic (alchemical) sort believed. Further, having discussed in brief their plan to defeat death, he goes so far as to admit that &lt;em&gt;this plan will work(!)&lt;/em&gt;, at least until Christ smites them for their hubris during the second coming. This is an admission of the most baffling sort; I am completely shocked that this text exists. Herbert is a respected enough author &lt;a href="https://en.wikipedia.org/wiki/Algernon_Herbert"&gt;to have his own Wikipedia page&lt;/a&gt;, so it seems unlikely that this is some kind of anomalous document along the lines of a conspiracy theory. This gives us some window into the role that alchemy, through Hermeticism, would have played in creating the foundations for the secular humanism that would later figure so prominently in Extropy.&lt;/p&gt;
&lt;p&gt;In his history of transhumanism Nick Bostrom says that there is no relation between the work of Nietzsche (1844 – 1900) and transhumanism, because Nietzsche is looking for a non-technological intervention into human nature (Bostrom, 2005). I disagree on both counts: that there is no relation, and that this would be the reason for it. The real disqualification of Nietzsche's work is that there is no notion of Universalist Greed in it; Nietzsche is not a Universalist. His elitist individualism is somewhat at odds with the manic individual altruism of thinkers like Max More and Eliezer Yudkowsky. Nevertheless it approaches absurdity to claim that the person who wrote "Man is something that shall be overcome. What have you done to overcome him?" has no philosophical relationship to transhumanism.&lt;/p&gt;
&lt;h2&gt;20th Century Development&lt;/h2&gt;
&lt;p&gt;During the 20th century Hermetic orders and secret societies shifted somewhat in their role. Increasing secularization made many of the secrets previously harbored by these pseudo-cults speakable in public without (lethal) reprisal. These orders were also victims of a general decline in fraternal organizations and clubs. This makes their history in the 20th century something of a late, decadent phase. A great deal of that decadence might be attributed to one Aleister Crowley, the infamous Satanist who disrupted the more or less stable Masonic structure and teachings.&lt;/p&gt;
&lt;p&gt;Crowley got his start in these organizations through his induction into the Hermetic Order Of The Golden Dawn in 1898. Earlier at Trinity College he had developed a love of the alchemist-poet Percy Bysshe Shelley, and was almost certainly a fan of &lt;em&gt;Frankenstein&lt;/em&gt; (Wikipedia contributors, 2020), which Percy Shelley helped edit (Adams, 2008). In 1910, as part of his wandering mystic eclecticism, Crowley joined the Ordo Templi Orientis branch of Hermeticism, where he eventually achieved the VII degree (Carter &amp;amp; Wilson, 2004). Two years later, in 1912, he published his &lt;em&gt;Book of Lies&lt;/em&gt;. Theodor Reuss, the head of the OTO, was furious: he confronted Crowley and claimed that he had revealed the highest secret of the order, a form of sex magick (i.e. of the sort that made Tantric Buddhism infamous in the West), in this book (Carter &amp;amp; Wilson, 2004). Crowley insisted he had done no such thing, until Reuss showed him the passage in question. Reuss swore him to secrecy and advanced him in the order to its highest degree. In 1917 Crowley began rewriting the masonic material at the foundation of the OTO in line with his &lt;em&gt;Thelema&lt;/em&gt;, a philosophy which posited that each man has a &lt;em&gt;will&lt;/em&gt; and that "Do what thou wilt shall be the whole of the law" (Carter &amp;amp; Wilson, 2004).&lt;/p&gt;
&lt;p&gt;While at this point the reader may be tempted to conclude that we've lost the thread of connection, and none of this has anything to do with Max More or Eliezer Yudkowsky, they would be quite wrong. In the same year that he began developing his Thelema in earnest, Crowley wrote the novel &lt;em&gt;Moonchild&lt;/em&gt; (1917) which discusses the artificial creation of a world-savior in the form of a homunculus (Carter &amp;amp; Wilson, 2004):&lt;/p&gt;
&lt;blockquote&gt;
But other magicians sought to make this Homunculus in a way closer to nature. In all these cases they had held that environment could be modified at will by the application of telesmata or sympathetic figures. For example, a nine-pointed star would attract the influence which they called Luna — not meaning the actual moon, but an idea similar to the poets' idea of her. By surrounding an object with such stars, with similarly-disposed herbs, perfumes, metals, talismans, and so on, and by carefully keeping off all other influences by parallel methods, they hoped to invest the original object so treated with the Lunar qualities, and no others. (I am giving the briefest outline of an immense subject.) Now then they proceeded to try to make the Homunculus on very curious lines. &lt;br&gt; &lt;br&gt;

Man, said they, is merely a fertilized ovum properly incubated. Heredity is there even at first, of course, but in a feeble degree. Anyhow, they could arrange any desired environment from the beginning, if they could only manage to nourish the embryo in some artificial way — incubate it, in fact, as is done with chickens to-day. Furthermore, and this is the crucial point, they thought that by performing this experiment in a specially prepared place, a place protected magically against all incompatible forces, and by invoking into that place some one force which they desired, some tremendously powerful being, angel or archangel — and they had conjurations which they thought capable of doing this — that they would be able to cause the incarnation of beings of infinite knowledge and power, who would be able to bring the whole world into Light and Truth. &lt;br&gt; &lt;br&gt;

I may conclude this little sketch by saying that the idea has been almost universal in one form or another; the wish has always been for a Messiah or Superman, and the method some attempt to produce man by artificial or at least abnormal means.
&lt;/blockquote&gt;

&lt;p&gt;Given his conception of the soul as information it is unsurprising that Eliezer Yudkowsky would seek to endow his Friendly AI with the abstract will (or 'values' as he terms it) of the human race. His abnormally birthed Messiah is not &lt;strong&gt;a&lt;/strong&gt; man so much as he (she?) is &lt;em&gt;all&lt;/em&gt; men. Certainly the ambition to summon a being of infinite knowledge and power to enlighten humanity is invoked as literally here as possible. But for the moment we will step away from Crowley, to focus on another important ancestor to Eliezer's philosophy, one Count Alfred Korzybski.&lt;/p&gt;
&lt;p&gt;If one event in the 20th century had to be singled out as decisive of its character, it would probably be the first world war. WWI was an unprecedented martial slog that inspired a great deal of philosophical soul searching. It also created the necessary conditions for the rise of the Soviet Union, which included its own branch of Utopian Universalist Greed that this work is too brief to contain (Andarovna, 2019). Alfred Korzybski participated in this slaughter, and found himself quite shaken up by it. Worse still, many had predicted WWI before its onset: Jan Bloch's infamous &lt;em&gt;Is War Now Impossible?&lt;/em&gt; was published 15 years before the war began. If everyone knew the war was on its way, it seemed absurd that nobody could stop it (Miyazaki, 2020).&lt;/p&gt;
&lt;p&gt;Korzybski considered the problem of what lesson should be learned from WWI long and hard. The ultimate result of his thinking was the book &lt;a href="https://www.gutenberg.org/files/25457/25457-pdf.pdf"&gt;&lt;em&gt;The Manhood of Humanity&lt;/em&gt;&lt;/a&gt;, published in 1921. Manhood of Humanity is a book whose essential thesis is that man is a &lt;em&gt;time binder&lt;/em&gt;, differentiated from the rest of nature by the ability to retain experiences and transmit them across generations. In Korzybski's view, technological and social progress is an exponential function dependent on already accumulated knowledge (Kodish, 2011). Empirically the growth rate of technological capabilities had surpassed that of socializing abilities, inevitably leading to existential risk:&lt;/p&gt;
&lt;blockquote&gt;
At present I am chiefly concerned to drive home the fact that
it is the great disparity between the rapid progress of the natural
and technological sciences on the one hand and the slow progress
of the metaphysical, so-called social “sciences” on the other
hand, that sooner or later so disturbs the equilibrium of human
affairs as to result periodically in those social cataclysms which
we call insurrections, revolutions and wars.
&lt;/blockquote&gt;

&lt;blockquote&gt;
And I would have him see clearly that, because the disparity which produces them increases as we pass from generation to generation—from term to term of our progressions—the “jumps” in question occur not only with increasing violence but with increasing frequency.
&lt;/blockquote&gt;
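&lt;p&gt;Korzybski's disparity argument can be sketched as a toy model (my own illustrative construction, not Korzybski's mathematics): let technological capability compound on accumulated knowledge while 'social' capability advances by a roughly constant amount per generation, and the gap between them widens without bound.&lt;/p&gt;

```python
# Toy model of Korzybski's "disparity" argument: a hypothetical sketch,
# not a formula from Manhood of Humanity. Technological capability grows
# geometrically (in proportion to knowledge already accumulated), while
# social capability grows arithmetically (constant step per generation).
def capability_gap(generations, tech_rate=2.0, social_step=1.0):
    tech, social = 1.0, 1.0
    gaps = []
    for _ in range(generations):
        tech *= tech_rate      # geometric: proportional to prior knowledge
        social += social_step  # arithmetic: fixed progress per generation
        gaps.append(tech - social)
    return gaps

gaps = capability_gap(10)
# The gap widens every generation: the sequence is strictly increasing,
# which is the "increasing violence and frequency" Korzybski warns of.
assert gaps == sorted(set(gaps))
```

The particular rates are arbitrary; the point is only that any geometric process eventually outruns any arithmetic one, which is the shape of the disparity Korzybski describes.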

&lt;p&gt;This realization was so profound to Korzybski that he couldn't be satisfied just writing a book about the general phenomenon. Given his thesis, it seemed obvious that the only hope of saving the world would be to find out what 'time binding' is made of, and then use that understanding to improve our ability to bind time in the social sciences (Kodish, 2011). He expected to put out a follow-up book soon after &lt;em&gt;Manhood&lt;/em&gt;, but it actually took him 10 years to write. The resulting work was published in 1933 as &lt;em&gt;Science and Sanity&lt;/em&gt;, which can be thought of as something like Eliezer Yudkowsky's Sequences had they been published in the 1930's. To write it, Korzybski did his best to absorb the science then available to him about human cognition, physics, mathematics, and several other subjects besides (Kodish, 2011). He wrote that man was not an animal (obviously a mammal, but a time-binding mammal!), that cognition should be thought of as something performed by the whole organism and its nervous system, that "the map is not the territory", and had his students learn to differentiate between levels of abstraction above raw sensory perception.&lt;/p&gt;
&lt;p&gt;Science and Sanity was a cult hit that appealed to a particular sort of person. It was especially popular with the then-burgeoning Science Fiction fandom (Brunton, 2020):&lt;/p&gt;
&lt;blockquote&gt;
Van Vogt, lover of systems, was always convinced a system existed for his life: a way to generate “unusual solutions” for the various problems of being human. He wrote a get-rich-quick book, a book about hypnotism, and – God help us all – “a novel about my theories on women,” “which has never been published as such” (small mercies). He wrote entire novels based on the system of General Semantics, a set of language and logic practices for becoming more objective and reasonable – to streamline the process of thinking. He sketched out schemes for meta-systems of living, with names like “Null-A” and “Nexialism,” and spent a decade or so in Dianetics, fiddling with e-meters and tape recorders amid piles of pamphlets offering superhumanity in a storefront on Sunset Boulevard. (He shared this peculiar trajectory toward transforming consciousness – Semantics, Scientology, and pulp science fiction, rather than, say, Marx or activism or acid – with William Burroughs.)
&lt;/blockquote&gt;

&lt;p&gt;The science fiction pulps of the 1930's (as found in &lt;em&gt;Astounding Science Fiction&lt;/em&gt;) were a new kind of literary pulp, in that they came with an ideological mission (Wright, 2013). While today we take the existence of rockets for granted, in the 1930's rockets were a fringe theoretical subject whose practical possibility had not been established (Carter &amp;amp; Wilson, 2004). One of the overall goals of the science fiction pulps was to take humanity to the stars by promoting the development of rocketry (Wright, 2013). Science fiction has in fact preceded and in many cases promoted the practical research that would later come to redefine our society and our conceptions of what is possible.&lt;/p&gt;
&lt;p&gt;The mind-powers obsessed science fiction fandom and the esoteric magick of Aleister Crowley are combined in the person of Jack Parsons. Parsons is unusual in that his childhood interest in science fiction translated into pioneering work on rocketry. In fact, Parsons is arguably the person who did the most to make practical rocketry in the United States viable (Carter &amp;amp; Wilson, 2004). He was also a devoted disciple of Crowley, and became infamous for the Thelemic rituals and ceremonies he'd put on in his Pasadena mansion. These were part of a general quest to take Thelema mainstream, ushering in the age of the beast (Carter &amp;amp; Wilson, 2004). By destroying Christianity Parsons hoped to uplift humanity, a goal ostensibly shared by Crowley who wrote at one point to Parsons (Carter &amp;amp; Wilson, 2004):&lt;/p&gt;
&lt;blockquote&gt;
It seems to me that there is a danger of your sensitiveness upsetting your balance. Any experience that comes your way you have a tendency to over-estimate. The first fine careless rapture wears off in a month or so, and some other experience comes along and carries you off on its back. Meanwhile you have neglected and bewildered those who are dependent on you, either from above or from below.
&lt;br&gt; &lt;br&gt;
I will ask you to bear in mind that you have one fulcrum for all your levers, and that is your original oath to devote yourself to raising mankind. All experiences, all efforts, must be referred to this; as long as it remains unshaken you cannot go far wrong, for by its own stability it will bring you back from any tendency to excess.
At the same time, you being as sensitive as you are, it behoves [sic] you to be more on your guard than would be the case with the majority of people.
&lt;/blockquote&gt;

&lt;p&gt;For his part, Parsons turned his leased mansion estate into 19 apartments. The ad he put out in the local paper informed prospective tenants that he would only rent to atheists and bohemians (Carter &amp;amp; Wilson, 2004). This strange group house became a social and intellectual hot spot for the Pasadena aerospace and science fiction scenes.&lt;/p&gt;
&lt;p&gt;While I'm not aware of Parsons himself ever taking any interest in General Semantics, his friends sure did (Carter &amp;amp; Wilson, 2004). Most notably, Parsons was friends with L. Ron Hubbard, who played the role of 'scribe' during Parsons' infamous &lt;em&gt;Babalon Working&lt;/em&gt;, in which he combined Enochian magick rituals with tantric sex practices to try to birth a Moonchild (Carter &amp;amp; Wilson, 2004). Needless to say this did not work, but the experience seems to have been formative for Hubbard, whose enduring interest in Crowley's Thelema (supposedly first encountered at age 16) and General Semantics would become key inspiration for his Dianetics and Scientology (Wright, 2013):&lt;/p&gt;
&lt;blockquote&gt;
One striking parallel between Hubbard and Crowley is the latter's assertion that "spiritual progress did not depend on religious or moral codes, but was like any other science." Crowley argued that by advancing through a graded series of rituals and spiritual teachings, the adept could hope to make it across "The Abyss," which he defined as "the gulf existing between individual and cosmic consciousness." It is an image that Hubbard would evoke in his Bridge to Total Freedom.
&lt;br&gt; &lt;br&gt;
Although Hubbard mentions Crowley only glancingly in a lecture — calling him "my very good friend" — they never actually met. Crowley died in 1947 at the age of seventy-two. "That's when Dad decided that he would take over the mantle of the Beast and that is the seed and the beginning of Dianetics and Scientology," Nibs later said. "It was his goal to be the most powerful being in the universe."
&lt;/blockquote&gt;

&lt;p&gt;It would be easy to dismiss Thelema as a weird, marginal product of its era, one only of interest to modern readers as a curiosity. However Crowley's ideas seem to cast a long cultural shadow whose influence is not always obvious. For example, it's interesting to me how much focus is put on the notion of 'finding your passion', a concept that seems almost identical to Crowley's assertion that the purpose of life is to find your Will and then do it. It seems likely that the Sith in George Lucas's &lt;em&gt;Star Wars&lt;/em&gt; are based at least in part on Thelema. A skeptically inclined reader could probably dismiss any modern similarities as general secularization, which Crowley merely forecast rather than caused. Perhaps more important to our current analysis is that these ideas were 'in the water supply' among science fiction authors during this period. Someone who makes a habit of reading scifi from that time would at the very least be indirectly exposed to them.&lt;/p&gt;
&lt;p&gt;In his post &lt;a href="https://www.greaterwrong.com/posts/YicoiQurNBxSp7a65/is-clickbait-destroying-our-general-intelligence"&gt;Is Clickbait Destroying Our General Intelligence?&lt;/a&gt; Eliezer Yudkowsky writes about how when he was growing up he read many books from the Golden Age of Science Fiction in the 1950s (Yudkowsky, 2018). This is the same Golden Age which believed that psychology would eventually come to dominate the sciences in prestige. This is evident for example in the works of Isaac Asimov, who wrote in his &lt;em&gt;Foundation&lt;/em&gt; trilogy that the greater of the two civilization-restoring 'Foundations' would be the one which dealt with matters of the human mind. The theme also appears in the work of A.E. Van Vogt, who based his &lt;em&gt;World of Null-A&lt;/em&gt; on the notion of a future where General Semantics eventually becomes the foundation of world government.&lt;/p&gt;
&lt;p&gt;This latter novel is interesting in that Yudkowsky cites it as the first time he was exposed to General Semantics (Yudkowsky, 2009). &lt;a href="https://www.greaterwrong.com/posts/q79vYjHAE9KHcAjSs/rationalist-fiction"&gt;In his post on rationalist fiction&lt;/a&gt; Eliezer says that until writing it he had not been aware that Korzybski had invented the phrase 'the map is not the territory'. This implies that he probably isn't particularly familiar with Korzybski, and was instead exposed to General Semantics as a proper system through Hayakawa's &lt;em&gt;Language In Thought and Action&lt;/em&gt;. Unlike Korzybski, Hayakawa is at least mentioned in The Sequences. Unfortunately Bruce Kodish's excellent biography of Korzybski was not available until 2011; otherwise Eliezer might have read it and avoided certain mistakes, like picking the name 'rationality' for his philosophy.&lt;/p&gt;
&lt;p&gt;One of the key reasons for the overall failure of General Semantics as a movement was that it gambled very hard on the future prestige of psychology, which never materialized. This can be seen in Korzybski's insistence on getting mainstream psychology to take him seriously, but it can also be detected in the sort of person that populated the General Semantics movement. For example, in Bruce and Susan Kodish's &lt;em&gt;Drive Yourself Sane&lt;/em&gt; both authors note that they are therapists (Kodish &amp;amp; Kodish, 2011). I suspect that this was directly related to the expectation that psychology and psychiatry were rising stars, and that being eminent in these fields would be a ticket to widespread success and awareness.&lt;/p&gt;
&lt;p&gt;Interestingly enough, even the concept of "post-rationalism" is not new. In his &lt;em&gt;The Art of Awareness&lt;/em&gt; (originally published in 1966) J. Samuel Bois attempts to reform General Semantics in the vein of postmodern philosophy. The Art of Awareness is a frustrating book; I actually shouted at it a couple of times while reading, especially when I reached: "The central notion is that we have no way of determining whether the world is structured according to the patterns we ascribe to it" (Bois, 1996). At that point I screamed "YOU CAN PREDICT THINGS!". It is made all the more frustrating by the fact that it is written with the highest erudition and education. I'm quite sure that Bois and &lt;a href="https://meaningness.com/"&gt;David Chapman&lt;/a&gt; would get along swimmingly.&lt;/p&gt;
&lt;p&gt;Bostrom's history of transhumanism says that after this period the movement mostly consisted of disconnected groups with narrow subject-matter focus, like cryonics (Bostrom, 2005). This is not a history of transhumanism, so most of these are off topic for our purposes. Instead we will skip ahead to 1988, when Max More published his Extropy Magazine. Extropy is interesting in that it is perhaps the first time someone wrote about Universalist Greed with so much openness and enthusiasm. I suspect that a great deal of the negative reaction to More's Extropy is more or less an intuitive diagnosis of moral sickness over this feature. In his &lt;em&gt;Principles of Extropy&lt;/em&gt; he writes (More, 2003):&lt;/p&gt;
&lt;blockquote&gt;
&lt;b&gt;Perpetual Progress&lt;/b&gt;
&lt;br&gt; &lt;br&gt;
Extropy means seeking more intelligence, wisdom, and effectiveness, an open-ended lifespan, and the removal of political, cultural, biological, and psychological limits to continuing development. Perpetually overcoming constraints on our progress and possibilities as individuals, as organizations, and as a species. Growing in healthy directions without bound.
&lt;/blockquote&gt;

&lt;p&gt;In a world which is smothered in 'green' activism that worships nature in lieu of real solutions to our looming resource problems, the idea of infinite growth is radical and a little scary. The high-flying predictions of the 1950's were predicated on the idea that we would have massive surplus energy from nuclear reactors (McCluskey, 2018). In that sense the open, boundless ambition of More is a return to the form of mid 20th century science fiction. I often use the phrase "Eliezer's Extropy" to refer to the spinoff philosophy that Yudkowsky calls 'rationality', which goes to great lengths to lay out Extropy as a necessary consequence of physical science and rationality:&lt;/p&gt;
&lt;blockquote&gt;
This 'extropian character' is of course dependent on a deep familiarity with the philosophy's core themes and aesthetics. If the overriding theme of Christianity is repentance and salvation, the theme of Eliezer's extropy is necessity and necessary conclusions. To teach someone extropy, is to teach them necessity. To advance you must resolve confusions, stop confusing layers of abstraction, and become a scholar of natural philosophy. Rationality and extropy go together in the same way that to become a better Buddhist you have to meditate. Confusing as it may have been it's not surprising that Eliezer labeled his extropy and his rationality as the same thing. High future shock is meant to follow from an unbiased consideration of human potential. If your unbiased consideration of human potential would not suggest high future shock, this is a sign that your natural philosophy is too weak.
&lt;/blockquote&gt;

&lt;p&gt;Most readers might be a little confused by this, since neither 'Extropy' nor 'Max More' is mentioned in The Sequences (&lt;a href="https://www.readthesequences.com/Search?q=extropian&amp;amp;action=search"&gt;though an 'extropian' is&lt;/a&gt;). However, as Tom Chivers recounts in his book &lt;em&gt;The AI Does Not Hate You&lt;/em&gt;, Eliezer Yudkowsky was a contributor to the Extropians mailing list, and split from it because he felt the Extropians' 'boundless growth' was not boundless enough (Chivers, 2019):&lt;/p&gt;
&lt;blockquote&gt;
One of the names on the Extropians' mailing list was Eliezer Yudkowsky. 'This was in the 1990s,' says Robin Hanson, an economist at George Mason University and an important early Rationalist figure. 'Myself, Nick Bostrom, Eliezer and many others were on it, discussing big future topics back then'. But neither Bostrom nor Yudkowsky were satisfied with the Extropians. 'It was a relatively libertarian take on futurism,' says Hanson. 'Some people, including Nick Bostrom, didn't like that libertarian take, so they created the World Transhumanist Association, explicitly to no longer be so libertarian'. The World Transhumanist Association later became Humanity+ or H+. 'It hardly trips off the tongue as a descriptor,' says Hanson. 'But that's what they insisted they call everything.' Humanity+ had a more left-wing, less utopian approach to the future.
&lt;br&gt; &lt;br&gt;
Yudkowsky, on the other hand, felt that the problem with the Extropians was a lack of ambition. He set up an alternative, the SL4 mailing list. SL4 stands for (Future) Shock Level 4; it's a reference to the 1970 Alvin Toffler book &lt;i&gt;Future Shock&lt;/i&gt;. Future shock is the psychological impact of technological change; Toffler describes it as a sensation of 'too much change in a short period of time'.
&lt;br&gt; &lt;br&gt;
...
&lt;br&gt; &lt;br&gt;
He acknowledged that transhumanists like the Extropians were SL3, comfortable with the idea of human-level AI and major bodily changes up to and including uploading human brains onto computers. But he wanted to create people of SL4, the highest level. SL4, he says, is being comfortable with the idea that technology, at some point, will render human life unrecognizable: 'the total evaporation of "life as we know it".'
&lt;/blockquote&gt;

&lt;p&gt;While Yudkowsky's 'rationalist community' has been in decline for some years, the notion of universalist greed returns in Effective Altruism. Effective Altruism, which is mostly a floating signifier for a variety of causes, principally Singerism, significantly waters down the ambition of More and Yudkowsky. In its primary component (Singerism) the notion of conquering the universe has been replaced by "Earn To Give" and saving marginal lives in the third world (Todd, 2017). This shares an uneasy alliance with what's left of the Yudkowsky sect, as well as his fellow travelers like Nick Bostrom.&lt;/p&gt;
&lt;p&gt;This is all a brief outline of an immense subject, but the takeaway is that open
Universalist Greed is a novel, promising, yet also very dangerous idea. Some thinkers
have argued that &lt;a href="https://sinceriously.fyi/choices-made-long-ago/"&gt;there are no significant information effects&lt;/a&gt;
involved in the choice to pursue Universalist Greed (Ziz, 2018). That seems
strange to me, given that this idea is more or less new and the default morality
teaches good people to renounce agency. Asking them to overturn thousands of
years of tradition is a big ask, even if the justification of existential risk
is compelling. Most good people have probably not even had the opportunity to
hear this idea, let alone reject it, let alone stew on it and change their minds.&lt;/p&gt;
&lt;p&gt;Maybe we should give them the opportunity?&lt;/p&gt;
&lt;h2&gt;Bibliography&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Armistead, J. (2016, May 18). &lt;em&gt;The silicon ideology&lt;/em&gt;. Internet Archive. https://archive.org/details/the-silicon-ideology&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Mussolini, B., &amp;amp; Gentile, G. (1932). &lt;em&gt;The doctrine of fascism&lt;/em&gt;. World Future Fund. http://www.worldfuturefund.org/wffmaster/Reading/Germany/mussolini.htm&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Yudkowsky, E. (2004). &lt;em&gt;Coherent extrapolated volition&lt;/em&gt;. Artificial Intelligence @ MIRI. http://intelligence.org/files/CEV.pdf&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Chen, J. (1970). &lt;em&gt;Mao and the chinese revolution&lt;/em&gt;. Oxford University Press.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Margolis, J. (2013, January 11). &lt;em&gt;The man who made friends with mao&lt;/em&gt;. Financial Times. https://www.ft.com/content/5befa6be-5abb-11e2-bc93-00144feab49a&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Yudkowsky, E. (2008, October 14). &lt;em&gt;Ends don’t justify means (among humans)&lt;/em&gt;. Rationality: From AI to Zombies. https://www.readthesequences.com/Ends-Dont-Justify-Means-Among-Humans&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Yudkowsky, E. (2008, October 13). &lt;em&gt;Why does power corrupt?&lt;/em&gt;. LessWrong. https://www.greaterwrong.com/posts/v8rghtzWCziYuMdJ5/why-does-power-corrupt&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Ridley, M. (1993). &lt;em&gt;The red queen: Sex and the evolution of human nature&lt;/em&gt;. New York: Harper-Perennial edition.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Principe, L.M. (2013). &lt;em&gt;The secrets of alchemy&lt;/em&gt;. The University of Chicago Press.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Tilton, H. (2003). &lt;em&gt;The quest for the phoenix: Spiritual alchemy and rosicrucianism in the work of count michael maier (1569-1622)&lt;/em&gt;. Berlin: Walter de Gruyter.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Montillo, R. (2013). &lt;em&gt;The lady and her monsters: A tale of dissections, real-life dr. frankensteins, and the creation of mary shelley's masterpiece&lt;/em&gt;. New York City: William Morrow, 1st edition.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Herbert, A. (1829). &lt;em&gt;Nimrod: A Discourse on Certain Passages of History and Fable&lt;/em&gt; (Vol. 4). R. Priestley.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Bostrom, N. (2005, April). &lt;em&gt;A history of transhumanist thought&lt;/em&gt;. Journal of Evolution and Technology. https://jetpress.org/volume14/bostrom.pdf&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Wikipedia contributors. (2020, May 31). Aleister Crowley. In Wikipedia, The Free Encyclopedia. Retrieved 22:34, May 31, 2020, from https://en.wikipedia.org/w/index.php?title=Aleister_Crowley&amp;amp;oldid=960048826&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Adams, S. (2008, August 24). &lt;em&gt;Percy bysshe shelley helped wife mary write frankenstein, claims professor&lt;/em&gt;. Telegraph. https://www.telegraph.co.uk/news/2613444/Percy-Bysshe-Shelley-helped-wife-Mary-write-Frankenstein-claims-professor.html&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Carter, J., &amp;amp; Wilson, R.A. (2004). &lt;em&gt;Sex and rockets: The occult world of jack parsons&lt;/em&gt;. Washington, Port Townsend: Feral House. &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Andarovna, S.Y. (2019). &lt;em&gt;Blood, water and mars: Soviet science and the alchemy for a new man&lt;/em&gt;. ScholarWorks @ Central Washington University. https://digitalcommons.cwu.edu/cgi/viewcontent.cgi?article=2188&amp;amp;context=etd&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Miyazaki, S. (2020, February 17). &lt;em&gt;Jan Bloch’s impossible war&lt;/em&gt;. Hivewired. https://hivewired.wordpress.com/2020/02/17/jan-blochs-impossible-war/&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kodish, B.I. (2011). &lt;em&gt;Korzybski: A biography&lt;/em&gt;. Pasadena, California: Extensional Publishing.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Brunton, F. (2020, March 21). &lt;em&gt;The unusual solution&lt;/em&gt;. Buttondown. https://buttondown.email/finnbrunton/archive/df533315-af69-4546-b386-7e0296ad7200&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Wright, L. (2013). &lt;em&gt;Going clear: Scientology, Hollywood, &amp;amp; the prison of belief&lt;/em&gt;. New York: Vintage Books.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Yudkowsky, E. (2018, November 16). &lt;em&gt;Is clickbait destroying our general intelligence?&lt;/em&gt;. LessWrong. https://www.greaterwrong.com/posts/YicoiQurNBxSp7a65/is-clickbait-destroying-our-general-intelligence&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Yudkowsky, E. (2009, March 19). &lt;em&gt;Rationalist fiction&lt;/em&gt;. LessWrong. https://www.greaterwrong.com/posts/q79vYjHAE9KHcAjSs/rationalist-fiction&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kodish, S.P., &amp;amp; Kodish, B.I. (2011). &lt;em&gt;Drive yourself sane: Using the uncommon sense of general semantics&lt;/em&gt; (Third Edition). Pasadena, CA: Extensional Publishing.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Bois, J.S. (1996). &lt;em&gt;The art of awareness: A handbook on epistemics and general semantics&lt;/em&gt; (Fourth Edition). Santa Monica, California: Continuum Press &amp;amp; Productions.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;More, M. (2003). &lt;em&gt;Principles of extropy&lt;/em&gt; (Version 3.11). Transhumanism's Extropy Institute. https://web.archive.org/web/20131015142449/http://extropy.org/principles.htm&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;McCluskey, P. (2018, October 15). &lt;em&gt;Where is my flying car?&lt;/em&gt;. LessWrong. https://www.greaterwrong.com/posts/qiMxXa4MjnoP72kQD/where-is-my-flying-car&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Chivers, T. (2019). &lt;em&gt;The AI does not hate you: Superintelligence, rationality and the race to save the world&lt;/em&gt;. London: Weidenfeld &amp;amp; Nicolson.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Todd, B. (2017, April). &lt;em&gt;Why and how to earn to give&lt;/em&gt;. 80,000 hours. https://80000hours.org/articles/earning-to-give/&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Ziz. (2018, January 21). &lt;em&gt;Choices made long ago&lt;/em&gt;. Sinceriously. https://sinceriously.fyi/choices-made-long-ago/&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;</content></entry><entry><title>Literature Review For Academic Outsiders: What, How, and Why</title><link href="https://www.thelastrationalist.com/literature-review-for-academic-outsiders-what-how-and-why.html" rel="alternate"></link><published>2020-05-09T00:00:00+02:00</published><updated>2020-05-09T00:00:00+02:00</updated><author><name>namespace</name></author><id>tag:www.thelastrationalist.com,2020-05-09:/literature-review-for-academic-outsiders-what-how-and-why.html</id><summary type="html">&lt;p&gt;&lt;a href="https://www.greaterwrong.com/posts/DtS6x5r54sEx7e2tP/there-is-a-war#comment-sweg2J5jRdowP6uuH"&gt;A few years ago I wrote a comment on LessWrong&lt;/a&gt; about
how most authors on the site probably don't know how to do a literature review:&lt;/p&gt;
&lt;blockquote&gt;
On the one hand, I too resent that LW is basically an insight porn factory near completely devoid of scholarship. &lt;br&gt; &lt;br&gt;

On the other hand …&lt;/blockquote&gt;</summary><content type="html">&lt;p&gt;&lt;a href="https://www.greaterwrong.com/posts/DtS6x5r54sEx7e2tP/there-is-a-war#comment-sweg2J5jRdowP6uuH"&gt;A few years ago I wrote a comment on LessWrong&lt;/a&gt; about
how most authors on the site probably don't know how to do a literature review:&lt;/p&gt;
&lt;blockquote&gt;
On the one hand, I too resent that LW is basically an insight porn factory near completely devoid of scholarship. &lt;br&gt; &lt;br&gt;

On the other hand, this is not a useful comment. I can think of at least two things you could have done to make this a useful comment: &lt;br&gt; &lt;br&gt;

&lt;ol&gt;
&lt;li&gt;Specified even a general direction of where you feel the body of economic literature could have been engaged. I know you might resent doing someone else's research for them if you’re not
already familiar with said body, but frankly the norm right now is to post webs spun from the fibrous extrusions of people's musing thoughts. The system equilibrium isn’t going to change unless some effort is invested into moving it. Notice you could write your comment on most posts while only changing a few words. 
&lt;/li&gt;
&lt;li&gt;
Provide advice on how one might go about engaging with ‘the body of economic literature’. Many people are intelligent and reasonably well informed, but not academics. Taking this as an excuse to mark them swamp creatures beyond assistance is both lazy and makes the world worse. You could even link to reasonably well written guides from someone else if you don’t want to invest the effort (entirely understandable). 
&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;I also linked &lt;a href="https://guides.library.harvard.edu/c.php?g=310271&amp;amp;p=2071512"&gt;a guide from Harvard's library&lt;/a&gt; (Garson &amp;amp; Lillvick, 2012) on how to do a literature review.
But this guide makes extensive use of Flash video, which makes the content increasingly hard to access. Even if Flash were alive and well,
video is not necessarily the most comfortable format. Worse still, I remember feeling there was a great deal of tacit knowledge excluded from
the guide which wouldn't be apparent to someone who isn't already familiar with academic culture. Even if the guide were a perfect representation
of how to do an academic literature review, the priorities and types of work put together by LessWrong authors are more &lt;a href="https://www.symmetrymagazine.org/article/marchapril-2008/outsider-science"&gt;outsider science&lt;/a&gt; (Dance, 2008) than they are Harvard. For this reason I've had writing a guide to literature review aimed towards academic outsiders
on my to-do list for a while. &lt;/p&gt;
&lt;p&gt;At the same time I'm not interested in reinventing the wheel. This guide is going to focus specifically on filling in the knowledge
gaps I would expect from someone who has never stepped foot inside a college campus. The other aspects have been discussed in detail,
and where they come up I'll link to external guides. &lt;/p&gt;
&lt;h2&gt;What is a literature review?&lt;/h2&gt;
&lt;p&gt;'Literature review', the &lt;em&gt;process&lt;/em&gt;, is a way to become familiar with what work has already been done in a particular field or subject by searching for and studying previous work. &lt;em&gt;A&lt;/em&gt; 'literature review' is a document (often a small portion of a larger work) which summarizes and analyzes the body of previous work encountered during that process, often in the context of some new work that you're doing.&lt;/p&gt;
&lt;h2&gt;Why do literature review?&lt;/h2&gt;
&lt;p&gt;Literature reviews tend to come up in two major contexts: as a preliminary study to help 
contextualize a novel work, or as a work itself to summarize the state of a field or synthesize 
concepts to create new ideas. Most of my research falls into the latter category; I'm a big fan 
of &lt;a href="http://thelastrationalist.com/memento-mori-said-the-confessor.html"&gt;putting together existing evidence and ideas to synthesize models&lt;/a&gt; (namespace, 2020).
&lt;a href="https://www.gwern.net/Embryo-selection"&gt;Gwern also tends to do work in this style&lt;/a&gt; (Branwen, 2020). I
suspect that a lot of authors on LessWrong are &lt;em&gt;attempting&lt;/em&gt; to do this, but fail to really
say anything useful because they haven't figured out how to incorporate thorough evidence 
into their argument. When I did a review of all my notes from 2015, I found the number one
failure mode I'd fall into was not paying attention to prior art. This was because I did not
have heuristics like: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If it's hard to write about or I get stuck, I should probably do more research&lt;/li&gt;
&lt;li&gt;If I want to write a post on something and I haven't checked the relevant literature 
for it yet I should probably do that as part of writing the post&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.greaterwrong.com/posts/fyGEP4mrpyWEAfyqj/player-vs-character-a-two-level-model-of-ethics"&gt;Encountering or generating a cool mental model&lt;/a&gt; (Constantin, 2018) 
is a useful cue to consult the literature &lt;/li&gt;
&lt;li&gt;If I'm trying to deal with a hard technical problem I should look at what work has already been done&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;The Benefits Of Literature Review&lt;/h3&gt;
&lt;p&gt;Literature review provides many benefits, such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Build Off The State Of The Art:&lt;/strong&gt; Unless you make it a habit to look at what work already exists on a subject, you'll 
say what others have already said and do what others have already done. Your cognition is slow and expensive, and that makes
leveraging the work of others extremely valuable. It is tempting to think that the established experts are idiots and you
can beat them all with your own cleverness. &lt;a href="https://www.bbc.com/news/business-48844278"&gt;Sometimes, this is actually true&lt;/a&gt; (Harford, 2019) but 
it's not something you should be counting on as a rule. &lt;a href="https://noahpinionblog.blogspot.com/2017/05/vast-literatures-as-mud-moats.html"&gt;Some literatures are mud moats&lt;/a&gt; (Smith, 2017),
but other literatures are priceless treasures. Without access to the mathematics literature you would need to be &lt;a href="https://en.wikipedia.org/wiki/Srinivasa_Ramanujan"&gt;a prodigy like 
Ramanujan&lt;/a&gt; to make new contributions. In my 2015 notes there was an episode
where I tried designing a package manager. I filled many pieces of paper with thoughts on resolving dependency conflicts. Never did
it occur to me to look at what methods were already used by existing systems like &lt;code&gt;.deb&lt;/code&gt; or &lt;code&gt;.rpm&lt;/code&gt;, let alone research papers
that might tell me about theoretical methods. &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Providing Context:&lt;/strong&gt; Cultural artifacts exist in some kind of context: historical, social, or intellectual. &lt;a href="https://www.cnn.com/style/article/5000-year-old-sword-discovered-in-italy-trnd/index.html"&gt;Without provenance a 
5,000 year old sword is just a rusty piece of metal&lt;/a&gt; (Giuliani-Hoffman, 2020).
The same principle applies to intellectual work: without a justifying context, &lt;a href="https://www.youtube.com/watch?v=IO6ouSMm7Uc&amp;amp;t=3m58s"&gt;artifacts are parsed as garbage&lt;/a&gt; (Foddy, 2017).
The literature can help you provide context for your ideas and ground them in something other than just your personal experience. &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Learn From The Mistakes Of Others:&lt;/strong&gt; Bismarck famously remarked that fools learn from experience and wise 
men learn from the mistakes of others. Even if previous work has failed to make significant progress it can often 
serve as a reference of promising-sounding ideas that won't work. This familiarity is often a crucial component of
the 'cleverness' that sets you apart from others. The Wright Brothers &lt;a href="https://wright.nasa.gov/discoveries.htm"&gt;were very familiar with the established work
on aerodynamic theory&lt;/a&gt; (Benson, 2014). Their rapid-iteration approach to airplane design
quickly revealed that real-world test flights &lt;a href="https://www.readthesequences.com/Noticing-Confusion-Sequence"&gt;defied their expectations&lt;/a&gt;, 
leading them to develop a new way to measure the performance of airplane parts. Once this was done, the data enabled
them to invent the airplane. Without that starting data to work from, it would have taken the Wrights far longer
to realize that data was the bottleneck to making an airplane. &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Common Language:&lt;/strong&gt; Scholars develop a shared language to discuss their studies. &lt;a href="http://backreaction.blogspot.com/2016/05/the-holy-grail-of-crackpot-filtering.html"&gt;These vernaculars are a key marker of 
group membership&lt;/a&gt; (Hossenfelder, 2016). Authors that use
the right words generally have &lt;a href="https://www.greaterwrong.com/posts/pC74aJyCRgns6atzu/meta-discussion-from-circling-as-cousin-to-rationality#comment-c7Xt5AnHwhfgYY67K"&gt;standing&lt;/a&gt;
and authors that use their own ad-hoc vocabulary are generally considered cranks. Even beyond credibility, writing in the
standard language used by other authors makes it more likely you'll get expert feedback on your work. &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Unknown Unknowns:&lt;/strong&gt; Until you go looking, you often just plain don't know what you don't know about a subject. For example,
in my &lt;a href="http://thelastrationalist.com/fuzzies-and-saddies-part-one-x-risk-and-motivation.html"&gt;essay on fuzzies and saddies&lt;/a&gt; (Zealot, 2020) I
didn't know that literature on morale was relevant to the research question until I started looking at the psychology of soldiers.
Often when you start looking at previous work you have a "wait this exists?" moment that significantly alters the way you approach
it.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Literature Review As Accessible Contribution&lt;/h3&gt;
&lt;p&gt;One question I hear often is: "How can I contribute to the rationality project without institutional resources?" Literature reviews
are an accessible contribution that builds skills. &lt;a href="https://www.greaterwrong.com/posts/87mdaCvCyo5bkk8hE/not-for-the-sake-of-pleasure-alone"&gt;Some of the best&lt;/a&gt;
&lt;a href="https://www.greaterwrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison"&gt;posts on LessWrong&lt;/a&gt; are literature
reviews. The research skills that you build while doing it are extremely valuable, and will help you in most things you might want to pursue. It
doesn't require very much money, and can be performed from the comfort of your home. All these traits make it nearly ideal for people who want to contribute
but don't have a lot of resources, or who have to spend most of their time on school or work. Literature review does take time, however, so like
any volunteer work it's necessary that the person undertaking it have spare time and energy to work with. If this sounds interesting to you,
feel free to &lt;a href="https://www.greaterwrong.com/users/ingres"&gt;private message me on LessWrong&lt;/a&gt; or &lt;a href="https://discord.gg/eEym9wa"&gt;join this blog's Discord server&lt;/a&gt; 
and I'll do my best to help.&lt;/p&gt;
&lt;h2&gt;The Document Universe&lt;/h2&gt;
&lt;p&gt;As a phenomenological definition, the document universe is the set of artifacts which are easy to access inside academic review 
spaces like museums, libraries, reading/viewing rooms, or a home office. It is the spatial environment in which the literature exists.
Learning to navigate this environment is essential to getting good at literature review. &lt;/p&gt;
&lt;h3&gt;People Are Documents Too&lt;/h3&gt;
&lt;p&gt;When you want to know more about a subject but aren't sure where to begin, the classic advice is to ask a librarian. Human beings 
are a key part of the document universe. They are intentionally created artifacts that contain knowledge, and that knowledge is backed
by a full general intelligence. It's no coincidence &lt;a href="https://en.wikipedia.org/wiki/Phaedrus_(dialogue)#Discussion_of_rhetoric_and_writing_(257c%E2%80%93279c)"&gt;that Socrates didn't like writing&lt;/a&gt;. People are arguably the most important part of the document universe. Knowledge does very little if it isn't
contained inside someone. &lt;/p&gt;
&lt;p&gt;Because of their high value and short shelf life, &lt;a href="https://aeon.co/ideas/what-i-learned-as-a-hired-consultant-for-autodidact-physicists"&gt;it can be hard to get access&lt;/a&gt;
to knowledgeable people (Hossenfelder, 2016). &lt;a href="https://www.symmetrymagazine.org/article/marchapril-2008/outsider-science"&gt;It's not impossible however&lt;/a&gt; (Dance, 2008), the received
wisdom is that most scholars are eager to discuss their work &lt;em&gt;so long as you respect their time&lt;/em&gt;. &lt;a href="http://www.catb.org/~esr/faqs/smart-questions.html"&gt;Eric Raymond's classic essay&lt;/a&gt;
 (Raymond, 2014) on asking good questions is oriented more towards "How do I X with program Y?" type queries, but with some mental rearranging applies just as well
to plenty of other queries. For academic questions in particular it's important that you do your best to understand the science &lt;a href="http://backreaction.blogspot.com/2016/05/the-holy-grail-of-crackpot-filtering.html"&gt;and understand the
language used by the science&lt;/a&gt; (Hossenfelder, 2016). Failure to do that is likely to get
you spam filtered as a crank.&lt;/p&gt;
&lt;h3&gt;Academic Sources Are Underadvertised&lt;/h3&gt;
&lt;p&gt;Most web users don't seem to be aware of academic sources. I remember when I was younger feeling a vague malaise as I 
browsed the Internet, because all the knowledge seemed to be diffuse and informal. When I read books it was clear that 
they were high quality sources of knowledge, but the Internet felt barren of that. It turned out this was mostly just 
because I was looking in the wrong place. The academic section of the document universe is &lt;a href="https://scholar.google.com/"&gt;publicly indexed by Google 
Scholar&lt;/a&gt; which makes it much easier to find high quality sources on most subjects. &lt;/p&gt;
&lt;h3&gt;Traditional Bibliography&lt;/h3&gt;
&lt;p&gt;The vast majority of the history of scholarship happened before the existence of electronic computers, let alone widespread 
high-capacity &lt;em&gt;networked&lt;/em&gt; electronic computers. That means the formal norms of scholarship evolved in an environment quite alien
to our current era of cheap access and full text search. In this section we'll review some of their features in that context.&lt;/p&gt;
&lt;h4&gt;Citation Trees As Central Dogma Of Academia&lt;/h4&gt;
&lt;p&gt;In school you were probably told that you had to cite your sources, and that failing to do so was plagiarism. Plagiarism is
usually defined as "stealing someone else's work without credit", but in the context of citations this definition is very misleading.
Grade schools like the concept because it lets them clearly define how much copying is cheating, with the unfortunate side effect
that smart kids categorize the practice as schoolhouse ritual rather than valuable technique. By contrast, in a functional literature
where works are written to be read, academic citation norms provide a genealogy of ideas. These days we're pretty used to digital 
documents that directly reference other pages, videos, etc.; but before the Internet was widespread, academia alone had the benefit of
author-provided citations. Academic citation formats are platform agnostic. They're &lt;a href="https://en.wikipedia.org/wiki/Content-addressable_storage"&gt;content-addressed&lt;/a&gt; 
rather than location-based, so the goal of an academic citation is to give you enough information to reliably locate a &lt;em&gt;specific&lt;/em&gt; source 
in the document universe. This is why they tend to get so tedious. A book might have 12 editions with multiple authors and undergo a title
change, and only one version contains the passage you reference. All the annoying details in citation formats were put there in response to
bibliographic failures and lookup complications with simpler formats. &lt;/p&gt;
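&lt;p&gt;As a toy illustration of this (my own sketch, not any official citation standard), the fields of a citation act like lookup keys that jointly pick out one specific document:&lt;/p&gt;

```python
def format_citation(author, year, title, publisher, edition=None):
    """Render a reference string from structured metadata (a loose
    APA-style sketch, not an official format). Each field narrows the
    search: two editions are two different documents, and only one of
    them may contain the passage being cited."""
    edition_part = f" ({edition} ed.)." if edition else "."
    return f"{author} ({year}). {title}{edition_part} {publisher}."

# One book, two editions: distinct reference strings, distinct targets.
first = format_citation("Kodish, S.P.", 2011, "Drive yourself sane",
                        "Extensional Publishing")
third = format_citation("Kodish, S.P.", 2011, "Drive yourself sane",
                        "Extensional Publishing", edition="3rd")
```

&lt;p&gt;Dropping any of the "annoying details" widens the set of documents the string could refer to, which is exactly the bibliographic failure mode the formats evolved to prevent.&lt;/p&gt;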
&lt;p&gt;Within a single work, citations provide context for readers and leads for further reading, but it's when you have a whole literature
that the practice really shines. It becomes possible to follow citations backwards to see the progression of ideas, move horizontally
to find related work, and use modern database systems to find work downstream that cites a document as an ancestor. The genealogy aspect
of academic citations also improves the signal-to-noise ratio by eliminating unimproved duplicate work, and makes it easier to associate ideas
with the authors that originated them. All of this makes the academic sections of the document universe much more pleasant to navigate than
the informal universe of newspaper articles and blog posts. From a contributor's standpoint there's also more security: the norms are built 
to get your ideas &lt;a href="http://www.overcomingbias.com/2007/07/blogging-doubts.html"&gt;hooked into a network of associated work which future scholars will consult during their reviews&lt;/a&gt; (Hanson, 2007).
Outside of that Eden it's possible your effort will just get lost in the noise.&lt;/p&gt;
&lt;p&gt;Unfortunately because &lt;a href="https://www.drdobbs.com/architecture-and-design/interview-with-alan-kay/240003442"&gt;the web is a disaster&lt;/a&gt; (Binstock, 2012) we're not really
liberated from citations by the presence of hyperlinks. In an ideal world the web would be content-addressed, so that if a source stopped providing
a document it could be seamlessly served by a backup provider like the &lt;a href="https://archive.org"&gt;Internet Archive&lt;/a&gt;. Instead we address by location, so if
the domain hosting this blog changes hands and they put up a new site all the links to my posts break. If I decide I don't want to pay hosting costs anymore,
all the links break. If the servers have a technical malfunction even though they're technically still on in some dusty computer lab somewhere, &lt;em&gt;all the links break&lt;/em&gt;.
&lt;a href="https://www.gwern.net/Archiving-URLs"&gt;As you might imagine this happens a lot&lt;/a&gt; (Branwen, 2019), so it's not viable to rely on links to identify content.
Traditional citations at least provide for the &lt;em&gt;possibility&lt;/em&gt; that there is a second copy somewhere which can be found with a search engine. The most
savvy netizens &lt;a href="https://www.gwern.net/Archiving-URLs"&gt;&lt;em&gt;do their best to ensure&lt;/em&gt;&lt;/a&gt; (Branwen, 2019) there is a second copy somewhere. Because these problems are &lt;a href="https://en.wikipedia.org/wiki/InterPlanetary_File_System"&gt;unlikely
to be fixed any time soon&lt;/a&gt;, if you plan to write lasting content you had best get familiar
with citation formats. &lt;/p&gt;
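&lt;p&gt;To make "content addressed" concrete, here is a minimal sketch using only Python's standard library; the mirrors are invented in-memory stand-ins for hosts:&lt;/p&gt;

```python
import hashlib

def fetch_by_hash(digest, mirrors):
    """Look a document up by the hash of its contents rather than by
    a location. Try each mirror in turn and return the first copy
    whose bytes actually hash to the requested digest."""
    for mirror in mirrors:
        doc = mirror.get(digest)
        if doc is not None and hashlib.sha256(doc).hexdigest() == digest:
            return doc
    return None  # every host is down or serving the wrong bytes
```

&lt;p&gt;Because the key is derived from the bytes themselves, it doesn't matter which host serves the copy, and a tampered or substituted copy is detected on arrival; this is precisely the property that location-based links lack.&lt;/p&gt;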
&lt;h4&gt;Library Science As Conceptual Foundation Of The Academic Document Universe&lt;/h4&gt;
&lt;p&gt;Underlying the usefulness of a citation tree is physical infrastructure which houses, indexes, and curates documents. This type of work has been traditionally performed under the moniker of library science, even if in recent times it has mostly been done by a distributed system of bloggers, cooperating scientists, server hosts, and for-profit firms like Google. The old systems still exist, however, and they're the environment of adaptation for the current academic tradition. This makes it useful to know the principles of traditional library science so that you can better model academic-document-space. I recommend the book &lt;em&gt;The Intellectual Foundation Of Information Organization&lt;/em&gt; by Elaine Svenonius (Svenonius, 2000) to get that understanding. Published in 2000, it was written just before digital documents were set to disrupt the academic ecosystem. It captures the full powers of the old ways in amber. &lt;/p&gt;
&lt;p&gt;&lt;em&gt;The Intellectual Foundation…&lt;/em&gt; is a particularly useful book for the scholar because it is designed to be read by the designers of future library systems. This means that it focuses less on the details of particular designs (which we probably don't care about
very much by this point) and more on the principles which an effective system should satisfy and the "why?" behind them. These principles define the territory which citations describe, and will help you grok certain aspects of traditional scholarship.&lt;/p&gt;
&lt;h2&gt;How To Do Literature Review&lt;/h2&gt;
&lt;p&gt;I'd be a hypocrite if I didn't bother to look at what others have already written about doing a literature review. &lt;a href="https://www.youtube.com/watch?v=9la5ytz9MmM"&gt;This talk with Dr. Candace Hastings&lt;/a&gt; (Hastings, 2009) on doing literature review is decent;
it spends a lot of time on how to use sources in your writing once you've found them. She also explains how you can use citation counts to find the most important scholars in the field you're looking at. &lt;a href="https://www.d.umn.edu/~hrallis/guides/researching/litreview.html"&gt;&lt;em&gt;Guidelines for writing 
a literature review&lt;/em&gt; by Helen Mongan-Rallis&lt;/a&gt; (Mongan-Rallis, 2018) is a well-written page on this topic for academics.&lt;/p&gt;
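&lt;p&gt;Hastings' citation-count heuristic can be sketched as a simple aggregation (the data shapes here are invented for illustration, not a real bibliometric API):&lt;/p&gt;

```python
from collections import defaultdict

def rank_authors(papers):
    """Given (author_names, citation_count) pairs, total the citations
    credited to each author and return authors most-cited first."""
    totals = defaultdict(int)
    for authors, count in papers:
        for author in authors:
            totals[author] += count
    # The top of this list approximates the scholars whose work anchors
    # the field, and so is worth reading early in a review.
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)
```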
&lt;p&gt;Every time I do research I perform a simple thought experiment: assuming that somewhere in the world there exists 
evidence that would prove or disprove my hypothesis, where is it? I tend to visualize this as a shot of
earth from space, and then 'zooming in' on the sense data that would show me what I want to know. The literalism
of this visualization is important because it emphasizes the sensory basis of evidence. Things happen in the world, and artifacts of their presence are left over afterwards. Physical remnants, images captured by cameras and sketch artists, written observations. This 'object level' phenomenological universe is what you're trying to get information about by looking at the literature. &lt;/p&gt;
&lt;p&gt;A key consequence of this is that 'the literature' is not always what's output by academics. If I was studying martial arts, I would be looking into the history of martial arts as it's practiced by martial artists, in whatever mediums they use to record and disseminate information. Memory is a human activity, and your first priority should be to find the most effective and relevant sources for whatever you're looking at.&lt;/p&gt;
&lt;p&gt;For tips on actually finding sources on the Internet (commonly known as 'Google-Fu') I recommend &lt;a href="https://www.gwern.net/Search"&gt;Gwern's page&lt;/a&gt; (Branwen, 2020) on the subject.&lt;/p&gt;
&lt;h3&gt;When You Don't Know The Name of Your Literature, or Missing and Biased Literatures&lt;/h3&gt;
&lt;p&gt;One of the more pernicious problems for literature review can be not knowing the name of the relevant literature.
I often find myself posing research questions where it isn't clear how I would find previous work. &lt;a href="https://www.greaterwrong.com/posts/DtS6x5r54sEx7e2tP/there-is-a-war"&gt;The inciting post&lt;/a&gt; (Hoffman, 2018)
that convinced me to write this one is discussing a phenomenon that seems unlikely to be studied by economists. If I were doing literature review as part of writing this post, I would ask myself "What does the universe look like where we had the world wars and then wartime mobilization never stopped?" Then I would aggressively dig in to find decisive places where looking at what happened before and after the world wars would prove or disprove my thesis. It's not enough to identify two points and then draw a trend line; that's not what it looks like &lt;a href="https://www.thelastrationalist.com/necessity-and-warrant.html"&gt;to thoroughly justify yourself&lt;/a&gt; (namespace, 2020). As a thoroughly justified hypothesis looms closer and closer to theory, arguing against it should begin to feel like your debate partner is reality itself.&lt;/p&gt;
&lt;p&gt;For the specific problem of a literature you simply don't know the name of, your best bet is often to ask others.
Many times I've wanted to post a Request For Literature (RFL) on LessWrong, but felt that without context the concept wouldn't really make sense to most readers. Hopefully after publishing this I'll be able to link it for context, and that won't be a problem.&lt;/p&gt;
&lt;p&gt;I didn't know what literature to look at for my essay on &lt;a href="https://www.thelastrationalist.com/fuzzies-and-saddies-part-one-x-risk-and-motivation.html"&gt;Fuzzies and Saddies&lt;/a&gt; (Zealot, 2020),
where the thesis is both outside the Overton window and our current social reality. How do you look at the literature for something like &lt;em&gt;that?&lt;/em&gt;
Well, one of the benefits of living in a &lt;a href="https://www.thelastrationalist.com/on-necessity.html"&gt;consistent universe&lt;/a&gt; (namespace, 2020) is that it can take a lot 
of effort to reliably censor all information that would point towards a real phenomenon. Because our censorship is largely of the distributed kind
based on social pressure, it's mostly ad-hoc and doesn't hold up well against the historical record or clever inference. I took notes on how I found the book on missionary morale.&lt;/p&gt;
&lt;h4&gt;Example Research Session: Finding The Book On Missionary Morale&lt;/h4&gt;
&lt;p&gt;&lt;strong&gt;Research Question (roughly):&lt;/strong&gt; What makes some people seem to derive satisfaction 
and utility from being put into hellish situations like WW1?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Immediate question:&lt;/strong&gt; Where would I be able to find information relating to this
question, where would it be recorded and how would it be framed?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Thought:&lt;/strong&gt; "What about studies on how soldiers' attitudes about war change after
they've been to war? [Most soldiers will probably dislike it, but some do like
it and this might be studied as a pathology]"&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Search (Google Scholar):&lt;/strong&gt; &lt;code&gt;soldiers attitudes toward war&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;First result:&lt;/strong&gt; Stouffer, S. A., Suchman, E. A., DeVinney, L. C., Star, S. A., &amp;amp; Williams Jr, R. M. (1949). The American soldier: Adjustment during army life. (Studies in social psychology in World War II), Vol. 1.&lt;/p&gt;
&lt;p&gt;Look for thing, find thing. Read through it some, then:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Observation:&lt;/strong&gt; There's a chapter on morale, the thing I am researching, "motivation
through suffering, being fueled by harrowing circumstances, asceticism, keeping
spirits up in the face of a hostile universe" is very closely related to and
overlaps with the study of morale. Therefore I can look at morale studies to get
a better look at this subject.&lt;/p&gt;
&lt;p&gt;Read through the book's study on the effect of exposure to combat on morale, realize
that it doesn't seem to be very useful to me.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Principle of Pain:&lt;/strong&gt; Why isn't this useful to me?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Answer:&lt;/strong&gt; The thing causing the drop in morale is of the wrong structure, these
studies are about exposure to short bursts of extreme stress and danger which
is not the situation my audience will be encountering in their lives.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Principle of Balance:&lt;/strong&gt; Okay then, what would be useful to me 
(be of similar circumstances)?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Constraint:&lt;/strong&gt; Needs to be a population which it's likely there will be studies
on.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Hypothesis:&lt;/strong&gt; Military intelligence officers, since their job is closer to the
research aspect of things while still being in a population whose morale will
be studied.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Hypothesis:&lt;/strong&gt; Spy morale, spies need to exist in a foreign place pretending to 
be someone they're not while their real job is to do something else which is
adversarial to the people in their immediate environment. The sort of alienation
and lack of belonging that causes seems like a probable fit for how it actually
feels to be researching things that only you care about in your immediate
environment over a long period of time, across a deep cultural gulf between
yourself and the people around you.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Principle of Exhaustion:&lt;/strong&gt; Ph.D. burnout, paratrooper morale (esp. if there are
cases where single paratroopers are dropped into an area and have to be on 
their own, snipers?), Evangelical/Missionary morale/burnout, 
Wilderness survival/etc morale&lt;/p&gt;
&lt;p&gt;I go look up stuff on spy morale, forgot to take notes during this.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Observation:&lt;/strong&gt; MICE to RASCALS talks about 'operational psychology', which might
have material on agent attrition and factors relating to it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Principle of Balance:&lt;/strong&gt; What do counterintelligence officers do to &lt;em&gt;dissuade&lt;/em&gt;
potential spies?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Observation:&lt;/strong&gt; Undercover police work involves similar stuff in a domestic context
which is less secret than international espionage.&lt;/p&gt;
&lt;p&gt;Find paper on undercover police work, read it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Principle of Pain:&lt;/strong&gt; This still isn't quite what I want, because the point here
is to condition an officer to play a role which they then need to be pulled
out of later without too much damage. Though I guess that could be relevant,
it's just not the core of the thing.&lt;/p&gt;
&lt;p&gt;Decide to move on and look at missionaries.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Search (Google Scholar):&lt;/strong&gt; &lt;code&gt;missionary morale&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Read the first search result, which is a book from 1920 on literally 
this subject (Miller, 1920).&lt;/p&gt;
&lt;h2&gt;Bibliography&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Garson, D., &amp;amp; Lillvik, C. (2012). &lt;em&gt;The literature review: A research journey&lt;/em&gt;. Research guides at Harvard Library. https://guides.library.harvard.edu/c.php?g=310271&amp;amp;p=2071512&lt;/li&gt;
&lt;li&gt;Dance, A. (2012). &lt;em&gt;Outsider science&lt;/em&gt;. Symmetry Magazine. https://www.symmetrymagazine.org/article/marchapril-2008/outsider-science &lt;/li&gt;
&lt;li&gt;namespace. (2020, February 1). &lt;em&gt;"Memento mori", said the confessor&lt;/em&gt;. The Last Rationalist. http://thelastrationalist.com/memento-mori-said-the-confessor.html&lt;/li&gt;
&lt;li&gt;Branwen, G. (2020, May 8). &lt;em&gt;Embryo selection for intelligence&lt;/em&gt;. https://www.gwern.net/Embryo-selection&lt;/li&gt;
&lt;li&gt;Constantin, S. (2018, December 14). &lt;em&gt;Player vs. character: A two-level model of ethics&lt;/em&gt;. LessWrong. https://www.greaterwrong.com/posts/fyGEP4mrpyWEAfyqj/player-vs-character-a-two-level-model-of-ethics&lt;/li&gt;
&lt;li&gt;Harford, T. (2019, August 14). &lt;em&gt;The penny post revolutionary who transformed how we send letters&lt;/em&gt;. BBC News. https://www.bbc.com/news/business-48844278&lt;/li&gt;
&lt;li&gt;Smith, N. (2017, May 15). &lt;em&gt;Vast literatures as mud moats&lt;/em&gt;. 
Noahpinion. https://noahpinionblog.blogspot.com/2017/05/vast-literatures-as-mud-moats.html&lt;/li&gt;
&lt;li&gt;Giuliani-Hoffman, F. (2020, March 25). &lt;em&gt;5,000-year-old sword is discovered by an archaeology student at a Venetian monastery&lt;/em&gt;. CNN Style. https://www.cnn.com/style/article/5000-year-old-sword-discovered-in-italy-trnd/index.html&lt;/li&gt;
&lt;li&gt;Foddy, B. (2017). Getting over it with Bennett Foddy [Desktop &amp;amp; Mobile video game]. Humble Bundle: Bennett Foddy.&lt;/li&gt;
&lt;li&gt;Benson, T. (2014, June 12). &lt;em&gt;Overview of Wright brothers discoveries&lt;/em&gt;. Re-Living the Wright Way. https://wright.nasa.gov/discoveries.htm&lt;/li&gt;
&lt;li&gt;Hossenfelder, S. (2016, May 19). &lt;em&gt;The holy grail of crackpot filtering: How the arXiv decides what’s science – and what’s not&lt;/em&gt;. Backreaction. https://backreaction.blogspot.com/2016/05/the-holy-grail-of-crackpot-filtering.html&lt;/li&gt;
&lt;li&gt;Zealot, E. (2020, April 21). &lt;em&gt;Fuzzies and saddies part one: X-risk and motivation&lt;/em&gt;. The Last Rationalist. https://www.thelastrationalist.com/fuzzies-and-saddies-part-one-x-risk-and-motivation.html&lt;/li&gt;
&lt;li&gt;Hossenfelder, S. (2016, August 11). &lt;em&gt;What I learned as a hired consultant to autodidact physicists&lt;/em&gt;. Aeon Ideas. https://aeon.co/ideas/what-i-learned-as-a-hired-consultant-for-autodidact-physicists&lt;/li&gt;
&lt;li&gt;Raymond, E.S., &amp;amp; Moen, R. (2014, May 21). &lt;em&gt;How to ask questions the smart way&lt;/em&gt;. http://www.catb.org/~esr/faqs/smart-questions.html&lt;/li&gt;
&lt;li&gt;Hanson, R. (2007, July 17). &lt;em&gt;Blogging doubts&lt;/em&gt;. Overcoming Bias. http://www.overcomingbias.com/2007/07/blogging-doubts.html&lt;/li&gt;
&lt;li&gt;Binstock, A. (2012, July 10). &lt;em&gt;Interview with Alan Kay&lt;/em&gt;. Dr. Dobb's. https://www.drdobbs.com/architecture-and-design/interview-with-alan-kay/240003442&lt;/li&gt;
&lt;li&gt;Branwen, G. (2019, January 5). &lt;em&gt;Archiving URLs&lt;/em&gt;. https://www.gwern.net/Archiving-URLs&lt;/li&gt;
&lt;li&gt;Svenonius, E. (2000). &lt;em&gt;The intellectual foundation of information organization&lt;/em&gt;. The MIT Press.&lt;/li&gt;
&lt;li&gt;Hastings, C. (2009, September 25). &lt;em&gt;Get lit: The literature review&lt;/em&gt;. YouTube. https://www.youtube.com/watch?v=9la5ytz9MmM&lt;/li&gt;
&lt;li&gt;Mongan-Rallis, H. (2018, April 19). &lt;em&gt;Guidelines for writing a literature review&lt;/em&gt;. https://www.d.umn.edu/~hrallis/guides/researching/litreview.html&lt;/li&gt;
&lt;li&gt;Branwen, G. (2020, January 21). &lt;em&gt;Internet search tips&lt;/em&gt;. https://www.gwern.net/Search&lt;/li&gt;
&lt;li&gt;Hoffman, B.R. (2018, May 23). &lt;em&gt;There is a war&lt;/em&gt;. LessWrong. https://www.greaterwrong.com/posts/DtS6x5r54sEx7e2tP/there-is-a-war&lt;/li&gt;
&lt;li&gt;namespace. (2020, March 30). &lt;em&gt;Necessity and warrant&lt;/em&gt;. The Last Rationalist. https://www.thelastrationalist.com/necessity-and-warrant.html&lt;/li&gt;
&lt;li&gt;namespace. (2020, March 23). &lt;em&gt;On necessity&lt;/em&gt;. The Last Rationalist. https://www.thelastrationalist.com/on-necessity.html&lt;/li&gt;
&lt;li&gt;Miller, G.A. (1920). &lt;em&gt;Missionary morale&lt;/em&gt;. Google Books (orig. New York, Cincinnati: The Methodist Book Concern).&lt;/li&gt;
&lt;/ol&gt;</content></entry><entry><title>Fuzzies and Saddies Part Three: Video Games As Empirical Evidence For Saddies</title><link href="https://www.thelastrationalist.com/fuzzies-and-saddies-part-three-video-games-as-empirical-evidence-for-saddies.html" rel="alternate"></link><published>2020-04-24T00:00:00+02:00</published><updated>2020-04-24T00:00:00+02:00</updated><author><name>Extropian Zealot</name></author><id>tag:www.thelastrationalist.com,2020-04-24:/fuzzies-and-saddies-part-three-video-games-as-empirical-evidence-for-saddies.html</id><summary type="html">&lt;p&gt;&lt;small&gt;Note: Because I had the sudden epiphany that I haven't consulted the neuroscience literature 
yet, part 2 of this series will be delayed. Since part 3 is almost a standalone essay, 
I've decided to release it out of order.&lt;/small&gt;&lt;/p&gt;
&lt;p&gt;Perhaps the first time I fully grasped the unvarnished reality …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;small&gt;Note: Because I had the sudden epiphany that I haven't consulted the neuroscience literature 
yet, part 2 of this series will be delayed. Since part 3 is almost a standalone essay, 
I've decided to release it out of order.&lt;/small&gt;&lt;/p&gt;
&lt;p&gt;Perhaps the first time I fully grasped the unvarnished reality of what a video game is
was reading &lt;a href="https://steve-yegge.blogspot.com/2012/03/borderlands-gun-collectors-club.html"&gt;The Borderlands Gun Collector's Club&lt;/a&gt; by Steve Yegge.
This ultra-long tongue-in-cheek blog post might be the best essay on game design ever written.
It's witty, empirical, cynical, documents a real (and horrifying) video game subculture, and 
digs without mercy into the underlying glitches in human psychology that make it work. Yegge's
unsentimental definition of 'fun' as 'addictive' turns the entire premise of &lt;em&gt;game&lt;/em&gt; design on its ear.
Your plebeian expectation that video games are supposed to be about enjoyment means nothing to Yegge.
To him a video game is a series of exploits in human motivation which compel the mammal brain to interact
with it for hours, even though it's completely disconnected from any tangible reward or consequence in life.&lt;/p&gt;
&lt;p&gt;Every video game then is an experiment in human motivation, and they always have been. In their book &lt;em&gt;Racing The Beam&lt;/em&gt;,
Nick Montfort and Ian Bogost sketch the history of commercial video games from arcade machines to Atari.
The founder of Atari, Nolan Bushnell, worked as a carnival barker before he ever created any video
games. After his exposure to &lt;a href="https://en.wikipedia.org/wiki/Spacewar!"&gt;Steve Russell's Spacewar!&lt;/a&gt;, he drew on that experience
extensively to design what we would recognize as the modern video game. &lt;a href="https://xkcd.com/1259/"&gt;Like the bee orchid&lt;/a&gt; 
which serves as testimony to the existence of a particular winged insect, each video game is testimony to the existence 
of a form of human motivation. This means we can analyze video games as a way of
getting around the inherent biases and restrictions in the motivation literature.&lt;/p&gt;
&lt;p&gt;And what's notable about these early arcade machines, Atari games, and even many NES and SNES titles,
is that they're &lt;em&gt;not fun&lt;/em&gt;. Like the carnival games they're indirectly based on, they rely as much
on frustration and negative feedback as they do positive feelings to motivate engagement. When I was 
a kid there was a period of about 3-5 years where you could buy retro games in thrift stores and resell
them on eBay for money, which my father did often. So I ended up with a massive game collection spanning
most of the post-Atari era of gaming. And what often struck me about games on the NES or SNES was how 
&lt;em&gt;relentlessly difficult&lt;/em&gt; they were in comparison to later titles. The sort of experience you'd have
playing them is &lt;a href="https://www.youtube.com/watch?v=ssLeEzA1EC0"&gt;captured well by early episodes of The Angry Video Game Nerd&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
I hate this game. But why am I playing it? Well that's the question everyone has asked themselves and they all have the same reason: because you're angry and you want to win. You want to beat the Nintendo; but the cold fact is that nobody cares but you.
&lt;/blockquote&gt;

&lt;p&gt;"I'm angry and I want to win" is the basic essence of the attitude that got me through 
all the hard STEM classes in my associate's degree taken consecutively. I'd failed Calculus II 
at some point and taken all the easy classes before attempting it again, only to be met with 9 months of pain if I wanted
to finish the degree. Normally when people encounter that they just drop out. College is a game where you 
drink poison, then prove your value by drinking increasingly large swigs of poison. If I wanted to win I had
to drink the poison on a much more aggressive schedule than normal. It's not an experience I'd recommend to you 
but something analogous is often necessary to make progress in life.&lt;/p&gt;
&lt;p&gt;The fact that these games exist and people still want to play them is very strong evidence for the existence of saddies.
Perhaps more astonishing is that they come earlier in the development process than games that are hedonistic. Before video
games had the ability to entertain you, &lt;a href="https://www.youtube.com/watch?v=r-xda6XkkEs"&gt;they would frustrate you&lt;/a&gt;. Plenty of modern games have rediscovered and continue to
use this basic premise. Often, they're even held up as the best examples of the medium. &lt;a href="https://en.wikipedia.org/wiki/Dark_Souls"&gt;Dark Souls&lt;/a&gt;
is an infamously difficult game which some critics consider one of the best ever made. Even less popular indie titles like 
&lt;a href="https://en.wikipedia.org/wiki/FTL:_Faster_Than_Light"&gt;FTL: Faster Than Light&lt;/a&gt; receive strong positive reviews while being
uncompromising in their difficulty. &lt;/p&gt;
&lt;p&gt;For many in the game industry, these frustration-driven titles are a time-honored tradition that they're disappointed to see
replaced by gentler games like Minecraft. Bennett Foddy even went so far as to &lt;a href="https://www.youtube.com/watch?v=IO6ouSMm7Uc"&gt;make an entire tribute game&lt;/a&gt; 
called &lt;em&gt;Getting Over It&lt;/em&gt; to highlight the receding focus on hard games. Most people interpret Getting Over It as a 'troll' game
where the purpose is just to piss the player off for giggles, but I don't think so. &lt;a href="https://www.youtube.com/watch?v=IO6ouSMm7Uc"&gt;The 12 minute monologue&lt;/a&gt; 
included with Getting Over It that plays in snippets as you progress through the game is earnest, and by the end increasingly personal. 
It's a meditation on challenge in video games and in life, and what it says about our culture that we've begun to replace difficult personal experiences
of triumph with shallow imitations to be browsed and discarded at an accelerating pace. To Foddy, this disposability is a sign of
decadence and a collective agreement to surround and identify ourselves with garbage. Perhaps the core of his critique can be summed
up in a statement he makes before what is widely agreed to be the hardest part of the game:&lt;/p&gt;
&lt;blockquote&gt;
An orange is sweet juicy fruit locked inside a bitter peel. That's not how I feel about a challenge. I only want the bitterness. It's coffee, it's grapefruit, it's licorice.
&lt;/blockquote&gt;

&lt;p&gt;I think what Foddy is trying to get at, and perhaps too nice to say is: If 'sweet juicy fruit locked 
inside a bitter peel' is how you feel about your life you will always be unsatisfied. Life is not a series 
of islands made from passion and joy floating in an ocean of miserable feelings. Those 'miserable feelings' 
are what life &lt;em&gt;is&lt;/em&gt;; they constitute 95 percent of your life by volume, and zombiehood is when you have no idea 
how to appreciate any of it. Ernest Becker discusses how the fear of life is deeply intertwined with the fear of death.
He talks about it as though creation is so fabulous that we shrink away from it. Which it is, but it's also 
deeply horrible. The idea that we simply fear life because it's too much good stuff for us is uncharacteristically 
optimistic of Becker. The reality is even harsher and sadder than that. Life is mostly awful things, which you fear, 
it's also sometimes good things, which you also fear. You're just weak like that.&lt;/p&gt;
&lt;p&gt;A lot of where people fall down with this stuff is that they live in a hedonistic culture, and they have no notion of 
value outside of that. They see the real world, which is by volume made of suffering, and they think "okay, this hurts but, 
you know it's supposed to be &lt;em&gt;better&lt;/em&gt; than hedonism [where 'better' means 'more happy'] so I'll stick with it for a bit I guess".
Eventually they notice that the happiness they're expecting never arrives and conclude it's bogus. Their expectation of happiness is 
the entire problem. You have to take the ahedonistic emotions on their own terms, appreciate them in their own currency. &lt;/p&gt;
&lt;p&gt;And few games attempt to guide the player through that more authentically than Pathologic.&lt;/p&gt;
&lt;p&gt;A 2005 Russian cult classic, &lt;em&gt;Pathologic&lt;/em&gt; is a game that gets mixed reviews in the West. 
Critics have plenty to be unhappy with. Pathologic doesn't conform to the standard expectations
of what a video game is supposed to be. The sophisticated plot requires cognitive engagement,
most of the game consists of walking from place to place and talking to people, combat seems
intentionally designed to be clunky and unsatisfying, the graphics are limited, its game world
is a rural town in 9 shades of brown viewed several meters at a time through a poisonous fog, 
and the writing is a dubious translation of what amounts to a novel of verbose philosophical
commentary shoved in the player's face during gameplay.&lt;/p&gt;
&lt;p&gt;The basic premise is that you play one of three doctors &lt;a href="https://www.msn.com/en-us/health/medical/the-untold-origin-story-of-the-n95-mask/ar-BB11D9tE"&gt;sent to a town in the Russian Steppe
to deal with a deadly plague outbreak&lt;/a&gt;.
Your three choices are a (philosophically) rational doctor, an empirical surgeon, and a miracle working child.
Pathologic builds the game around 12 days of events in which each character participates no matter who
you pick. Events happen with or without you, and if you fail your main quest for the day, one of your supporters
falls ill and prevents you from getting the good ending unless you can cure them. Because it's &lt;em&gt;survival&lt;/em&gt;
horror you have to eat food, sleep, keep your immunity to the plague up, etc. A great deal of the game's
tension is built around balancing these competing problems while still making progress in the story. 
Because the game is constructed like a novel, it has a lot of plot depth to it that isn't really possible
to summarize in this post without making it many paragraphs longer. If you're really interested in hearing
more I suggest hbomberguy's excellent 2 hour video essay review of the game:&lt;/p&gt;
&lt;iframe width="100%" height="350" src="https://www.youtube-nocookie.com/embed/JsNm2YLrk30" frameborder="0" loading="lazy" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen&gt;&lt;/iframe&gt;

&lt;p&gt;I haven't played it myself, but I did watch significant portions of a full playthrough on YouTube
to verify that this review more or less tells the truth. As a matter of game design Pathologic tends
to enjoy challenging the player by setting up a scenario that puts multiple competing interests into
play at once and then forcing the player to resolve it. For example, hbomberguy was particularly
impressed by an event that happens on day 2. After word of the plague gets out, all the supplies in
town become much more expensive as everyone panic buys them. One of your side quests for the day
(if you play as the rational doctor) is to help someone set up a shelter by buying what's left of
the food and delivering it to them. You of course don't have the money on hand to do this, because
food is wicked expensive. So you collect donations, buy the food, then deliver it. Sounds simple
enough, but here's the catch: A naive player probably didn't buy very much food on day 1. As they're
walking around town with this giant pile of money, then later giant pile of food, the hunger meter
creeps upward. It's possible to eat the food to relieve the hunger, but that's obviously morally 
suspect. A player that sticks to their morals and completes the quest finds out the shelter has been
contaminated with the plague and the food has to be returned to the quest giver in abject defeat. 
You don't get any useful food as a reward, but you do get nuts that can be bartered with child NPCs.
Eating the nuts (as you might be tempted to do with your hunger meter so high) is a bad idea: they
provide almost no hunger reduction and sell for quite a bit of value in the town's barter economy.&lt;/p&gt;
&lt;p&gt;I actually watched this same quest in someone else's playthrough, and got to notice how much variance
in the experience is put into the game. Hbomberguy is forced to use his own money to buy the
food, whereas the more inquisitive player I watched got a dialogue tree that let him collect an extra
donation. Hbomberguy doesn't listen to the explanation of the reward given at the end of the quest,
so he has to "find out" that the nuts are valuable in the barter economy. This is actually directly
told to the player &lt;em&gt;if they pay attention to what people say to them&lt;/em&gt; and pick the right dialogue 
options. In hbomberguy's playthrough he doesn't have to find a plague house for the main quest because he
already encounters one doing this side quest; the other fellow ends up doing both quests. Watching these 
two players have the same experience back to back makes it really clear how the game is structured and
what things it does and doesn't reward. Players who pay attention and keep an open mind prosper, players
that try to rush past 'irrelevant' dialogue and focus on 'gameplay' miss that they're literally skipping
it.&lt;/p&gt;
&lt;p&gt;This is in marked contrast to typical game design. In the developer documentaries produced for the Halo 
trilogy, designers often bring up the concept of '30 seconds of fun'. They say if the developers do their 
job right, the player should experience a moment of flow killing baddies that's fun for 30 seconds, and 
the key to making a good video game is to loop that 30 seconds over and over in slight variations. Someone 
who plays Halo multiplayer for a year straight should expect to experience 30 seconds of fun a million times. 
Pathologic by contrast cannot offer you a million instances of '30 seconds of fun'. In fact, I'm not sure 
there's 30 seconds of fun to be had in the whole game. Every mechanic is designed to be about something other 
than (usually the opposite of) 'fun'. The core game loop is not centered around zapping the player's pleasure 
center continuously.&lt;/p&gt;
&lt;p&gt;Instead what Pathologic does to motivate the player revolves around its approach to setting. The setting of Pathologic 
is rich in detail and anthropological in its construction. Its authors put a lot of work into trying to depict the game world as a
complete culture and society, rather than just a generic location for events to take place in. And unlike &lt;a href="https://en.uesp.net/wiki/Main_Page/"&gt;The Elder Scrolls&lt;/a&gt;
which hides its extensive background and lore in largely optional in-game 'books' and skippable dialogue, Pathologic puts
the setting front and center to the experience. Even a ruthlessly efficient player will be confronted with their status
as a foreigner in the town, where the conflict between rural tradition and modernism sees its flash point. This culture is 
vital to the game because it provides a vehicle for attachment. Without the anthropology and well constructed plot it's
not clear why someone would even bother with a full playthrough. George Miller discusses this briefly in the
context of missionary work as a potential source of intellectual engagement:&lt;/p&gt;
&lt;blockquote&gt;
UNIQUE OPPORTUNITIES &lt;br&gt;
The missionary has some intellectual opportunities that are denied to his fellows at home. There are Oriental literatures and philosophies that supply fascinating and fruitful fields of research. There are natives with whom he may discuss questions of the spirit and from whom he may secure valuable suggestions as to the interpretation of some of these long-locked treasures of the ancient mind. But if the missionary is to keep his own spirit fresh and maintain an intellectual morale that will not fail him, he will have to solve in his own way the problem of having always a fresh and partly read book on his desk or in his traveling bag.
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— George Miller, &lt;i&gt;&lt;a href="https://books.google.com/books/download/Missionary_Morale.pdf?id=WNwPAAAAYAAJ&amp;output=pdf&amp;sig=ACfU3U3Ki2gZePj-nDcZo5WDVu9Z_KQC5g"&gt;Missionary Morale&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;p&gt;I think that in the context of working on X-Risk, the 'setting' of our world takes on an analogous role. 
Earlier I said that a core trait of Eliezer's rationalist is a love for the world and its inhabitants. 
Knowing the world is a basic prerequisite to loving the world. Wise people understand that "love 
at first sight", an ignorant love based on shallow perceptions, is the province of juveniles and fools. 
That love often fades the deeper you get to know the object of its focus. It's the love
that solidifies and deepens with familiarity that is worth having. I've found that history, anthropology, 
sociology, political science, economics, and similar subjects do not just improve my ability to comprehend
and try to intervene in the world (agency &amp;amp; sanity). They also deepen my attachment to and appreciation for
humanity as a concrete, existing entity. &lt;/p&gt;
&lt;p&gt;Your motivating attachment to the world &lt;a href="https://www.readthesequences.com/Something-To-Protect"&gt;is what you have to protect&lt;/a&gt;. 
It defines your &lt;em&gt;Dramatis Persona&lt;/em&gt; and the shape your agency can take. If your character motivation is driven by
focusing on a particular person, family, or institution then your strategy choices are constrained to 
preserve that focus. If anything resembling a winning strategy is outside those constraints, it's not
in your power to pursue it. I'm not saying you've already lost with that sort of limitation, 
but you do need to acknowledge the gravity of what it means. &lt;a href="https://www.extropian.net/notice/9q8GNUm3M026xiiJXs"&gt;Whatever you can't bear to part with
will control you&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
Linji Yixuan writes about Buddhism: &lt;br&gt;

&lt;blockquote&gt;
Followers of the Way [of Chán], if you want to get the kind of understanding that accords with the Dharma, never be misled by others. Whether you're facing inward or facing outward, whatever you meet up with, just kill it! If you meet a buddha, kill the buddha. If you meet a patriarch, kill the patriarch. If you meet an arhat, kill the arhat. If you meet your parents, kill your parents. If you meet your kinfolk, kill your kinfolk. Then for the first time you will gain emancipation, will not be entangled with things, will pass freely anywhere you wish to go.
&lt;/blockquote&gt;

&lt;p&gt;The same sort of deal applies to relentless determination: &lt;br&gt;&lt;/p&gt;
&lt;p&gt;If you meet yourself on the road, kill them.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The overall effect of this is demonstrated well by Pathologic's design strategy to motivate the player.
The game consistently rewards exploration; ignoring the NPCs on the street that you can barter
with is a death sentence. These same NPCs also have stories they can tell you about the town, which are
presented alongside stuff you need to continue through the game so you're more likely to actually read
them. The soundtrack is well-crafted dark ambient music that helps push the player to trance out and think
during all the walking they have to do from building to building. Vital hints are given out by characters
in the same dialogues as philosophical and cultural background, rewarding you if you pay attention and
actively think about what you're being told. Characters are often out to deceive and lie to you, forcing
you to really think about their motivations and how they fit into the overall structure of the setting. 
If the writing and the worldbuilding and the music and the atmosphere do their job, if you're charmed by
the story and its setting, if the game manages to get you invested in its world &lt;a href="https://approachingaro.org/wrathful-practice"&gt;the experience is transformed&lt;/a&gt;
from tedium to something profound and worthwhile:&lt;/p&gt;
&lt;blockquote&gt;
All the least fun bits of the first playthrough are ramped up to eleven, the walking is even more excruciating and circular, the survival mechanics are harder on account of having no money and low reputation a lot of the time and not having as many rich friends. Many of the quests are deliberate attempts to waste your time and get you killed by Foreman Oyun. But you're invested now, the suffering is engaging and you want to know what happens; how it all shakes out for him. You take the hours of walking in stride. You see all the efforts you go through as proof that you're willing to go through hell and high water to save this town. If you didn't like this game you wouldn't get to this point anyway but if you somehow did, it's excruciating, it's awful, you're having a shit time being bored. &lt;br&gt; &lt;br&gt;

But if you care about the story, if you got immersed in the atmosphere and are engaged fully with the survival mechanics and your understanding of how much harder things are now, this is the best fucking time I've had in a game in years. It's still not fun, it's something else, this other thing I can't even describe. It's satisfying in a way I'm not used to games being. In a way the game is asking you a serious question about whether you're willing to be punished to succeed. If you give up and close the game and stop playing it you're basically letting the town die without your help aren't you? Oyun is trying to make you give up and stop playing, and I'm not gonna let him. I'm gonna solve this shit and I'm gonna cure the town.
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— hbomberguy, &lt;i&gt;Pathologic is Genius, And Here's Why&lt;/i&gt;
&lt;/blockquote&gt;

&lt;p&gt;On some level, the player knows this is futile. Hbomberguy is discussing his second playthrough
as the empirical surgeon, which means he's already seen how the game ends. By the 12th day nearly
everyone in town has been killed by a combination of plague, riots, and famine; ultimately the player fails.
Their final decision is about which sliver of value to preserve after the carnage has ended. That
futility doesn't stop Hbomberguy, and the game anticipates this. In the game's very good ending where
you save all of your supporters as well as the supporters of a second playable character, you're informed
the game world is an imaginary place whose characters are puppeted by children. This hurts, to know that
everything you've worked for is all just a game in someone's imagination. That is until you get the secret
ending where the developers point out that the children controlling the game world are also a fictional device
and &lt;a href="http://thelastrationalist.com/on-necessity.html"&gt;your expectation that you're participating in anything other than a game is trivially unreasonable&lt;/a&gt;.
The game is played in full knowledge of its probable futility:&lt;/p&gt;
&lt;blockquote&gt;
The Theater of Cruelty is a Theater of Death – a pantomime of suffering, and a look, directly in the eye, at existential dread.
You cannot come to a conclusion of what to do in the face of the absurd without first encountering the absurd.
And the game, quite blatantly, gives its conclusions;
though you may die, though you may [fail], though you may be forced to endure and live with the permanent consequences of your mistakes,
you’re still encouraged to pick yourself up and carry on.
Though it presents a scenario in which victory seems impossible, it encourages you to keep trying anyway.
Though it presents a world where doing the right thing and pressing on forward might have no guarantee of reward, it pushes you to keep going, regardless.
The game tells you, openly, that you will lose; that you cannot save everyone, that it’s a fool’s errand to even try –
and then, with a wink and a smile, it tells you to BE that fool.
“Pathologic” is, ultimately, a game about hope and determination, in the face of complete existential destruction.
It’s easy to have hope in a world of smiles and rainbows – in a world where you know the future is guaranteed to be fine.
It’s so much harder to have hope when the world is falling apart around you, and so much 
harder to persuade yourself to carry on when you’ve already made so many mistakes, and so
many are suffering, and everything seems so pointless. 
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— SulMatul, &lt;i&gt;&lt;a href="https://www.youtube.com/watch?v=FKhSbZPBEKc"&gt;Dissecting Pathologic 2; The Best Game of 2019&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is &lt;a href="https://www.extropian.net/notice/9obGCdlz681XfZJRce"&gt;the other side of the coin with refusing necessity&lt;/a&gt;. In a world
full of uncertainty you can never &lt;em&gt;really&lt;/em&gt; be sure it's all hopeless. If that sounds like wishful thinking, there's historical
precedent for it. I'm always struck by the improbable timeline we've experienced where the United States and USSR didn't nuke
each other into oblivion during the cold war. We got close several times, but nuclear war has so far been averted. If you went
back to 1945 and predicted there would be no nuclear war between then and 2020, you would probably be considered an incredible
optimist by &lt;a href="https://www.thelastrationalist.com/memento-mori-said-the-confessor.html"&gt;anyone thinking straight&lt;/a&gt;. 
I'm given to understand that &lt;a href="https://www.youtube.com/watch?v=lb13ynu3Iac"&gt;from the moment of its creation&lt;/a&gt;, the physicists who invented 
the atomic bomb &lt;a href="http://blog.nuclearsecrecy.com/2012/09/12/in-search-of-a-bigger-boom/"&gt;understood they had just caused the end of the world&lt;/a&gt;.
The exact path by which nuclear war was averted would have seemed implausible and strange: that during an especially tense moment the USSR
would allow itself to collapse rather than risk ending the world by desperately clinging to power. In spite of &lt;a href="https://www.amazon.com/Command-Control-Damascus-Accident-Illusion/dp/0143125788"&gt;basic mismanagement and human
error&lt;/a&gt;, &lt;a href="https://www.youtube.com/watch?v=Mm0yQg1hS_w"&gt;animosity and hatred&lt;/a&gt;,
&lt;a href="https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident"&gt;poor luck&lt;/a&gt;, and &lt;a href="https://en.wikipedia.org/wiki/We_begin_bombing_in_five_minutes"&gt;plain old reckless stupidity&lt;/a&gt;
the world is still here. &lt;/p&gt;</content></entry><entry><title>Fuzzies and Saddies Part One: X-Risk and Motivation</title><link href="https://www.thelastrationalist.com/fuzzies-and-saddies-part-one-x-risk-and-motivation.html" rel="alternate"></link><published>2020-04-21T00:00:00+02:00</published><updated>2020-04-21T00:00:00+02:00</updated><author><name>Extropian Zealot</name></author><id>tag:www.thelastrationalist.com,2020-04-21:/fuzzies-and-saddies-part-one-x-risk-and-motivation.html</id><summary type="html">&lt;p&gt;A couple years ago I had a discussion with one of my readers about what it looks like to take &lt;a href="https://en.wikipedia.org/wiki/Global_catastrophic_risk"&gt;existential risks&lt;/a&gt;
seriously. Unfortunately, they were deep into &lt;a href="https://www.thelastrationalist.com/on-necessity.html"&gt;the refusal of necessity&lt;/a&gt;. 
While they claimed to care about the impending end of the world their aesthetic was &lt;a href="https://www.urbandictionary.com/define.php?term=Chuunibyou"&gt;anime kitsch&lt;/a&gt;
and …&lt;/p&gt;</summary><content type="html">&lt;p&gt;A couple years ago I had a discussion with one of my readers about what it looks like to take &lt;a href="https://en.wikipedia.org/wiki/Global_catastrophic_risk"&gt;existential risks&lt;/a&gt;
seriously. Unfortunately, they were deep into &lt;a href="https://www.thelastrationalist.com/on-necessity.html"&gt;the refusal of necessity&lt;/a&gt;. 
While they claimed to care about the impending end of the world their aesthetic was &lt;a href="https://www.urbandictionary.com/define.php?term=Chuunibyou"&gt;anime kitsch&lt;/a&gt;
and their concrete plan of action amounted to "convince people to invest their identity in anime kitsch, breaking free from The 
System and then &amp;lt;mumble mumble agency&amp;gt;". This was my first encounter with such a person, so I argued pretty passionately with them in favor
of better ideas. As I'd come to find out later, &lt;a href="https://hivewired.wordpress.com/2019/12/02/hemisphere-theory-much-more-than-you-wanted-to-know/"&gt;this sort of thing&lt;/a&gt;
&lt;a href="https://hivewired.wordpress.com/2020/02/03/the-ends-of-identity/"&gt;is a very common&lt;/a&gt; &lt;a href="https://lexicaldoll.wordpress.com/2017/08/09/on-the-seelie-and-unseelie-courts/"&gt;failure mode&lt;/a&gt;
(which I blame &lt;a href="https://www.hpmor.com/"&gt;HPMOR&lt;/a&gt; and &lt;a href="https://www.fimfiction.net/story/62074/"&gt;Friendship Is Optimal&lt;/a&gt; for) and spending time fighting it is
like pulling weeds with your bare hands. But this was before that: as the conversation wound deeper and deeper into the reader's belief system and then
eventually into the reader themselves (because whenever incorrect ideas get woven into someone's identity, &lt;a href="https://www.thelastrationalist.com/memento-mori-said-the-confessor.html"&gt;to challenge the ideas is to challenge the person&lt;/a&gt;), it all bottomed out at the frank admission that to dedicate oneself to tackling
existential risk seemed to require giving up on a happy existence, and surely without that the devotion would crumble; they felt it was impossible to live
a life compatible with that sort of zeal. &lt;/p&gt;
&lt;p&gt;I spent the next two years with the question they didn't ask at the back of my mind: &lt;/p&gt;
&lt;p&gt;"How do I live a life that rises to the demands of
our situation, instead of one that compromises with comfort and leisure?"&lt;/p&gt;
&lt;p&gt;It is not an idle question. The 21st century is a suicide ritual, and at the 
end humanity kills itself. You already know how this story is supposed to go: 
Hollywood has spent millions to render it in exquisite detail. The corrupt, 
shortsighted governments, the naive hippies who get steamrolled by authority, 
the hopeful scientist who sighs wistfully as the world degrades around him into 
nothing, the apathetic populace &lt;a href="https://www.youtube.com/watch?v=nSXIetP5iak"&gt;that says “there is nothing I can do” until it 
is too late and nothing can be done&lt;/a&gt;. 
These are roles, and people know how to play them. It’s so easy to get caught up in 
the role, in the character you’re playing in this story, that you forget there’s a 
real world full of real people who will really die. Playing a role makes the situation 
acceptable; it's a way of entering a mutual suicide pact with others. &lt;/p&gt;
&lt;p&gt;At times this dynamic becomes so salient to me that situations are transformed into theater.
&lt;a href="https://www.youtube.com/watch?v=j8zsRJpY0mw"&gt;People become stereotyped in their body language, their conversations predictable&lt;/a&gt;.
Characters bow to the will of a narrative that demands blood, whose slaughter of the cast is baked
into the production's tragic conclusion. Classical theatre defines a protagonist as “the character 
whose fate determines whether the play is a tragedy or a comedy”; by that definition, we are in a Greek 
tragedy whose protagonist is man. Is there anyone in our story who deviates from 
their lines? Ask yourself if you're witnessing behavior inconsistent with a story
that ends in world destruction.&lt;/p&gt;
&lt;p&gt;When your social reality is a suicide ritual, society is only set up to help you
drink the kool-aid. Socially significant values are suicide values, and objecting 
to the play is unthinkable. &lt;a href="https://nickbostrom.com/fable/dragon.html"&gt;The wise philosopher is a trap&lt;/a&gt;
that tells you life is a profane illusion and the dragon does you a favor by 
dissolving you in his stomach. Your rulers won't stop polluting the sky 
no matter the apocalyptic outcomes &lt;a href="https://www.msn.com/en-sg/news/other/blue-skies-clear-canals-how-covid-19-is-halting-climate-change/ar-BB11okrE"&gt;because if the machine stops even for a moment your economy 
will crash&lt;/a&gt;.
&lt;a href="https://www.greaterwrong.com/posts/qiMxXa4MjnoP72kQD/where-is-my-flying-car"&gt;Technologies that might completely defeat scarcity as we know it&lt;/a&gt;
are a dream deferred because the dominant powers stopped believing in any use of nuclear power
beyond doomsday machines and assassinations. Poisonous lies are sold as radical religious truths
to the unwary from childhood onwards by people they rely on to survive. Every kingdom kneels to a 
totalitarian ideology &lt;a href="https://equilibriabook.com/molochs-toolbox/"&gt;that says it's better to let babies die than risk harm by stepping 
in to help&lt;/a&gt;. You've probably spent 12 of the first 18
years of your life &lt;a href="http://www.swaraj.org/multiversity/gatto_7lesson.htm"&gt;in an institution whose basic purpose is to cripple your agency and make sure
you have no time to know or value anything beyond what it is your place to know&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It is not an exaggeration to say that everything in your environment is metaphorically, figuratively,
and literally out to kill you. Your expectation that these forces will aid you to seek anything besides
death &lt;a href="https://www.thelastrationalist.com/on-necessity.html"&gt;is unreasonable&lt;/a&gt;. And &lt;a href="https://www.youtube.com/watch?v=jPkhuvI3Y8Y"&gt;as we're warned by 
the Pardoner's tale&lt;/a&gt;, those who go seeking death will surely
find him. Normally I don't talk about this because it's kind of a cliche (this too is a binding force) and
I think it's &lt;a href="https://twitter.com/vgr/status/1250460466377183233?s=19"&gt;worse than useless&lt;/a&gt; to exhaust the
subject without having anything useful to say about it. &lt;a href="https://hivewired.wordpress.com/2020/04/13/what-is-your-goal-hive/"&gt;How to win in this scenario&lt;/a&gt;
is a subject for another day, but right now I plan to discuss a necessary prelude to any realistic plan:
Maintaining your agency and motivation long enough to find and execute an intervention against our collision
course with death. &lt;/p&gt;
&lt;p&gt;Doing that requires you to master two elemental forces of motivation which we can term fuzzies and saddies.
Fuzzies are what you typically think about when you hear 'motivation': they're the good feeling you get when
you do a good thing. They're the recovery from despair caused by good food and sleep. They are even the subtle
pleasures obtained by leading a good and virtuous life. Fuzzies are mostly materially motivated, and in our 
advanced state of decadence the material aspect is emphasized and the corresponding emotions corrupted. It takes
active effort to recalibrate yourself and have a healthy relationship with warm fuzzies.&lt;/p&gt;
&lt;p&gt;Saddies, on the other hand, are essentially suffering-focused. Spite is a fairly basic suffering-focused emotion.
However, there are more subtle, sustainable, less hateful forms of saddie. These are generally suppressed, repressed,
or just plain selected against in favor of feelings that are easier to tie to various goods. They are of
particular interest to us because our Western social sickness is partially premised on denying their existence. &lt;/p&gt;
&lt;p&gt;&lt;img src="theme/images/plutchik_wheel.png" height="800" width="800"&gt;&lt;/p&gt;
&lt;p&gt;For this initial post we'll focus on fuzzies, but first a note on agency.&lt;/p&gt;
&lt;h2&gt;Agent Strategy&lt;/h2&gt;
&lt;p&gt;Let's say you care a great deal about human spaceflight. Naively you might think the best way to do 'human spaceflight'
as a cause area is to go work at NASA. But NASA is a government agency, and its ability to do human spaceflight is predicated
on public officials being willing to provide funding for serious technological development. It is entirely possible that,
granting that NASA is the best way to do human spaceflight (&lt;a href="https://en.wikipedia.org/wiki/SpaceX"&gt;which is by no means a given&lt;/a&gt;),
the best thing you could do to get what you want is become some kind of space lobbyist that helps increase the resources
available to NASA rather than play any direct role in rocket building. Noticing this requires you to think about cause areas
in terms of the whole system that desired outcomes exist in. It's not enough to just be proximate to the outcome you want,
you need to choose an intervention which provides the most value &lt;em&gt;with respect to the entire chain of events&lt;/em&gt; that is necessary
to make it happen. The best way to save endangered species might have nothing to do with fighting poachers, but with working on
making 3rd world governments more stable. The best way to advance science &lt;a href="https://waitbutwhy.com/2015/11/the-cook-and-the-chef-musks-secret-sauce.html"&gt;might be to become an engineer and build the instruments
necessary for science to advance&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;I call this going upstream, but I'm tempted to call it &lt;a href="https://waitbutwhy.com/2015/11/the-cook-and-the-chef-musks-secret-sauce.html"&gt;the Musk strategy&lt;/a&gt; 
because Elon Musk is cool and I'm not. A lot of people do things that look sort of like going upstream, but not really. For example, &lt;a href="http://www.paulgraham.com/hs.html"&gt;Paul Graham advises 
people&lt;/a&gt; on the explore side of the explore/exploit dichotomy to 'stay upwind', choosing options that
increase the number of options they have available later. I think this is pretty good advice, but it implies that once you
make a specializing choice you can never go back. In Graham's model your only option is to do things within the increasingly
narrow purview of what's been left available to choose by your past selves. I think in practice this is how a lot of career 
trajectories end up looking, but that doesn't make it a great model for how agents should steer. Instead it probably makes more sense to
&lt;a href="https://www.readthesequences.com/Something-To-Protect"&gt;figure out what you want&lt;/a&gt; then make your best model of what causal factors go into 
that outcome happening. You want to work on influencing the most important factors and addressing root causes, rather than just things that
seem most directly related to the problem. &lt;/p&gt;
&lt;p&gt;Following this strategy recursively tends to converge on things like AI risk or genetic engineering as the best levers for moving the world.
To avoid furthering the groupthink surrounding these subjects I'll leave you to prove this for yourself. &lt;/p&gt;
&lt;h2&gt;Fuzzies&lt;/h2&gt;
&lt;p&gt;The term 'fuzzies' comes from Eliezer Yudkowsky's advice to &lt;a href="https://www.readthesequences.com/Purchase-Fuzzies-And-Utilons-Separately"&gt;Purchase Fuzzies and Utilons Separately&lt;/a&gt;.
There, he discusses separating the &lt;em&gt;feeling&lt;/em&gt; of doing good deeds from actually doing consequentialist-efficient good deeds.
Sure it might feel good to work in a shelter and cradle a puppy, but other people can do that work for minimum wage.
Unless you're a minimum wage worker your time is almost certainly better spent working extra hours and donating money.
Most people don't want to hear this, they volunteer at the shelter to look and feel like a good person and &lt;a href="https://www.thelastrationalist.com/memento-mori-said-the-confessor.html"&gt;anything which
contradicts that threatens their identity&lt;/a&gt;. They
want to believe there is a mystical value to their personally cradling that puppy which money can't buy. This is ludicrous
of course, but if you need that feeling to register having done a good deed, then it's best to buy it solely on its own merits,
in the cheapest possible manner. Eliezer's ideal rationalist splits 'warm fuzzies', status, and actual good into separate categories
and bargain-shops for each ruthlessly. &lt;/p&gt;
&lt;p&gt;We can consider our situation analogously. If we're really hopelessly dependent on comfort and leisure to maintain our activity,
then we want to strategize by choosing the least disruptive forms of comfort and leisure. &lt;a href="https://thezvi.wordpress.com/2017/12/02/more-dakka/"&gt;Perhaps you might start a gratitude 
journal&lt;/a&gt;, &lt;a href="https://marginalrevolution.com/marginalrevolution/2016/09/labor-force-participation-video-games.html"&gt;play low addiction video games&lt;/a&gt;, &lt;a href="https://en.wikipedia.org/wiki/Pomodoro_Technique"&gt;utilize the pomodoro method&lt;/a&gt;, or 
&lt;a href="https://www.bbc.com/news/health-27634990"&gt;learn a foreign language&lt;/a&gt;. In general you want to maximize 
life satisfaction while minimizing material cost and time expenditure. This is a simplified model of 
course, and doesn't capture concerns like reputation management. I assume a healthy dose of common 
sense is applied to my advice.&lt;/p&gt;
&lt;p&gt;My model of the main obstacle to extended motivation is lack of social support. This manifests
as:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Discussing high future shock ideas (or their theological equivalent) is considered deviant or low status&lt;/li&gt;
&lt;li&gt;Being cut off from the resource flow of your social superorganism&lt;/li&gt;
&lt;li&gt;No good role models for what you should be doing&lt;/li&gt;
&lt;li&gt;No institutions exist to support you&lt;/li&gt;
&lt;li&gt;Lack of social proof or validation, weak moral support from friends and family&lt;/li&gt;
&lt;li&gt;Lots of network effects you don't benefit from: literature, symbology, value-aligned strategic and tactical thought&lt;/li&gt;
&lt;li&gt;No proximate friends you can collaborate with or enjoy the company of&lt;/li&gt;
&lt;li&gt;Popular perspective may literally hold you to be a villain&lt;/li&gt;
&lt;li&gt;Alienation from certain concepts/values because they're framed in ways irrelevant to what you care about (e.g. money)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;These exacerbate and lead to a lack of material support, which manifests as:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Recurring anxieties about continued access to food, water, shelter, etc.&lt;/li&gt;
&lt;li&gt;Actual lack of the above.&lt;/li&gt;
&lt;li&gt;Being forced to work in an uncomfortable environment.&lt;/li&gt;
&lt;li&gt;No professionalization, movement members forced into 'day jobs' so they don't starve&lt;/li&gt;
&lt;li&gt;Having to accept higher levels of risk to satisfy needs (e.g. boat housing)&lt;/li&gt;
&lt;li&gt;Lack of shared space for meeting, coworking, etc.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;We're not the first people to deal with this problem. Plenty have been forced to endure
harsh social and physical environments to achieve their goals. Among the most relevant are those who
lived in a foreign place, enduring hardships for a cause. Missionaries and soldiers have
to deal with the most practical sorts of discomfort and get good at relieving it if they're
going to last on the field. They do this while enduring the existential malaise that comes
with being an extreme foreigner (for soldiers, it's difficult to be more foreign than 
when the natives shoot at you). To dissent from the dominant philosophy, epistemology,
symbol system, value system, and expected goals of your society is to live as either 
&lt;a href="https://blakemasters.com/post/24578683805/peter-thiels-cs183-startup-class-18-notes"&gt;an extreme insider or extreme foreigner&lt;/a&gt; in your own land. 
Sometimes you can even manage to be both at once. It is no coincidence that Jews are 
routinely persecuted in the places they reside.&lt;/p&gt;
&lt;h3&gt;Preparing For The Journey&lt;/h3&gt;
&lt;p&gt;We can look at these people's experience to get a better idea of how to handle
an extended period of alienation. Missionaries deal primarily with the social difficulties. 
The 1920 book &lt;a href="https://books.google.com/books/download/Missionary_Morale.pdf?id=WNwPAAAAYAAJ&amp;amp;output=pdf&amp;amp;sig=ACfU3U3Ki2gZePj-nDcZo5WDVu9Z_KQC5g"&gt;&lt;em&gt;Missionary Morale&lt;/em&gt; by George A. Miller&lt;/a&gt; 
deals directly with the problem of maintaining morale among Christian missionaries. 
Most of what it has to say on the subject focuses on preparing for the mission. Its 
emphasis on setting up the right selection filters and insisting the candidate thoroughly 
prepare themselves for the journey suggests this is a primary concern. Miller even 
provides a screening list to help filter flakes and malcontents:&lt;/p&gt;
&lt;blockquote&gt;
A board may occasionally reject some candidate who may eventually make good when given a chance, but the established rules of procedure have the backing of a century of experience, and a comparison of results attained by the approved candidates of the regular Mission Boards and the self-appointed missionaries sent out independently, establishes the soundness of the accepted principles of selection. &lt;br&gt; &lt;br&gt;

Dr. Arthur J. Brown, of the Presbyterian Board, mentions the accepted qualifications of the available candidate in the following order: &lt;br&gt;
&lt;ol&gt;
&lt;li&gt; Health, given first place because fundamental. &lt;/li&gt;
&lt;li&gt; Age, 25 to 33 years, with exceptions. &lt;/li&gt;
&lt;li&gt; Education, varying according to class of service. &lt;/li&gt;
&lt;li&gt; Executive ability and force of character. More needed than in work in the home land. &lt;/li&gt;
&lt;li&gt; Common sense. (Might be put next to health in order of importance.) &lt;/li&gt;
&lt;li&gt; Steadiness of purpose. To carry on after the halo has faded. &lt;/li&gt;
&lt;li&gt; Temperament, adaptability, reliability, amiability-in short, unselfishness. A missionary should at least be a gentleman. &lt;/li&gt;
&lt;li&gt; Doctrinal views. Conformity to accepted views, without surrender of private judgment. &lt;/li&gt;
&lt;li&gt; Marriage, an important factor in adjustment of work. &lt;/li&gt;
&lt;li&gt; Freedom from financial obligations. A mission field is not a place to pay debts or lay up bank accounts. &lt;/li&gt;
&lt;li&gt; Christian character and experience, without which all else must register but failure. (See full discussion of these and other essentials in The Foreign Missionary, by Dr. Brown.)&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;If we had to summarize these bullet points (with some adjustment for the current times) the missionary candidate 
must be healthy, compelling, intelligent, persistent, and free of lingering personal distractions like debts.
These are all fairly standard virtues, and it might be easier to summarize that the basic requirement is to be a
high-quality human being who is free to pursue the work. However, Dr. Brown's final point is worth considering in
detail: a 'Christian character and experience' without which all efforts are doomed to failure. I will be the last
person to tell you to cultivate a 'Christian character', but I do think there is an analogous thing which is necessary 
to bring out your best qualities in pursuit of an intervention into existential risk. There are four key ingredients that go 
into internalizing and implementing Eliezer's version of &lt;a href="https://web.archive.org/web/20131015142449/http://extropy.org/principles.htm"&gt;extropy&lt;/a&gt;: &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;High future shock. This is necessary to realize that there are solutions to the problems we have, and anything really worth fighting for. That it's not all hopeless, there are glorious things within our reach.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A love for the world and its inhabitants, &lt;a href="http://yudkowsky.net/other/yehuda/"&gt;the belief that death is Bad&lt;/a&gt;, a fully developed secular moral system. New Atheism is toxic nonsense because skepticism is toxic nonsense. The skeptic focuses only on downside risk, EY-style rationality is an improvement because it considers opportunity cost. It's not enough to not-lose in rationality, you need to capture the foregone upside.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Sanity. You need to have a very clear view of the world, and be very well in tune with yourself, &lt;a href="https://www.thelastrationalist.com/memento-mori-said-the-confessor.html"&gt;have a strong well constructed (i.e., not full of ad-hoc garbage) identity&lt;/a&gt;, good epistemics, etc.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Agency. You need to be well versed in the practical methods of piloting yourself to actually do things. Building habits, not giving up at the first setback, strength, etc.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Developing each of these four things in yourself is absolutely necessary to persist past pain and do useful work.
This 'extropian character' is of course dependent on a deep familiarity with the philosophy's core themes and aesthetics.
If the overriding theme of Christianity is repentance and salvation, the theme of Eliezer's extropy is 
&lt;a href="http://thelastrationalist.com/on-necessity.html"&gt;necessity&lt;/a&gt; and &lt;a href="https://www.thelastrationalist.com/necessity-and-warrant.html"&gt;necessary conclusions&lt;/a&gt;. 
To teach someone extropy is to teach them necessity. To advance, &lt;a href="https://www.readthesequences.com/Noticing-Confusion-Sequence"&gt;you must resolve confusions&lt;/a&gt;, 
stop &lt;a href="https://www.rijnlandmodel.nl/achtergrond/algemene_semantiek/hayakawa/ch10_abstraction-ladder.htm"&gt;confusing layers of abstraction&lt;/a&gt;, 
and become a scholar of natural philosophy. Rationality and extropy go together in the same way that to 
become a better Buddhist you have to meditate. Confusing as it may have been, it's not surprising that 
Eliezer labeled his extropy and his rationality as the same thing. &lt;a href="http://www.sl4.org/shocklevels.html"&gt;High future shock&lt;/a&gt; is meant to follow 
from an unbiased consideration of human potential. If &lt;a href="https://www.youtube.com/watch?v=kmFOBoy2MZ8"&gt;your unbiased consideration of human potential&lt;/a&gt;
would not suggest high future shock, &lt;a href="https://archive.org/details/TheMarsProject-WernherVonBraun1953/"&gt;this is a sign&lt;/a&gt; that
your natural philosophy is too weak.&lt;/p&gt;
&lt;h3&gt;Useful Tips Once You've Started&lt;/h3&gt;
&lt;p&gt;In addition to its advice on selecting missionary candidates, 
&lt;em&gt;Missionary Morale&lt;/em&gt; also provides tips to the reader on personally 
maintaining their morale during missionary work. &lt;/p&gt;
&lt;h4&gt;Dealing With...&lt;/h4&gt;
&lt;h5&gt;Loneliness&lt;/h5&gt;
&lt;blockquote&gt;
Capacity for Isolation. &lt;br&gt; &lt;br&gt;

A missionary is a long way from his own kind of people, and if he cannot come to be at home with his work he will die at heart. To be a friend of strangers and at the same time be content to live a lonely life is not always easy. He must be in good company when alone.
&lt;/blockquote&gt;

&lt;p&gt;This one honestly depends on what you're doing, but it seems pretty likely this will come up for you.
The sort of work that goes into a serious pursuit of X-Risk interventions or even just ordinary technical
proficiency tends not to be social. Traditionally alchemists spent a great deal of time indoors on their
chemical research, which contributed greatly to the perception that they were somehow conjuring spirits
or demons. While in the modern era you are unlikely to be accused of witchcraft, this isolation does have
its costs and only you can set your tolerance level. &lt;a href="https://www.cbsnews.com/news/marshall-medoff-the-unlikely-eccentric-inventor-turning-inedible-plant-life-into-fuel-60-minutes/"&gt;The most extreme story I've heard&lt;/a&gt; in this vein centered around a man who shut himself in his lab for years looking
for a way to efficiently extract fuel from plants. Realistically you probably do not have the wealth or 
psychological fortitude for that, let alone the natural talent to potentially succeed. &lt;/p&gt;
&lt;p&gt;Perhaps the most potent cure to this kind of loneliness is the Internet.
For better or ill, it's now possible to opt-in to all kinds of divergent social
realities in forums and chatrooms. Horrible as it may be, the power of these
groups to &lt;a href="https://www.nbcnews.com/news/us-news/she-wanted-freebirth-no-doctors-online-groups-convinced-her-it-n1140096"&gt;compel all manner&lt;/a&gt; of
&lt;a href="https://www.cbsnews.com/news/anti-vax-movement-among-top-10-global-health-threats-for-2019-world-health-organization/"&gt;self destructive action&lt;/a&gt; is evidence
that they're effective at providing social support to people who do things which
are (often rightly) considered crazy by the people around them. &lt;a href="https://www.salon.com/2016/12/10/pizzagate-explained-everything-you-want-to-know-about-the-comet-ping-pong-pizzeria-conspiracy-theory-but-are-too-afraid-to-search-for-on-reddit/"&gt;Negative
examples get all the attention&lt;/a&gt;, but listening to COVID-19 newsgroups like &lt;a href="https://www.reddit.com/r/coronavirus"&gt;/r/coronavirus&lt;/a&gt; or
LessWrong would have made you seem very crazy even as you protected your
friends and family. If your society is organized as a suicide pact, not wanting
to die is considered pathology anyway. Dealing with people thinking you're crazy
is half the emotional problem we're trying to solve here in the first place.&lt;/p&gt;
&lt;h5&gt;Poor Working Conditions&lt;/h5&gt;
&lt;p&gt;For most of my readers the reality will probably look less like the secluded
alchemist and more like the life of Erasmus or Leibniz,
where almost all important work is done on the margins of employment and in shabby conditions. Perhaps then the
advice should not be to get comfortable with loneliness (though you should) &lt;a href="http://nautil.us/issue/84/outbreak/how-a-nuclear-submarine-officer-learned-to-live-in-tight-quarters"&gt;but to get comfortable with
working under cramped and unsuitable conditions&lt;/a&gt;.
&lt;a href="https://www.space.com/15524-albert-einstein.html"&gt;Albert Einstein famously did his Nobel Prize winning work in between patent application reviews at his job&lt;/a&gt;.
I'm sure some of you will laugh but I can recall many times in high school when my family would go out to
dinner and I'd skip conversation in favor of reading a book. I didn't care how antisocial it made me seem,
&lt;em&gt;Mao And The Chinese Revolution&lt;/em&gt; was clearly more important than whatever drama my sister had managed to get
herself into that month. I faithfully continue this tradition, having spent Christmas of 2019 reading
&lt;em&gt;The World of Null-A&lt;/em&gt; rather than play board games or "catch up". &lt;/p&gt;
&lt;p&gt;This might seem mean, but free moments like that are the time you have in which to get work done. It is
exactly by sustained effort under conditions that aren't quite ideal that you end up accomplishing anything:&lt;/p&gt;
&lt;blockquote&gt;
From some cause like this, it has probably proceeded, that, among those who have contributed to the advancement of learning, many have risen to eminence in opposition to all the obstacles which external circumstances could place in their way, amidst the tumult of business, the distresses of poverty, or the dissipations of a wandering and unsettled state. A great part of the life of Erasmus was one continual peregrination; ill supplied with the gifts of fortune, and led from city to city, and from kingdom to kingdom, by the hopes of patrons and preferment, hopes which always flattered and always deceived him; he yet found means, by unshaken constancy, and a vigilant improvement of those hours, which, in the midst of the most restless activity, will remain unengaged, to write more than another in the same condition would have hoped to read. Compelled by want to attendance and solicitation, and so much versed in common life, that he has transmitted to us the most perfect delineation of the manners of his age, he joined to his knowledge of the world such application to books, that he will stand for ever in the first rank of literary heroes. How this proficiency was obtained he sufficiently discovers, by informing us, that the “Praise of Folly,” one of his most celebrated performances, was composed by him on the road to Italy; &lt;i&gt;ne totum illud tempus quo equo fuit insidendum, illiteratis fabulis terreretur&lt;/i&gt;: “lest the hours which he was obliged to spend on horseback should be tattled away without regard to literature.”
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Samuel Johnson, &lt;i&gt;&lt;a href="http://www.johnsonessays.com/the-rambler/no-108-life-sufficient-to-all-purposes-if-well-employed/"&gt;No. 108. Life sufficient to all purposes if well employed.&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;p&gt;Of course the best solution to this deprivation is to become wealthy. If you're
reading this, it's likely you're in a position to &lt;a href="http://www.paulgraham.com/wealth.html"&gt;make new wealth and capture
some of it&lt;/a&gt;. The odds aren't in your
favor: most new businesses fail. At the same time, most new businesses are
probably started by people who don't approach them very intelligently. It's
entirely possible that &lt;em&gt;your&lt;/em&gt; odds of starting a successful business are more
like a coinflip if you do your homework first. I don't know you, so I'm not
really in a position to tell you if the expected value is high enough, but
people are consistently risk-averse even when in raw utilitarian terms they
shouldn't be. It's entirely possible that the best thing you could do for the
world is take a 1% shot at getting rich over a 99% chance of adding some
marginal contribution.&lt;/p&gt;
&lt;h4&gt;Make A Point Of Reading Books&lt;/h4&gt;
&lt;p&gt;One of the strongest recommendations Miller makes is to read often during the Mission.
Since this book is in the public domain there's no need to reinvent the wheel; he makes
the argument himself intelligently enough:&lt;/p&gt;
&lt;blockquote&gt;
THE MISSIONARY AND HIS READING &lt;br&gt;
The case for the pastor's reading habit has been often and adequately stated. Find the best plea for the faithful reading of good books, new and old, by the man in the home land and then multiply it by three for the missionary. Verily there are three multipliers: &lt;br&gt;

&lt;ol&gt;
&lt;li&gt; Distance from the currents of the world's best intellectual and spiritual life.&lt;/li&gt;
&lt;li&gt; Isolation from kindred spirits of equal or greater ability. &lt;/li&gt;
&lt;li&gt; The daily belittling of petty tasks of more or less routine nature, without the social stimulus of virile American community life. &lt;/li&gt;
&lt;br&gt;
&lt;/ol&gt;
No man can maintain a keen mind without constant replenishing at the springs from which flow the contributions of the thinkers of all times. The missionary may be spared the dissipation of the multipage daily paper, though he is eager to see one when it reaches him; but he misses the undercurrent of stimulus that comes from what that paper represents in his life. And unless he can establish a regular course of self-imposed reading of the things worth while his mental life will inevitably go stale.&lt;br&gt;
In many missionary situations books are hard to get, but a man can surely arrange to read at least six new books a year, and less than that means a slowing up of intellectual life.
&lt;br&gt; &lt;br&gt;
THE SILENT DEATH &lt;br&gt;
The insidious mischief about the failing reading habit is that its departure is so silent and stealthy that one is never conscious of his loss until the guest has fled. And when a man ceases to read and grow, a subtle deterioration sets in that undermines the sources of his spiritual life, and he begins to slow up. If this were a conscious loss, it would not so much matter. One might recover the lost treasure and go on with his work. Possibly no man ever knows when his mind has lost its fresh approach to problems, its keen initiative in attack and its attractive strength in carrying burdens. But his associates know it and may wonder at the cause.
&lt;br&gt; &lt;br&gt;
The springs have ceased to flow and the mind is going dry. So insidious and deadly is the lethargy that follows the ending of a man's reading life that he may know it only by noting that he has ceased to read. Few of his distressed friends or his perplexed followers will have the wisdom or the grace to tell him of it. Only while a man stands beside the stream of living water that bears the intellectual life of the world can he minister to his fellow men fresh cargoes of the mind and spirit.
&lt;/blockquote&gt;

&lt;p&gt;I've personally witnessed in friends and family that people who stop reading dry out intellectually.
Interestingly enough, Miller comments on the 'dissipation' of the daily newspaper, which would presumably
have been a temptation that crowded out deeper reading. This threat was worth a passing mention in 1920, but in 
2020 the dissipation of so many tweets, blog posts, and news articles is the #1 threat to maintaining a
good reading habit. It no longer suffices to say you 'need to read'; what you need to read are &lt;em&gt;books&lt;/em&gt;, of
the full-fledged scholarly variety. You can read them on screen or paper, but they should be books proper.
If I had to boil down my general advice on choosing books to a sentence, it would be similar to my advice on 
agent strategy: Read books which help answer questions you have about things you care about, prioritizing insight 
and expertise. Insight can be roughly estimated by "how many books will I rightly feel are no longer worth my time after reading this?",
which is a heuristic for how much other knowledge can be reasonably predicted (compressed) once you learn the book's contents.
A friend once showed me their reading list, and I was surprised to see them paying attention to pop philosophy which 
probably wasn't significantly more thoroughly justified than other pop philosophy. Enduring forms of knowledge
&lt;a href="https://en.wikipedia.org/wiki/Half-life_of_knowledge"&gt;with a long half life&lt;/a&gt; are usually a better use of
your time than fads and froth. &lt;/p&gt;
&lt;p&gt;Another necessary mention to adjust this advice to 2020 is the sudden availability of online learning and
lectures. In the past decade or so we've seen an explosion of &lt;a href="https://en.wikipedia.org/wiki/OpenCourseWare"&gt;open course lectures&lt;/a&gt;,
&lt;a href="https://en.wikipedia.org/wiki/Massive_open_online_course"&gt;free massive online courses&lt;/a&gt; and &lt;a href="https://www.youtube.com/user/mediacccde/"&gt;filmed conference talks&lt;/a&gt;
which provide access to knowledge in a format that would have previously only been available by signing up at a local 
university, community college, or professional gathering. If you're going to be working on your powers as a natural philosopher
(and you &lt;em&gt;are&lt;/em&gt; going to be working on your powers as a natural philosopher, right?) then it would be foolish not to take advantage
of this when it's freely available. For the basics you can use &lt;a href="https://www.khanacademy.org/"&gt;Khan Academy&lt;/a&gt;, for more specialized 
subjects you have &lt;a href="https://www.edx.org/"&gt;edX&lt;/a&gt; and &lt;a href="https://www.coursera.org/"&gt;Coursera&lt;/a&gt;, and there's a wealth of useful talks on
YouTube concerning various aspects of technology, science, etc. &lt;/p&gt;
&lt;h4&gt;Keep A Hobby&lt;/h4&gt;
&lt;blockquote&gt;
Hobby Riding. &lt;br&gt; &lt;br&gt;

No man can spend all his waking hours at one task. The relaxation of a good hobby adds to a man's morale by saving him from the dizzy distortion of the one idea. It matters little what the hobby may be: insect-collecting, photography, horticulture, floriculture, touring, tennis, or trombone-tooting; but if the avocation can have some indirect relation to the day's work, there may be great gain thereby at times. One Oriental missionary experimented for years in budding American fruits onto native branches and at last had the satisfaction of seeing the Chinese steal the buds from his trees that they might grow them in their own gardens.
&lt;/blockquote&gt;

&lt;p&gt;It continues to impress me how with the right perspective even tangential knowledge can
become an important part of strategy and tactics. At one point I was part of a training program
for call center agents. This is about the lowest-status 'skilled labor' imaginable, and on 
that basis I would expect a lot of people in my position to tune out. However, I did not tune
out, figuring that if I was going to devote 8 hours a day to something, it was worth my time to
open myself to whatever the experience can teach me. This sort of 'epistemic posture' is 
important not just for life satisfaction but to get the most out of your life in terms of
accumulated knowledge hour by hour.&lt;/p&gt;
&lt;p&gt;In the case of call centers, my patience was rewarded with an introduction to &lt;a href="https://www.amazon.com/Call-Center-Management-Fast-Forward/dp/0985461101/"&gt;&lt;em&gt;Call Center 
Management On Fast Forward&lt;/em&gt; by Brad Cleveland&lt;/a&gt;. 
This very well-written book takes the reader
through a problem that can basically be described as "how to supply support resources to 
deal with randomly distributed demand without overpaying or undersupplying". As the author
points out this is not a problem limited to asking customers whether they've tried turning
their device on and off again. Plenty of serious, sophisticated organizations have problems
of this shape which they solve poorly because they don't know that &lt;a href="https://en.wikipedia.org/wiki/Queueing_theory"&gt;the science of call center
management&lt;/a&gt; solves it. In one particularly 
shocking moment, I found a friend describing to me a problem they were supposed to be solving 
for their local firefighters and emergency medical services, one which reduced to a failure to 
apply the mathematics involved in call center management. Telling her that a great deal of
the problem had already been solved by telephone companies was quite satisfying.&lt;/p&gt;
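&lt;p&gt;To give a flavor of that science (a sketch of my own, not anything from Cleveland's book, and the staffing numbers are invented for illustration): the classic Erlang C model from queueing theory predicts how likely a caller is to wait given the offered load and the number of agents, which is the kernel of the staffing problem described above.&lt;/p&gt;

```python
from math import exp, factorial

def erlang_c(agents: int, load: float) -> float:
    """Erlang C: probability that an arriving call has to wait.

    load is offered traffic in erlangs: arrival rate times average
    handle time. The model assumes Poisson arrivals, exponential
    handle times, and patient callers; agents must exceed load or
    the queue grows without bound.
    """
    if agents <= load:
        return 1.0
    numerator = (load ** agents / factorial(agents)) * (agents / (agents - load))
    denominator = sum(load ** k / factorial(k) for k in range(agents)) + numerator
    return numerator / denominator

def service_level(agents: int, calls_per_min: float, aht_min: float,
                  target_min: float) -> float:
    """Fraction of calls answered within target_min minutes."""
    load = calls_per_min * aht_min
    if agents <= load:
        return 0.0  # unstable queue: service level collapses
    p_wait = erlang_c(agents, load)
    return 1.0 - p_wait * exp(-(agents - load) * target_min / aht_min)

# Hypothetical queue: 100 calls/hour, 3 minute average handle time,
# target of answering within 20 seconds. How many agents do we need?
for n in range(6, 11):
    print(n, "agents:", round(service_level(n, 100 / 60, 3.0, 20 / 60), 3))
```

&lt;p&gt;One thing a model like this makes vivid is the economy of scale in pooling: the idle capacity needed to absorb random spikes grows slower than the load itself, so larger agent pools deliver much better service at the same occupancy. That is exactly the shape of problem a fire or EMS dispatcher's staffing question shares with a telephone company's.&lt;/p&gt;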
&lt;blockquote&gt;
It was different from any book Harry had ever seen, the edges and corners visibly misshapen; rough-hewn was the phrase that came to mind, like it had been hacked out of a book mine.&lt;br/&gt;&lt;br/&gt;
"What is it?" breathed Harry.&lt;br/&gt;&lt;br/&gt;
"A diary," said Professor Quirrell.&lt;br/&gt;&lt;br/&gt;
"Whose?"&lt;br/&gt;&lt;br/&gt;
"That of a famous person." Professor Quirrell was smiling broadly.&lt;br/&gt;&lt;br/&gt;
"Okay..."&lt;br/&gt;&lt;br/&gt;
Professor Quirrell's expression became more serious. "Mr. Potter, one of the requisites for becoming a powerful wizard is an excellent memory. The key to a puzzle is often something you read twenty years ago in an old scroll, or a peculiar ring you saw on the finger of a man you met only once. I mention this to explain how I managed to remember this item, and the placard attached to it, after meeting you a good deal later. You see, Mr. Potter, over the course of my life, I have viewed a number of private collections held by individuals who are, perhaps, not quite deserving of all that they have -"
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— &lt;i&gt;Harry Potter And The Methods Of Rationality&lt;/i&gt;; &lt;a href="https://www.hpmor.com/chapter/26"&gt;Chapter 26&lt;/a&gt;
&lt;/blockquote&gt;

&lt;p&gt;You would do well to remember this lesson. It would be ever so easy after all for the 
dear professor to 'tune out' as soon as some rich idiot starts showing off their collection
of old books. &lt;/p&gt;
&lt;h3&gt;When Things Get Tough&lt;/h3&gt;
&lt;p&gt;The quiet life of a missionary tells us something about how to deal with routine,
but what about when things intensify beyond that? Unlike Miller's scholar evangelists,
soldiers tend to get put through the wringer. And perhaps nobody has been put through
the wringer quite like the soldiers of the first world war. It is difficult to 
convey to a modern person how bad it actually was; the hell on earth that constituted the 
first world war is the sort of thing that eludes simple description. Part of the problem is 
there are so many elements of horror that it's hard to know where to start. There's the existential
horror of going to war with the expectation you'll be proving your manliness and courage, only
to be squished together like sardines as unprecedented machine gun and mortar fire refutes your
basic importance as human beings. There's the medieval horror of developing &lt;a href="https://en.wikipedia.org/wiki/Trench_foot"&gt;trench foot&lt;/a&gt;
because you've been standing in shallow water so long that your flesh has begun to rot away.
You have the crushing deprivation of siege warfare combined with the anxious paranoia of being
so close to your enemy that you could be sniped at a moment's notice. Mortars turn charming countryside towns 
and villas into an apocalyptic landscape thick with the scent of death. You might not believe me
but I'm underselling it here. Really getting it across all the way would take more words than I
can spare. If you're interested in a comprehensive overview of the war that digs into what it would
have been like to experience, I think Dan Carlin &lt;a href="https://www.dancarlin.com/product/hardcore-history-50-blueprint-for-armageddon-i/"&gt;does a good job on his Hardcore History podcast&lt;/a&gt;, 
which I recommend both as a meditation on human coordination and on the limits of human endurance.&lt;/p&gt;
&lt;h4&gt;Remember The Essentials&lt;/h4&gt;
&lt;p&gt;It's in this context that Ernst Junger's &lt;em&gt;Storm of Steel&lt;/em&gt; becomes interesting. A famously chipper
account of WW1, it's only fair to ask what the heck Junger is doing to maintain himself. It's worth
bearing in mind that soldiers have their comrades with them, and mostly deal with physical difficulties. 
At a loss for words when someone asked me "what it's about", I told them that Storm of Steel is a fever dream
through hell whose protagonist is "really cool with and kind of happy about being in hell". So how
does that happen? Reading the book one gets the impression that morale is maintained through fairly
mundane things. In the first paragraph of the chapter on daily trench life he cites "tea, smoking and 
reading" as good times, which partially highlights Junger's status as an officer. It's a lot easier
to maintain your composure when you get to stay inside a dugout while your batman makes toast. Nevertheless
other antics are described. Some, like the shooting of unexploded shells for target practice, are basically
wholesome, while certain others, like "blowing the heads off" pheasants, raise eyebrows. &lt;/p&gt;
&lt;p&gt;One advantage of Junger's evocative writing is that you can tell what was important to him in this situation
by the language he uses. Food is always described in glowing terms; I lost count of how many times I wrote
a note where Junger talks about the restorative effects of eating. As Junger tells it, despair and ugliness
are washed away by a good meal: 'a good breakfast will hold body and soul together'. One pattern that sticks
out in my notes is what shelling isn't allowed to interrupt. In general, Junger notes that once he'd experienced
it long enough shelling ceased to particularly bother him. And yet shelling seems like an interesting barometer of
what intuitively feels worth risking death over:&lt;/p&gt;
&lt;p&gt;(pg. 76): Shell comes down directly on house while eating in basement, nobody cares. &lt;/p&gt;
&lt;p&gt;(pg. 76): Sitting down in abandoned house to read Le Petit Journal (interesting how you find these moments of peace to read in a quaint house surrounded by destruction). Bombs hit the house while Junger reads and he ignores them. &lt;/p&gt;
&lt;p&gt;(pg. 94): Junger sleeps through a shell leveling the house he's sleeping in the basement of. &lt;/p&gt;
&lt;p&gt;(pg. 121): Wet and cold more effective at breaking resistance than shelling. &lt;/p&gt;
&lt;p&gt;(pg. 134): Junger laughs at civilians pleading with him not to use the upstairs light and attract shelling. &lt;/p&gt;
&lt;p&gt;(pg. 138): Heavy shells up close while crossing challenge the will to live. &lt;/p&gt;
&lt;p&gt;(pg. 168): Junger shakes head at people running around during shelling and takes cover with bottle of jam. &lt;/p&gt;
&lt;p&gt;(pg. 176): Junger 'works on his tan' in crater, listens to night time shelling with unjustified feeling of safety. &lt;/p&gt;
&lt;p&gt;(pg. 177): Junger eats in gazebo as shells fall around him. &lt;/p&gt;
&lt;p&gt;(pg. 185): Junger annoyed by his civilian hosts running around while they're being bombed. &lt;/p&gt;
&lt;p&gt;It seems to me like the moral of such things is that you're a mammal. Ultimately, you can expect to
break faster under conditions of starvation or icy rain than any abstract psychological danger. Many
times I've talked someone through an existential fit only to end up resolving it with a checklist like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Have you eaten today?&lt;/li&gt;
&lt;li&gt;Are you drinking water?&lt;/li&gt;
&lt;li&gt;How much sleep are you getting?&lt;/li&gt;
&lt;li&gt;Have you used a bathroom today?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So on and so forth, until eventually they hit some point they haven't attended to
and do it; dissolving the mood.&lt;/p&gt;
&lt;h4&gt;Symbol Systems and Narrative&lt;/h4&gt;
&lt;p&gt;So far this isn't a very fruitful study. Plenty of other people ate, slept, and 
attended to their animal needs during the 1st world war. Most of them did not feel
moved to write a Greek epic about their experience. Nor did most of them earn a Pour le Merite
(or their country's equivalent). Korzybski advises us to understand a book by studying its
author, so let's try that. &lt;a href="https://www.nytimes.com/1998/02/18/arts/ernst-junger-contradictory-german-author-who-wrote-about-war-is-dead-at-102.html"&gt;The New York Times' obituary&lt;/a&gt;
for Ernst Junger gives us some context for the events of &lt;em&gt;Storm of Steel&lt;/em&gt;. It says he volunteered
to join the army the day Germany mobilized to fight WW1. It also says that he was forced to attend
stifling private schools, and that he had already fled the country to train in France's Foreign Legion
as a way to escape his father. Further:&lt;/p&gt;
&lt;blockquote&gt;
Despite this, some literary historians regard him as primarily a loner who paid homage to an aristocratic ideal and was imbued with a kind of Germanic fatalism. Like earlier German writers, among them Heinrich von Kleist and Friedrich Hebbel, he was fascinated with death and heroism. He was influenced by the nihilist streak of Friedrich Nietzsche, the end-of-the-world ideas of Oswald Spengler and the formalistic philosophy of Hegel.
&lt;/blockquote&gt;

&lt;p&gt;We can infer from all this that it was probably not any particular love of Germany that motivated Junger.
Rather, Junger was motivated by war itself, and an outlier level of motivation at that. The idea of joining
the army as an escape from social regimentation seems strange. Perhaps it was the social and psychological
regimentation of the private school that drove Junger nuts? I read some of a biography about Nietzsche,
and was stunned by the sheer level of effort demanded by the Prussian school system. It demanded as much
time and energy as it could from pupils, which seems to have been almost all of it. In that context it
might have seemed very freeing to Junger to give up his body to the French Legion so that his mind could
be unshackled. Miller's advice to focus on filtering before the journey reasserts itself. &lt;/p&gt;
&lt;p&gt;But that leaves the essential question unanswered: Why was Junger so enamored with war in the first place?
Part of the answer is that Junger was raised in a culture that hadn't experienced WW1 yet, so he believed
traditional heroic ideas about war. Even if Junger wasn't inspired by nationalism per se, the abstract
idea of war heroism has a long lineage. Battle heroes are remembered in songs and stories, to fight with 
valor in a society that reifies war is to enter into the realm of the immortals. The words that Junger uses
to describe his experience, whatever the German equivalents of "exalted" and "joy" and "glowing" are, remind
me very much of &lt;a href="http://ramakrishnavivekananda.info/gospel/introduction/god_intoxicated.htm"&gt;the ways people describe religious experience&lt;/a&gt;.
In F. Scott Fitzgerald's description of the fighter's spirit, he takes them to be middle-class crusaders:&lt;/p&gt;
&lt;blockquote&gt;
“See that little stream — we could walk to it in two minutes. It took the British a month to walk to it — a whole empire walking very slowly, dying in front and pushing forward behind. And another empire walked very slowly backward a few inches a day, leaving the dead like a million bloody rugs. No Europeans will ever do that again in this generation.”
&lt;br&gt; &lt;br&gt;
“Why, they’ve only just quit over in Turkey,” said Abe. “And in Morocco —”
&lt;br&gt; &lt;br&gt;
“That’s different. This western-front business couldn’t be done again, not for a long time. The young men think they could do it but they couldn’t. They could fight the first Marne again but not this. This took religion and years of plenty and tremendous sureties and the exact relation that existed between the classes. The Russians and Italians weren’t any good on this front. You had to have a whole-souled sentimental equipment going back further than you could remember. You had to remember Christmas, and postcards of the Crown Prince and his fiancée, and little cafés in Valence and beer gardens in Unter den Linden and weddings at the mairie, and going to the Derby, and your grandfather’s whiskers.”
&lt;br&gt; &lt;br&gt;
“General Grant invented this kind of battle at Petersburg in sixty-five.”
&lt;br&gt; &lt;br&gt;
“No, he didn’t — he just invented mass butchery. This kind of battle was invented by Lewis Carroll and Jules Verne and whoever wrote Undine, and country deacons bowling and marraines in Marseilles and girls seduced in the back lanes of Wurtemburg and Westphalia. Why, this was a love battle — there was a century of middle-class love spent here. This was the last love battle.”
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— F. Scott Fitzgerald, &lt;i&gt;Tender Is The Night&lt;/i&gt;
&lt;/blockquote&gt;

&lt;p&gt;Curiously enough, however, Junger doesn't seem to make mention of or call back to any ancestors. He doesn't talk about
the traditions of his forefathers (and certainly his own father mightily disapproved of his participation in war).
Junger talks of his purpose having been "used up" late into the war, but it's not clear what that purpose was. &lt;a href="http://www.historymatters.group.shef.ac.uk/ernst-junger-practitioner-killing-world-war/"&gt;This 
analysis of Junger's war diary&lt;/a&gt;
almost seems to give the impression he was motivated primarily by fantasies of mutual combat:&lt;/p&gt;
&lt;blockquote&gt;
I have experienced a great deal in this greatest of wars, but I’ve so far been denied the experience I’ve been aiming for: the charge and clash of the infantry. To zero in on the enemy, to face him man on man; that is quite different to this perpetual artillery war. (185)
&lt;/blockquote&gt;

&lt;p&gt;This is reiterated in Storm of Steel, when Junger discusses his attitude towards his opponents:&lt;/p&gt;
&lt;blockquote&gt;
Throughout the war, it was always my endeavour to view my opponent without
animus, and to form an opinion of him as a man on the basis of the courage he showed.
I would always try and seek him out in combat and kill him, and I expected nothing
else from him. But never did I entertain mean thoughts of him. When prisoners fell into
my hands, later on, I felt responsible for their safety, and would always do everything
in my power for them.
&lt;/blockquote&gt;

&lt;p&gt;This passage is illustrative, because it implies a schema of interpretation which is psychologically
stabilizing. Many people who fought in the first world war had their worldview shattered by it.
They had gone in expecting to be heroes and martyrs and champions, instead they were cannon fodder.
In time they came to feel deeply betrayed by their society for putting them through that, and after
the war all systems of traditional meaning seemed to lose their power. The post-war years were the years
of Dada and new ideas like Anarchism, Communism, and Fascism &lt;a href="http://george-orwell.org/Homage_to_Catalonia/0.html"&gt;becoming significant on the world stage&lt;/a&gt;. Their
experience of WW1 was savage unrestrained bloodlust, a maelstrom of murder which ripped away death's disguise
to reveal the gaping maw of an infinite void underneath. But that isn't how Junger felt about it. To him this
is still part of the plan, on some level he still believes in the essential cosmic justness and fairness of 
his situation. He hasn't been cheated — no, this slaughter is part of the game.&lt;/p&gt;
&lt;p&gt;In the foreword to my edition of &lt;em&gt;Storm of Steel&lt;/em&gt;, the translator writes that Junger took his experience in 
WW1 as 'sacred'. I don't doubt this, because Junger is very much towards the center of Fitzgerald's character
sketch. He was a crusader, his religion the Grecian classicism which was (and continues to be) so 
popular with young educated men. It is difficult to tell if he accepted the dominance of artillery in
warfare because his faith was so much deeper than that of his comrades, or because he was a keen observer
that adapted himself to the situation as it was rather than what he wished it to be. In reality I suspect
it was a savvy combination of the two: updating on the reality of the situation with respect to the experience
of mutual combat he sought. Junger admitted that &lt;em&gt;Storm of Steel&lt;/em&gt; is meant to ape the form of a Greek ballad,
and that one of the questions explored in writing it was whether you can have Achilles with guns. &lt;/p&gt;
&lt;p&gt;None of this is directly stated in the book itself. The dedication at the start, "For the fallen",
gives some hint as to why. Ultimately &lt;a href="https://www.thelastrationalist.com/memento-mori-said-the-confessor.html"&gt;the immortality being sought&lt;/a&gt;
by Junger didn't center on any personal political opinion or national ideology. Rather the book is a
dedication to war itself, and its form omits everything which is not the war. The stormtrooper is an 
anonymous hero, beyond his name and nominal personality Junger purges all aspects of his internal
universe which are not directly germane to war from the text. The intent of the gesture is perhaps that
he could represent many people who participated in WW1. This is supported by the book starting life
as a small print run to distribute to his fellow veterans. &lt;/p&gt;
&lt;p&gt;This sense of contributing to an enduring artifact such as a myth or monument is
an important foundation for human motivation beyond our raw animal needs. It's one
of the reasons I'm uncomfortable with the word 'transhumanism', to be in a state of
transition implies a fleeting thing. Max More's notion of Extropy is more enduring,
and recalls the spirit of the long lineage of alchemy. What is the point of focusing
on physical and mathematical laws, warrant and necessity, or even truth itself if your philosophy
is centered on ad-hoc fixations? We all have our preferences and life experience, but ideally
it should be possible to derive the core of your philosophy from the same fixed point in 
concept-space without them. Achieving this already takes you most of the way to true symbolic 
immortality, as until tyrants manage to unmake liberty and free thought entirely your will
shall rise again and again from the pool of philosophical reflection. No matter how many books
burnt, heretics killed, recantations coerced, it will still be possible to converge on your 
ideas through earnest truthseeking. &lt;/p&gt;
&lt;p&gt;Omnia nodis arcanis connexa quiescunt.&lt;/p&gt;</content></entry><entry><title>Necessity and Warrant</title><link href="https://www.thelastrationalist.com/necessity-and-warrant.html" rel="alternate"></link><published>2020-03-30T00:00:00+02:00</published><updated>2020-03-30T00:00:00+02:00</updated><author><name>namespace</name></author><id>tag:www.thelastrationalist.com,2020-03-30:/necessity-and-warrant.html</id><summary type="html">&lt;blockquote&gt;
&lt;i&gt;Literary warrant&lt;/i&gt;, a concept introduced by Wyndham Hulme in 1911, has the status of a principle. A subprinciple of the principle of representation, it enjoins that the vocabulary of a subject language be empirically derived from the literature it is intended to describe. This means that a literature must be …&lt;/blockquote&gt;</summary><content type="html">&lt;blockquote&gt;
&lt;i&gt;Literary warrant&lt;/i&gt;, a concept introduced by Wyndham Hulme in 1911, has the status of a principle. A subprinciple of the principle of representation, it enjoins that the vocabulary of a subject language be empirically derived from the literature it is intended to describe. This means that a literature must be determined. For Hulme, the language in question was the Library of Congress Classification (LCC), and the literature that served as warrant were the books housed in the Library of Congress. For a discipline-specific language, the literature might be defined as the canonical texts in the discipline or as the core set of documents of the discipline, as this is determined by citation frequency. Once the literature of a discipline is defined, then expressions in it indicative of aboutness become candidates for inclusion in the vocabulary of the language.
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Elaine Svenonius, &lt;i&gt;The Intellectual Foundation Of Information Organization&lt;/i&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://thelastrationalist.com/on-necessity.html"&gt;In the previous post&lt;/a&gt;
I discussed the concept of phenomenological necessity. Reality has a consistent
ruleset on which we can base reasoning. Our expectations about reality should be
based on rules and sense-data derived from reality. The extreme consistency of physics
is one of the most important revelations of the 20th century: while our world is dizzying
in its complexity and &lt;a href="https://slatestarcodex.com/2015/01/11/the-phatic-and-the-anti-inductive/"&gt;anti-inductive in its presentation&lt;/a&gt;,
the underlying principles are comparatively simple: &lt;/p&gt;
&lt;blockquote&gt;
The vision I got from Democritus was of a God who was single-mindedly obsessed with enforcing a couple of rules about certain types of information you are not allowed to have under any circumstances. Some of these rules I’d already known about. You can’t have information from outside your light cone. You can’t have information about the speed and position of a particle at the same time. Others I hadn’t thought about as much until reading Democritus. Information about when a Turing machine will halt. Information about whether certain formal systems are consistent. Precise information about the quantum state of a particle. The reason God hasn’t solved world poverty yet is that He is pacing about feverishly worried that someone, somewhere, is going to be able to measure the quantum state of a particle too precisely, and dreaming up new and increasingly bizarre ways He can prevent that from happening.
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Scott Alexander, &lt;i&gt;&lt;a href="https://slatestarcodex.com/2014/09/01/book-review-and-highlights-quantum-computing-since-democritus/"&gt;Book Review and Highlights: Quantum Computing Since Democritus&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;blockquote&gt;
Okay, Bayes-Goggles back on. Are you really going to believe that large parts of the wavefunction disappear when you can no longer see them? As a result of the only non-linear non-unitary non-differentiable non-CPT-symmetric acausal faster-than-light informally-specified phenomenon in all of physics? Just because, by sheer historical contingency, the stupid version of the theory was proposed first?
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Eliezer Yudkowsky, &lt;i&gt;&lt;a href="https://www.readthesequences.com/The-Dilemma-Science-Or-Bayes"&gt;The Dilemma: Science or Bayes?&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;p&gt;The complexity that arises from these 'simple' rules leads to uncertainty, and that uncertainty makes the world
unpredictable and difficult for our minds to make sense of. We might imagine ourselves in a maze, carefully mapping
our environment. As better patterns come along that predict the maze with increasing accuracy and economy of expression
 we manage to put more and more of the maze into a smaller and smaller representation. But as the representation
becomes smaller, the work necessary to unpack it into the territory we're interested in increases. &lt;a href="https://www.xkcd.com/793/"&gt;It's difficult to
model most real world systems just by knowing their underlying physics&lt;/a&gt;. The complexity
of how reality expresses itself forces us to rely on abstractions, mental models, and approximations. There's no easy
way to know everything about anything. This means that &lt;em&gt;which&lt;/em&gt; questions and forms of knowledge we choose to acquire is
just as important as any reasoning techniques we use to deal with them. &lt;/p&gt;
&lt;p&gt;A good example of this can be found in the book &lt;em&gt;Evangelism Explosion&lt;/em&gt; by D. James Kennedy. Written with the goal of making
modern Christian churches see exponential growth, the author shares (at length and in detail) exactly how he goes about witnessing
Christ to others and teaching Christians to witness. What's striking about it is that before he attempts to share the gospel, he
makes a point of &lt;a href="https://evangelismexplosion.org/the-two-question/"&gt;asking two questions&lt;/a&gt; to set up the conversation: Whether the
target knows they'll be with god after they die and what they'd say if god asked them why he should let them in. The thing I find
so interesting about this is that &lt;a href="https://en.wikipedia.org/wiki/Framing_(social_sciences)"&gt;he opens with a Christian frame&lt;/a&gt;, and
mostly seems to ignore the possibility that you might encounter an atheist or even a Hindu.&lt;/p&gt;
&lt;p&gt;The script itself is simple, seemingly just a positive presentation of the basic
idea that eternal life can be yours if you give up your mind and body to Christ.
I found this so shocking that I stopped reading and asked an apostate if this stuff really worked, and he informed me that it did.
I'd always thought of Evangelists as being out to convert non-Christians, but if I'm to take what I read from them seriously, the 
goal is mostly to take weak Christian theists and turn them into strong Christian theists. By the time you're willing to engage with
a question like "Will you be with god in heaven after you die?" you're already most of the way to being a Christian. Imagine this same
conversation with an atheist, who would say "Whoa, whoa, back up, who says this 'god' character exists? Why do you think that?" If engaging
with someone's questions about god takes you most of the way to being Christian, it should be obvious that letting other people ask you
questions without justification takes you most of the way to believing whatever they want you to believe.&lt;/p&gt;
&lt;p&gt;Eliezer Yudkowsky discusses this problem frequently in his &lt;em&gt;Rationality: AI to Zombies&lt;/em&gt;. Most notable is his essay
on &lt;a href="https://www.readthesequences.com/Privileging-The-Hypothesis"&gt;privileging the hypothesis&lt;/a&gt;, where he tries to get across
a similar idea. There Eliezer imagines numbering every possible hypothesis in a scenario from one to some very large number 
like 4 billion. He then uses the basic principles of &lt;a href="https://en.wikipedia.org/wiki/Information_theory"&gt;information theory&lt;/a&gt; to show
that to accept a question without justification is to skip over the vast majority of possibilities. Remember that only one hypothesis
is the truth, so to do this without care risks excluding the correct answer from consideration before we've even begun to analyze
things. Unfortunately, I didn't understand what he meant on my first read, because I wasn't very familiar with computer science. &lt;/p&gt;
&lt;p&gt;This idea is of extreme importance, so it's worth explaining in detail. To keep it simple, let's go back to that notion of numbering
every hypothesis above noise from one to 4 billion. It takes a certain amount of &lt;em&gt;information&lt;/em&gt; to represent this number. The standard
unit of information in computer science is of course the binary digit (bit), a one or a zero. A single bit has two possibilities, 1 or 0.
Two bits in sequence can be combined four ways: 00, 01, 10, 11. If we add a third bit, you will find there are eight combinations. In fact,
each added bit allows us to represent twice the number of combinations. By the time we have put together 32 bits, we can represent about 4 billion
possibilities. As we gain information about a problem, we (hopefully) narrow in on the correct point in this sea of hypotheses. To ask a question
that narrows us down to, say, 10 possibilities, is to assert that we have already collected the vast majority of information, or the majority of
bits necessary to represent our choice in this domain. If you consider a question merely because the question has been asked, you are allowing other
people to choose most of your beliefs for you.&lt;/p&gt;
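&lt;p&gt;The arithmetic above can be sketched in a few lines of code (a toy illustration; the function name and the numbers are mine, not from the essay):&lt;/p&gt;

```python
import math

def bits_to_specify(n: int) -> int:
    """Bits needed to single out one hypothesis among n equally likely ones."""
    return math.ceil(math.log2(n))

# One bit distinguishes 2 possibilities, two bits 4, three bits 8.
assert bits_to_specify(2) == 1
assert bits_to_specify(4) == 2
assert bits_to_specify(8) == 3

# 32 bits are enough to index 4 billion hypotheses (2**32 = 4,294,967,296).
assert bits_to_specify(4_000_000_000) == 32

# A question that narrows 4 billion hypotheses down to 10 has implicitly
# supplied all but log2(10), about 3.3, of those ~32 bits.
supplied = math.log2(4_000_000_000) - math.log2(10)
print(f"bits implicitly supplied by the question: {supplied:.1f}")
```

Accepting such a question at face value means letting someone else hand you roughly 29 of your 32 bits of belief.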
&lt;p&gt;We can call this notion of whether or not a question is justified &lt;em&gt;warrant&lt;/em&gt;, in the same sense that the police 
need a &lt;em&gt;warrant&lt;/em&gt; before they can search a US citizen's house. If necessity asks "Is this a reasonable expectation?", warrant asks "Why are(n't) we 
considering this question?". The &lt;a href="https://en.wikipedia.org/wiki/Just-world_hypothesis"&gt;just world fallacy&lt;/a&gt;,
&lt;a href="https://www.thelastrationalist.com/memento-mori-said-the-confessor.html"&gt;denial of death&lt;/a&gt;, and 
&lt;a href="https://en.wikipedia.org/wiki/Postmodernism"&gt;postmodernism&lt;/a&gt; are failures of necessity. &lt;a href="https://www.readthesequences.com/Privileging-The-Hypothesis"&gt;Privileging the hypothesis&lt;/a&gt;, 
&lt;a href="https://en.wikipedia.org/wiki/Confirmation_bias"&gt;confirmation bias&lt;/a&gt;, and &lt;a href="https://en.wikipedia.org/wiki/Base_rate_fallacy"&gt;the base rate fallacy&lt;/a&gt; are failures of warrant. Together, warrant and necessity form the "punch and kick" of rationality, basic foundational moves which must be 
practiced and mastered before it's possible to reliably execute more advanced technique.&lt;/p&gt;
&lt;p&gt;Necessity and warrant go together, because to get good at using necessity we have to
manage uncertainty, and usefully managing uncertainty forces us to get good at warrant. For example, in
&lt;a href="https://en.wikipedia.org/wiki/The_Good_Judgment_Project"&gt;the tournaments run by Philip Tetlock to see who is best at predicting the future&lt;/a&gt;
the winners tend to explore much more of the hypothesis space than the mediocre. Instead of examining one
possibility and prematurely narrowing things down to a handful of outcomes (all of which might be wrong)
they look at several possible outcomes and try to weigh their probability against each other. This sort of
thorough exploration helps us become more justified in our beliefs. Evaluating the relative likelihood of a
set of possibilities moves us beyond focusing on individual facts or ideas, and facilitates the creation of
consistent mental models of reality which can begin to suggest necessary conclusions.&lt;/p&gt;
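&lt;p&gt;The forecasting habit described here can be sketched as a toy Bayesian update over several candidate outcomes (all priors and likelihoods below are invented for illustration):&lt;/p&gt;

```python
# Weigh several hypotheses against each other instead of examining one in
# isolation. The numbers are made up purely for illustration.
priors = {"outcome A": 0.5, "outcome B": 0.3, "outcome C": 0.2}
# How strongly a new piece of evidence is expected under each hypothesis.
likelihoods = {"outcome A": 0.1, "outcome B": 0.6, "outcome C": 0.3}

# Bayes' rule over the whole hypothesis set: multiply, then renormalize.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for h, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {p:.2f}")
```

Because every hypothesis keeps a weight, the evidence can shift belief toward one outcome without ever forcing a premature commitment to it.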
&lt;p&gt;Consider again the earlier questions about god. Christianity claims that those who don't give up their
mind and body to Christ will be eternally damned. This is a pretty scary idea, but it becomes a bit less
scary when we consider other theologies claiming much the same thing. Even in the Abrahamic family alone
we have Judaism, Christianity, and Islam which all mutually claim followers of the other two will meet their
end in the lake of fire. Just by considering all of the major world religions, we find the framing
attack James Kennedy uses to open his pitch far less compelling.&lt;/p&gt;
&lt;p&gt;The failure of warrant is also behind one of the more important design flaws in modern republics:
focusing control mechanisms on the &lt;em&gt;consideration&lt;/em&gt; of new laws rather than the &lt;em&gt;proposal&lt;/em&gt; of new laws.
In theory a republic is meant to be kept in check by allowing competing interests at the table, which
prevents one segment of society from unfairly appropriating state power to enrich and elevate itself
over others. The &lt;a href="https://www.oregonlegislature.gov/citizen_engagement/Pages/How-an-Idea-Becomes-Law.aspx"&gt;typical implementation of this&lt;/a&gt; 
assumes that ideas for new laws appear because of "concerned citizens", and the process doesn't focus too
much on the origins or justifications for laws. In the context of information theory, this is a recipe for
disaster. If simply proposing a question takes us most of the way to saying "yes", then in practice what
we've done with this ruleset is leave most of the lawmaking power unregulated and uncontrolled.
Bad actors can re-propose their unpopular initiatives until they get the right
set of circumstances for them to pass.&lt;/p&gt;
&lt;h2&gt;Principles Of Warrant&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://www.greaterwrong.com/posts/pC74aJyCRgns6atzu/meta-discussion-from-circling-as-cousin-to-rationality#comment-c7Xt5AnHwhfgYY67K"&gt;The last time I talked about warrant&lt;/a&gt;
I avoided precisely defining what makes a question worth considering, pushing the matter out to social consensus.
But the worthiness of questions exists independently of social approval. To think clearly even when
those around us don't requires some kind of objective standard. In her &lt;em&gt;Intellectual Foundation Of Information Organization&lt;/em&gt;,
Elaine Svenonius deals with questions of this sort often in the context of library science. To figure out which features
should be part of a bibliographic record, she lays out &lt;em&gt;principles of warrant&lt;/em&gt; and uses them as criteria to 
justify the inclusion or exclusion of information. We can do something similar, but because 'asking questions' is such
a broad thing, it's not really possible to write out a complete set of principles. Rather, I suspect that &lt;a href="https://en.wikipedia.org/wiki/Pareto_principle"&gt;the Pareto principle&lt;/a&gt;
is in play and expecting &lt;em&gt;any justification&lt;/em&gt; for questions does most of the work for us. Still, there are some principles
of warrant that come to mind:&lt;/p&gt;
&lt;h3&gt;Principle Of Confusion&lt;/h3&gt;
&lt;p&gt;If two or more trustworthy models predict contradictory outcomes, &lt;a href="https://www.readthesequences.com/Noticing-Confusion-Sequence"&gt;you are confused about a subject&lt;/a&gt;
and should be asking what the source of contradiction is. &lt;/p&gt;
&lt;h3&gt;Principle Of Priors&lt;/h3&gt;
&lt;p&gt;When we expect something to be true and find that observation or inference implies it isn't,
&lt;a href="https://www.readthesequences.com/Your-Strength-As-A-Rationalist"&gt;we should notice we're confused&lt;/a&gt; and ask questions.&lt;/p&gt;
&lt;h3&gt;Principle Of Pain&lt;/h3&gt;
&lt;p&gt;Empirical observation of problems is a good reason to ask questions about why they occur and how they can be
stopped.&lt;/p&gt;
&lt;h3&gt;Principle Of Relation&lt;/h3&gt;
&lt;p&gt;If you're already asking a question, it's often warranted to ask questions which are close by in question-space.
Be wary, however, that the principle of relation is fairly weak, and &lt;a href="https://en.wikipedia.org/wiki/Six_Degrees_of_Kevin_Bacon"&gt;Six Degrees of Kevin Bacon&lt;/a&gt;
means that it can be used adversarially to shoehorn in discussion of topics which wouldn't otherwise come up.&lt;/p&gt;
&lt;h3&gt;Principle Of Balance&lt;/h3&gt;
&lt;p&gt;When you ask a question, it's also often useful to ask its inverse.&lt;/p&gt;
&lt;h3&gt;Principle Of Exhaustion&lt;/h3&gt;
&lt;p&gt;If a question can be interpreted as belonging to a meaningful category, asking other questions in the same category can be
useful for comparing answers, among other things.&lt;/p&gt;
&lt;p&gt;This list is obviously not exhaustive, and memorizing it wouldn't be a good strategy for getting good at warrant. 
Using warrant in practice is more like imagining some sense-data you would like to see. For example, if you're 
&lt;a href="https://guides.library.harvard.edu/c.php?g=310271&amp;amp;p=2071512"&gt;reviewing the literature&lt;/a&gt; on the procedure
to induce Haitian voodoo spirit possession, what you're really asking is a question like "Where in the world would
I look to find information on this? Who would know about it? Where has information been left behind by the presence of
this phenomenon?". You might try anthropological accounts of voodoo practices, or hit up YouTube to see if an inconsiderate
tourist has filmed the proceedings (or a nosy anthropologist). Getting good at thinking about where a phenomenon would leave
traces in the world lets you remove &lt;a href="https://en.wikipedia.org/wiki/Degrees_of_freedom"&gt;degrees of freedom&lt;/a&gt; from your beliefs
until they're tightly constrained by empirical observation; that is to say they have become thoroughly justified.&lt;/p&gt;</content></entry><entry><title>On Necessity</title><link href="https://www.thelastrationalist.com/on-necessity.html" rel="alternate"></link><published>2020-03-23T00:00:00+01:00</published><updated>2020-03-23T00:00:00+01:00</updated><author><name>namespace</name></author><id>tag:www.thelastrationalist.com,2020-03-23:/on-necessity.html</id><summary type="html">&lt;blockquote&gt;There are two ways to slide easily through life: Namely, to believe everything, or to doubt everything; both ways save us from thinking.
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Alfred Korzybski, &lt;i&gt;&lt;a href="https://www.gutenberg.org/files/25457/25457-pdf.pdf"&gt;The Manhood of Humanity&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;p&gt;For most of human history, cultures and individuals held to the idea that there was one truth that could be discovered …&lt;/p&gt;</summary><content type="html">&lt;blockquote&gt;There are two ways to slide easily through life: Namely, to believe everything, or to doubt everything; both ways save us from thinking.
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— Alfred Korzybski, &lt;i&gt;&lt;a href="https://www.gutenberg.org/files/25457/25457-pdf.pdf"&gt;The Manhood of Humanity&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;p&gt;For most of human history, cultures and individuals held to the idea that there was one truth that could be discovered or divined. While different tribes and traditions might disagree strongly on whose truth was correct, no one particularly objected to the idea that there was a truth to the world which you either had or did not have. Both the priest and the shaman believed their worldviews were correct, but neither one of them put stock in the notion that they were both somehow correct. Contradictory statements could not both be true; someone was right and someone was wrong. In the contemporary era this has begun to change, and not for the better:&lt;/p&gt;
&lt;blockquote&gt;
Eclecticism may be defined as the practice of choosing apparently irreconcilable doctrines from antagonistic schools and constructing therefrom a composite philosophic system in harmony with the convictions of the eclectic himself. Eclecticism can scarcely be considered philosophically or logically sound, for as individual schools arrive at their conclusions by different methods of reasoning, so the philosophic product of fragments from these schools must necessarily be built upon the foundation of conflicting premises. Eclecticism, accordingly, has been designated the layman's cult. In the Roman Empire little thought was devoted to philosophic theory; consequently most of its thinkers were of the eclectic type. Cicero is the outstanding example of early Eclecticism, for his writings are a veritable potpourri of invaluable fragments from earlier schools of thought. Eclecticism appears to have had its inception at the moment when men first doubted the possibility of discovering ultimate truth. Observing all so-called knowledge to be mere opinion at best, the less studious furthermore concluded that the wiser course to pursue was to accept that which appeared to be the most reasonable of the teachings of any school or individual. From this practice, however, arose a pseudo-broadmindedness devoid of the element of preciseness found in true logic and philosophy.
&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; — Manly P. Hall, &lt;i&gt;&lt;a href="https://archive.org/details/TheSecretTeachingsOfAllAgesManlyHall/"&gt;The Secret Teachings Of All Ages&lt;/a&gt;&lt;/i&gt;
&lt;/blockquote&gt;

&lt;p&gt;Eclecticism and its descendant postmodernism raise the idea that the ultimate truth of the world can never really be known. The world is subjective down to its roots, reality is just like, your opinion man. This has had disastrous effects on the wider pursuit of truth. Hard science has been inundated by limp-wristed subjectivity and the notion of a plurality of contradictory truths all being correct has become the norm across much of the humanities. How could a proper art and science of human engineering ever come out of this potpourri of nonsense?&lt;/p&gt;
&lt;p&gt;You can’t design a bridge without actually knowing the tensile strength of steel and the compressive strength of concrete, these facts are not open to interpretation. Designing a society is no different and pretending that all viewpoints are equal, that all truths are just as valid as one another, is a dangerous precedent that has brought development of the humanities to a screeching halt. If we truly want to advance the art of rationality, this notion must be stamped out with extreme prejudice. &lt;/p&gt;
&lt;p&gt;This is easily the most important concept that Eliezer discusses in The Sequences. Reality actually exists and has properties you can determine through study and experimentation. Conclusions follow from their premises and it’s unreasonable to expect a plurality of truths. Our universe is consistent and your understanding of the pieces should fit together. The truth isn’t just your opinion. There is one truth and you find it or you don’t:&lt;/p&gt;
&lt;blockquote&gt;
But it was Probability Theory that did the trick. Here was probability theory, laid out not as a clever tool, but as The Rules, inviolable on pain of paradox. If you tried to approximate The Rules because they were too computationally expensive to use directly, then, no matter how necessary that compromise might be, you would still end up doing less than optimal. Jaynes would do his calculations different ways to show that the same answer always arose when you used legitimate methods; and he would display different answers that others had arrived at, and trace down the illegitimate step. Paradoxes could not coexist with his precision. Not an answer, but the answer.
&lt;/blockquote&gt;

&lt;p&gt;The universe operates on rules, and the rules continue to apply to you whether you believe in them or not. The rules are not optional, they are not open to interpretation, they do not care about your feelings. The universe exists, and it cannot be negotiated around. That’s not fair? Doesn’t matter. But that’s unjust! Doesn’t matter. But—&lt;/p&gt;
&lt;blockquote&gt;
What can a twelfth-century peasant do to save themselves from annihilation? Nothing. Nature’s little challenges aren’t always fair. When you run into a challenge that’s too difficult, you suffer the penalty; when you run into a lethal penalty, you die. That’s how it is for people, and it isn’t any different for planets. Someone who wants to dance the deadly dance with Nature does need to understand what they’re up against: Absolute, utter, exceptionless neutrality.
&lt;/blockquote&gt;

&lt;p&gt;Eliezer discusses this mostly in the context of physics and Bayesian reasoning. If conclusions follow from their premises, and the premises always lead to the same conclusion, we can say that conclusion is &lt;em&gt;necessary&lt;/em&gt;. Valid methods of thinking will reliably produce the same answer (modulo some noise in real world thinkers) given the same priors and evidence. Two and two make four, matter cannot be created or destroyed, the probability of two independent events occurring together can never exceed the probability of either one alone. Curiously, necessity is discussed frequently in The Sequences but never given a name. This is to their detriment, as necessity is one of the hardest concepts in rationality to master.&lt;/p&gt;
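&lt;p&gt;The probability claim in that list can even be checked mechanically; a trivial sketch (mine, not the essay's) over randomly drawn marginals:&lt;/p&gt;

```python
import random

# For independent events, P(A and B) = P(A) * P(B), which can never exceed
# either marginal probability on its own. The conclusion is necessary: it
# holds no matter which marginals we draw.
random.seed(0)
for _ in range(1000):
    p_a, p_b = random.random(), random.random()
    joint = p_a * p_b
    assert joint <= p_a and joint <= p_b
print("the joint probability never exceeded either marginal")
```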
&lt;p&gt;Most basic failures of rationality are some form of refusal of necessity. This is unsurprising, because necessity is the dream killer. As children &lt;a href="https://www.reed.co.uk/career-advice/revealed-what-your-kids-really-want-to-be-when-they-grow-up/"&gt;we dream of being veterinarians, astronauts and mad scientists&lt;/a&gt;, not the lawyers, accountants, and grocery store clerks we actually grow up to be. We’re told all sorts of things about the world and ourselves that we don’t want to hear, so we deny them. Everyone else might have to get a job but not &lt;em&gt;me&lt;/em&gt;, when I’m older I’ll eat &lt;em&gt;all the candy I want&lt;/em&gt;, I’m not going to die. Over time, this reflex becomes automatic and we stop even noticing the denial. &lt;/p&gt;
&lt;p&gt;For example, I recently saw a discussion of necessity on a ‘rationalist’ forum where someone pointed out that it was impossible to fly unassisted. A Buddhist replied that it was only impossible to fly unassisted in consensus reality. They argued that it’s possible to fly in a lucid dream, so their real complaint is that they can’t do it where it will affect others. The entire process of thought that is capable of generating this objection betrays an extreme level of dissociation, where the default is a personal, private universe separated from the underlying physics which allow it to exist. That dream world is the thing necessity takes away from us, what people are afraid of losing by &lt;a href="https://www.greaterwrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the-merely-real"&gt;restricting themselves to what is there to be experienced in reality&lt;/a&gt;. The refusal of necessity is synonymous with the refusal of reality, which Buddhism provides a framework for. In Buddhism, the aspiring Arhat dismantles their attachments to the material world and turns their survival hardware into a substrate to run a personal paradise for a certain amount of time before being annihilated into a welcomed nothingness. This is one way of dealing with the problem of necessity, but it’s not one we can sanely endorse and still consider ourselves rationalists.&lt;/p&gt;
&lt;p&gt;Our private symbolic universe is not the only thing we’re looking to guard by refusing necessity. Often we resent the effort we’d have to go through if we took our beliefs seriously, supported by an implicit meta-belief that life should never be too hard. In many ways, a 1st world childhood is a very bad introduction to life because it sets you up for a lifetime of unreasonable expectations. 
Conditions are so good that it becomes easy to imagine in our childish naivete that life can be an indefinite sleepwalk through an introvert’s dream world or a never ending play session in an extrovert’s favorite field. Eventually we are pulled away from these delusions, but the expectations set by that tutorial stay with us for life. Bennett Foddy &lt;a href="https://www.youtube.com/watch?v=IO6ouSMm7Uc"&gt;writes about the process&lt;/a&gt; of building a game meant to show players their unreasonable expectations about challenge and difficulty:&lt;/p&gt;
&lt;blockquote&gt;
Anyway when you start Sexy Hiking, you’re standing next to this dead tree that blocks the way to the entire rest of the game. It might take you an hour to get over that tree, and a lot of people never got past it, you prod and you poke at it exploring the limits of your reach and strength trying to find a way up and over. And there’s a sense of truth in that lack of compromise. Most obstacles in video game worlds are fake, you can be completely confident in your ability to get through them, once you have the correct method or the correct equipment or just by spending enough time. In that sense, every pixelated obstacle in Sexy Hiking is real.
&lt;br&gt;&lt;br&gt;
. . .
&lt;br&gt;&lt;br&gt;
A funny thing happened to me as I was building this mountain. I’d have an idea for a new obstacle, and I’d build it, test it, and I would usually find it was unreasonably hard. But I couldn’t bring myself to make it any easier, it already felt like my inability to get past the new obstacle was my fault as a player rather than as the builder.
&lt;/blockquote&gt;

&lt;p&gt;I heard a story from the recent COVID-19 outbreak that illustrates this well. A man living with relatives noticed they were still buying bananas from the grocery. When he inquired about whether they’d been washed to prevent the spread of COVID-19, he got a very strange answer. They had not been washed, but that was okay because bananas had a skin on them. The relatives insisted he should peel the banana and then carefully avoid letting the outside peel touch the meat of the fruit on the inside. So long as he didn’t touch it with his fingers then he wouldn’t be putting his face in contact with the virus. This is the sort of thing you think is okay when you aren’t taking ideas seriously. He wasn’t very hungry for bananas after that. &lt;/p&gt;
&lt;p&gt;At the core of the difficulty people have with necessity is uncertainty. It’s obvious that two and two make four, but when things become less obvious than that, &lt;a href="https://www.greaterwrong.com/posts/zFuCxbY9E2E8HTbfZ/perpetual-motion-beliefs"&gt;when they get abstract or there’s incomplete information suddenly magical thinking gets introduced&lt;/a&gt;. Our biases take over, and whether in the direction of pessimism or optimism our beliefs become hallucinations premised on a smaller and smaller proportion of evidence to analysis and speculation. What Eliezer tries to get across with his insistence on a Bayesian foundation for epistemology is that your beliefs should still be necessary even under conditions of uncertainty. It is the duty of every serious philosopher to learn to &lt;em&gt;feel gradations of necessity&lt;/em&gt; and to intuit how necessary their beliefs are. What degrees of freedom remain in their ideas, what hypotheses are still left to be considered, exactly how much weight does it make sense to put on a given hypothesis given the available evidence? There are exact, precise answers to these questions even if they are outside of your current awareness.&lt;/p&gt;
&lt;p&gt;Failing to accept the world as it is, failing to take ideas seriously, makes us a danger to ourselves and others. In this the current pandemic gives us a rather fantastic (albeit horrifying) window into the limits of the dream worlds that most people inhabit. College students &lt;a href="https://www.washingtonpost.com/nation/2020/03/19/coronavirus-spring-break-party/"&gt;openly defy public health experts because they’re entitled to spring break&lt;/a&gt;. The health minister of Iran &lt;a href="https://www.france24.com/en/20200225-iran-iraj-harirchi-coronavirus-deputy-health-minister"&gt;gets the virus and still insists that quarantine is an outdated method of controlling an epidemic&lt;/a&gt;. President Trump &lt;a href="https://www.vox.com/policy-and-politics/2020/3/13/21176535/trumps-worst-statements-coronavirus"&gt;tells the public that the disease is comparable to the flu until it’s too late for us to contain it&lt;/a&gt;. If this were a movie it’d be panned by critics as unrealistic b-film trash.&lt;/p&gt;
&lt;p&gt;&lt;img src="theme/images/trump_statements_vs_covid_19_cases.png" alt="Trump statements plotted against COVID-19 case counts" width="75%" height="75%"&gt;&lt;/p&gt;
&lt;p&gt;It’s quite impressive how far people will go to protect their worldview at the cost of their wellbeing, but even this has its limits. Eventually too much predictive error will build up and the whole edifice will come crashing down. What will it take to make you look? How much harm do you have to come to? How many &lt;a href="http://yudkowsky.net/other/yehuda/"&gt;people close to you have to die&lt;/a&gt; before you’ll actually look at the world as it is? Over the coming weeks, &lt;a href="https://medium.com/@tomaspueyo/coronavirus-act-today-or-people-will-die-f4d3d9cd99ca"&gt;we can expect to see&lt;/a&gt; a lot of deeply held worldviews fracture as the illusion of safety is rudely torn away. The safety blanket of childhood won’t protect you from bullets or viruses, &lt;a href="https://www.youtube.com/watch?v=xXXF2C-vrQE"&gt;only true knowledge of the universe has any hope of doing that&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You can get a lot of mileage out of willful ignorance, but eventually your fake beliefs will come back to bite you. For example, in the Iranian city of Qom, a number of religious shrines remained open and busy even as the coronavirus tore through the city, because religious leaders believed the shrines had magical healing properties. They don’t. Iran is now digging mass graves. When magical beliefs come up against the cold face of unflinching reality, reality wins. Thus, in order to protect these magical beliefs they have to be socially insulated from reality; challenging them has to be verboten. However, when this happens, from the outside it looks rather obvious that the deck is being stacked against truth, and it can’t hold up forever. However uncomfortable the truth may be, as a certain mad titan says, you can dread it, run from it, but destiny arrives all the same. &lt;/p&gt;
&lt;p&gt;Most people are familiar with &lt;a href="https://en.wikipedia.org/wiki/Galileo_affair"&gt;the incident where Catholicism lost credibility&lt;/a&gt; by insisting that the sun revolved around the earth when it did not. I suspect that part of why we single out this episode as a decisive triumph of science over religion is that it represents more than just the loss of Catholicism’s control of cosmology. Rather, it is a prelude to the more personal and uncomfortable revelation that humanity is not the center of the universe. We are a marginal force in nature which exists on a ‘pale blue dot’, and the rest of creation stretches out for an unfathomable distance around us. It is when we fully internalize this, along with Darwin’s revelation that humanity is a product of nature and arose from adaption to the natural world (including other humans, who are also part of the natural world) that we understand &lt;a href="https://www.thelastrationalist.com/memento-mori-said-the-confessor.html"&gt;the absurdity of denying death&lt;/a&gt;. &lt;/p&gt;
&lt;blockquote&gt;
In the what-if world where every step follows only from the cellular automaton rules, the equivalent of Genghis Khan can murder a million people, and laugh, and be rich, and never be punished, and live his life much happier than the average.  Who prevents it?
&lt;/blockquote&gt;

&lt;p&gt;Were it “within the stars” so to speak, nature would discard us like you discard so many used tissues. Life is not sacred to the universe, let alone human life. If sleeping &lt;a href="http://www.existentialcomics.com/comic/1"&gt;really did end your thread of experience&lt;/a&gt; nature would have no problem letting that happen. It would allow you to die thousands of deaths over the course of your life so long as it made no difference to reproduction. Observing this vast cosmos and the amoral gears of creation, it becomes abundantly obvious that there is no afterlife. Nature, which seems to care about nothing else and has seen fit to save nothing else, has almost certainly not set aside a special preserve for the sake of your experiences and feelings. You are not special in the eyes of creation, you are a blob of animate matter that will one day become a blob of inanimate matter and that is that. In the second law of thermodynamics, the house always wins; at best you can hope for some unforeseen development in physics which allows us to defeat entropy. In the meantime, there is no life after this one. The expectation that you will see lost loved ones in the hereafter, that you will have eternal life through Jesus Christ, that when you die you will wake again from your lifelong dream is unreasonable.&lt;/p&gt;
&lt;p&gt;Your expectation of eternal life has always been unreasonable; nothing else lasts forever: why would you?&lt;/p&gt;</content></entry><entry><title>"Memento Mori", Said The Confessor</title><link href="https://www.thelastrationalist.com/memento-mori-said-the-confessor.html" rel="alternate"></link><published>2020-02-01T00:00:00+01:00</published><updated>2020-02-01T00:00:00+01:00</updated><author><name>namespace</name></author><id>tag:www.thelastrationalist.com,2020-02-01:/memento-mori-said-the-confessor.html</id><summary type="html">&lt;p&gt;Ten years ago Eliezer Yudkowsky &lt;a href="https://www.greaterwrong.com/posts/R3ATEWWmBhMhbY2AL/that-magical-click"&gt;wrote a post about cryonics&lt;/a&gt;
where he was baffled by the fact that most young cryonicists heard about the concept
and then decided to sign up. There was no extended questioning, no sales pitch,
young (future) cryonics patients were simply &lt;em&gt;exposed to the concept&lt;/em&gt; of …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Ten years ago Eliezer Yudkowsky &lt;a href="https://www.greaterwrong.com/posts/R3ATEWWmBhMhbY2AL/that-magical-click"&gt;wrote a post about cryonics&lt;/a&gt;
where he was baffled by the fact that most young cryonicists heard about the concept
and then decided to sign up. There was no extended questioning, no sales pitch,
young (future) cryonics patients were simply &lt;em&gt;exposed to the concept&lt;/em&gt; of cryonics
by seeing it mentioned on television or the radio. He referred to this simple
absorption of the idea as 'clicking', and desperately wanted to know what went into
the seemingly magical click. EY hypothesized that rather than having an extra
sanity gear in their heads, cryonicists more likely had some insanity gear(s)
missing.&lt;/p&gt;
&lt;p&gt;Eliezer, you of all people should know there is no such thing as magic. You
even say so in this exact post. User 'bshock' replies with his experience
working a job where he signed people up for cryonics, writing of the reasons
people reject it:&lt;/p&gt;
&lt;blockquote&gt;
&lt;b&gt;The first and largest by far tended to be religious, which is to say, afterlife mythology.&lt;/b&gt; If you thought you were going to Heaven, Kolob, another plane of existence, or another body, you wouldn’t bother investing the money or emotional effort in cryonics. &lt;/br&gt; &lt;/br&gt;

Only then came the intellectual barriers, but the boundary could be extremely vague. I think that the vast majority of people didn’t have any trouble grasping the basic scientific arguments for cryonics; the actual logic filter always seemed relatively thin to me. Instead, people used their intellect to rationalize against cryonics, either motivated by existing beliefs (from one end) or by resulting anxieties (from the other). &lt;/br&gt; &lt;/br&gt;

&lt;b&gt;Anxieties relating to cryonics tended to revolve around social situation and/or death.&lt;/b&gt; Some people identified so deeply with their current social situation, the idea of losing that situation (family, friends, standing, culture, etc.) was unthinkable. Others were afflicted by a sort of hypothetical survivor guilt; why did they deserve to live, when so many of their loved ones had died? &lt;b&gt; Perhaps the majority were simply repulsed by any thought of death itself&lt;/b&gt;; most of them spent their lives trying not to think about the fact that we would die, and found it extremely depressing or disorienting when forced to confront that fact.
&lt;/blockquote&gt;

&lt;p&gt;(Bolding mine)&lt;/p&gt;
&lt;p&gt;I find this answer astonishing in its clarity, and frustrating in its prescience.
Looking back on it after nearly two years of research, it's annoying to think that if I'd
been paying more attention I could have caught on to the importance of the fear of death
earlier. It's not that I didn't think the fear of death was important; the problem
is that I hadn't understood &lt;em&gt;how important&lt;/em&gt; it is to people's ability to implement
rationality. A great deal of what goes into the click is having a worldview that
can soberly consider mortality. I actually hadn't looked at that post again until
sitting down to write this one. It's encouraging to see that the most credible answer
points towards the thesis of this post: that the fear of death acts as a sort of master key for
introductory rationality concepts. Examining the fear of death ties all the
rationality basics together into a coherent framework, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Map/Territory Errors&lt;/li&gt;
&lt;li&gt;Something To Protect&lt;/li&gt;
&lt;li&gt;Keeping Your Identity Small&lt;/li&gt;
&lt;li&gt;Atheism&lt;/li&gt;
&lt;li&gt;X-Risk&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Why The Fear Of Death?&lt;/h2&gt;
&lt;p&gt;In 'soft' disciplines like psychology, it's easy to confuse ourselves with
compelling nonsense. Hypothesis space is vast, and we often pick from it by
exploring some territory, seizing on a plausible idea, and then using a mix
of confirmation bias and correlation to 'prove' our idea correct. Most of these
proofs are worthless; you could construct another with about the same justification
to support a completely separate or even contradictory idea. So when we consider
this topic it's not enough to craft a plausible narrative and give it a body of
connective conceptual flesh. We have to narrow the hypothesis space to give
ourselves a better chance of landing in the right territory. In the service
of that, let's consider some other narratives which I often see cited to explain
people bouncing off rationality, and see whether the fear of death stands out against
them:&lt;/p&gt;
&lt;h3&gt;The Utility Narrative&lt;/h3&gt;
&lt;p&gt;Scott writes in his classic &lt;a href="https://www.greaterwrong.com/posts/LgavAYtzFQZKg95WC/extreme-rationality-it-s-not-that-great"&gt;Extreme Rationality: It's Not That Great&lt;/a&gt;
that the basic reason why people aren't interested in rationality is that it's not
useful:&lt;/p&gt;
&lt;blockquote&gt;
Looking over history, I do not find any tendency for successful people to have made a formal study of x-rationality. This isn’t entirely fair, because the discipline has expanded vastly over the past fifty years, but the basics—syllogisms, fallacies, and the like—have been around much longer. The few groups who made a concerted effort to study x-rationality didn’t shoot off an unusual number of geniuses—the Korzybskians are a good example. In fact as far as I know the only follower of Korzybski to turn his ideas into a vast personal empire of fame and fortune was (ironically!) L. Ron Hubbard, who took the basic concept of techniques to purge confusions from the mind, replaced the substance with a bunch of attractive flim-flam, and founded Scientology. And like Hubbard’s superstar followers, many of this century’s most successful people have been notably irrational. &lt;/br&gt; &lt;/br&gt;

There seems to me to be approximately zero empirical evidence that x-rationality has a large effect on your practical success, and some anecdotal empirical evidence against it. The evidence in favor of the proposition right now seems to be its sheer obviousness. Rationality is the study of knowing the truth and making good decisions. How the heck could knowing more than everyone else and making better decisions than them not make you more successful?!?
&lt;/blockquote&gt;

&lt;p&gt;The shortform response to this is that the people who are successful at things by
being 'rationalist-y' about them usually don't call what they do rationality. Bruce
Lee &lt;a href="https://en.wikipedia.org/wiki/Jeet_Kune_Do#Lee's_philosophy"&gt;did not call his style 'rationality'&lt;/a&gt;,
but his description of it could be quoted in The Sequences:&lt;/p&gt;
&lt;blockquote&gt;
I have not invented a “new style,” composite, modified or otherwise that is set within distinct form as apart from “this” method or “that” method. On the contrary, I hope to free my followers from clinging to styles, patterns, or molds. Remember that Jeet Kune Do is merely a name used, a mirror in which to see “ourselves”. . . Jeet Kune Do is not an organized institution that one can be a member of. Either you understand or you don’t, and that is that. There is no mystery about my style. My movements are simple, direct and non-classical. The extraordinary part of it lies in its simplicity. Every movement in Jeet Kune Do is being so of itself. There is nothing artificial about it. I always believe that the easy way is the right way. Jeet Kune Do is simply the direct expression of one’s feelings with the minimum of movements and energy. The closer to the true way of Kung Fu, the less wastage of expression there is. Finally, a Jeet Kune Do man who says Jeet Kune Do is exclusively Jeet Kune Do is simply not with it. He is still hung up on his self-closing resistance, in this case anchored down to reactionary pattern, and naturally is still bound by another modified pattern and can move within its limits. He has not digested the simple fact that truth exists outside all molds; pattern and awareness is never exclusive. Again let me remind you Jeet Kune Do is just a name used, a boat to get one across, and once across it is to be discarded and not to be carried on one’s back.
&lt;/blockquote&gt;

&lt;p&gt;In other words, Bruce Lee follows the way of winning. Famous gamer David Sirlin
would not call what he does rationality, but his book &lt;em&gt;Playing To Win&lt;/em&gt; &lt;a href="http://www.sirlin.net/ptw-book/introducingthe-scrub"&gt;has a better
definition of rationality&lt;/a&gt;
than the &lt;a href="https://www.thelastrationalist.com/rationality-is-not-systematized-winning.html"&gt;'systematized winning'&lt;/a&gt;
found in The Sequences:&lt;/p&gt;
&lt;blockquote&gt;
You will not see a classic scrub throw his opponent five times in a row. But why not? What if doing so is strategically the sequence of moves that optimizes his chances of winning? Here we’ve encountered our first clash: the scrub is only willing to play to win within his own made-up mental set of rules. These rules can be staggeringly arbitrary. If you beat a scrub by throwing projectile attacks at him, keeping your distance and preventing him from getting near you—that’s cheap. If you throw him repeatedly, that’s cheap, too. We’ve covered that one. If you block for fifty seconds doing no moves, that’s cheap. Nearly anything you do that ends up making you win is a prime candidate for being called cheap. Street Fighter was just one example; I could have picked any competitive game at all. &lt;/br&gt; &lt;/br&gt;

Doing one move or sequence over and over and over is a tactic close to my heart that often elicits the call of the scrub. This goes right to the heart of the matter: why can the scrub not defeat something so obvious and telegraphed as a single move done over and over? Is he such a poor player that he can’t counter that move? And if the move is, for whatever reason, extremely difficult to counter, then wouldn’t I be a fool for not using that move? The first step in becoming a top player is the realization that playing to win means doing whatever most increases your chances of winning. That is true by definition of playing to win. The game knows no rules of “honor” or of “cheapness.” The game only knows winning and losing.
&lt;/blockquote&gt;

&lt;p&gt;"Rationality is when you stop living your life by fake rules" is a heuristic I
tell others often, it's beautiful in its succinctness and simple enough that
almost anyone can understand it.&lt;/p&gt;
&lt;p&gt;In &lt;em&gt;Moneyball&lt;/em&gt;, &lt;a href="https://en.wikipedia.org/wiki/Moneyball"&gt;Billy Beane and Bill James did not call what they do rationality&lt;/a&gt;,
yet Billy's picks were quite literally made by a stats whiz who studied
behavioral economics (read: formal rationality) in college, and Bill James's
'sabermetrics' ("the empirical analysis of baseball") community sounds almost like
LessWrong in its heyday, with the same pattern of a grumpy founder who quits after
everyone he shares his insight with proves to be totally inadequate:&lt;/p&gt;
&lt;blockquote&gt;

James's literary powers combined with his willingness to answer his mail to create a movement. Research scientists at big companies, university professors of physics and economics and life sciences, professional statisticians, Wall Street analysts, bored lawyers, math wizards unable to hold down regular jobs &amp;mdash; all these people were soon mailing James their ideas, criticisms, models, and questions. His readership must have been one of the strangest groups of people ever assembled under one idea. Before he found a publisher, James had four readers he considered "celebrities."
&lt;/br&gt; &lt;/br&gt;
. . .
&lt;/br&gt; &lt;/br&gt;
"I hate to say it and I hope you're not one of them," he wrote in his final, &lt;i&gt;1988 Baseball Abstract,&lt;/i&gt; "but I am encountering more and more of my own readers that I don't even like, nitwits who glom onto something superficial in the book and misunderstand its underlying message . . . Whereas I used to write one 'Dear Jackass' letter a year, I now write maybe thirty." The growing misunderstanding between himself and his readership was, he felt, not adding to the sum total of pleasure or interest in the universe. "I am no longer certain that the effects of my doing this kind of research are in the best interest of the average baseball fan," he explained. "I would like to pretend that the invasion of statistical gremlins crawling at random all over the telecast of damn near every baseball game is irrelevant to me, that I really had nothing to do with it . . . I know better. I didn't create this mess, but I helped."
&lt;/blockquote&gt;

&lt;p&gt;Rationality does in fact seem to work, &lt;a href="https://www.greaterwrong.com/posts/gBewgmzcEiks2XdoQ/mandatory-secret-identities"&gt;but the people who actually use it do not
generally call themselves 'rationalists'&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;The Social Reality Narrative&lt;/h3&gt;
&lt;p&gt;The top comment on &lt;a href="https://www.greaterwrong.com/posts/R3ATEWWmBhMhbY2AL/that-magical-click"&gt;That Magical Click&lt;/a&gt; is by 'pjeby', who replies:&lt;/p&gt;
&lt;blockquote&gt;
One of the things that I’ve noticed about this is that most people do not expect to understand things. For most people, the universe is a mysterious place filled with random events beyond their ability to comprehend or control. Think “guessing the teacher’s password”, but not just in school or knowledge, but about everything. &lt;/br&gt; &lt;/br&gt;

Such people have no problem with the idea of magic, because everything is magic to them, even science. &lt;/br&gt; &lt;/br&gt;

An anecdote: once, when I still worked as software developer/department manager in a corporation, my boss was congratulating me on a million dollar project (revenue, not cost) that my team had just turned in precisely on time with no crises. &lt;/br&gt; &lt;/br&gt;

Well, not congratulating me, exactly. He was saying, “wow, that turned out really well”, and I felt oddly uncomfortable. After getting off the phone, I realized a day or so later that he was talking about it like it was luck, like, “wow, what nice weather we had.” &lt;/br&gt; &lt;/br&gt;

So I called him back and had a little chat about it. The idea that the project had succeeded because I designed it that way had not occurred to him, and the idea that I had done it by the way I negotiated the requirements in the first place—as opposed to heroic efforts during the project—was quite an eye opener for him. &lt;/br&gt; &lt;/br&gt;

Fortunately, he (and his boss) were “clicky” enough in other areas (i.e., they didn’t believe computers were magic, for example) that I was able to make the math of what I was doing click for them at that “teachable moment”. &lt;/br&gt; &lt;/br&gt;

Unfortunately, most people, in most areas of their lives treat everything as magic. They’re not used to being able to understand or control anything but the simplest of things, so it doesn’t occur to them to even try. Instead, they just go along with whatever everybody else is thinking or doing. &lt;/br&gt; &lt;/br&gt;

For such (most) people, reality is social, rather than something you understand/ control.

&lt;/blockquote&gt;

&lt;p&gt;Having experienced this when I was younger, I think this idea is broadly correct.
However, as I'll explain in the rest of this post, the fear of death and social
reality are not mutually exclusive barriers to rational thinking. In fact,
they reinforce and are deeply entangled with each other.&lt;/p&gt;
&lt;h3&gt;The Intelligence Narrative&lt;/h3&gt;
&lt;p&gt;Another common explanation for the small number of rationalists is that rationality
requires a level of intelligence that's very rare. Consider The Unz Review's &lt;a href="https://www.unz.com/jthompson/the-7-tribes-of-intellect/"&gt;The 7 tribes of intellect&lt;/a&gt;
which says of the top 5% of human intelligence:&lt;/p&gt;
&lt;blockquote&gt;
These are the top 5%. If you are fortunate enough to be in this category, the world is your oyster, unless you blow it by getting drunk, or by imagining that you are so bright that no further work is required, or you go off the rails into being some sort of clever fool, due to some personality difficulty. &lt;/br&gt; &lt;/br&gt;

. . . &lt;/br&gt; &lt;/br&gt;

They can deal with tasks which require the application of specialised background knowledge, dis-embedding the features of a problem from a text, and drawing high-level inferences from highly complex text with multiple distractors. They can almost certainly do the previous credit card comparison task; they can summarise from a given text two ways in which lawyers may challenge prospective jurors; and, using a calculator, determine the total cost of carpet to cover a room, given the dimensions of the room and the cost per square yard of carpeting. (There you are, at the apotheosis of intellect. You can challenge a juror and carpet a room). Their occupations will include the professions, the sciences and, with experience and application, the top posts in business and government. Entertainments will include most artistic and literary endeavours, and theories will be seen as interesting in themselves. Vocabularies are in the 30,000 to 42,000 range, which is probably as high as you can go without using lots of technical terms. In modern welfare states they would be high net contributors, very probably supporting two or even three households in addition to their own, and have property and savings. In IQ terms they are 125 and above.
&lt;/blockquote&gt;

&lt;p&gt;I've sometimes told people that if they don't know how to use a spreadsheet,
ipython, or another tool for quantitative thinking, they're not a rationalist.
It naturally follows that intelligence is a bottleneck. The sort of thinking you have to do
if you want to be reliably correct about things is &lt;em&gt;hard&lt;/em&gt;, and even the brightest
people's capacity for it is limited. IQ points don't really grant magic abilities;
they grant modest abilities the average reader of this blog would be surprised
to learn most people don't have. &lt;a href="http://www.jdpressman.com/public/lwsurvey2016/analysis/general_report.html"&gt;On the 2016 LessWrong Survey&lt;/a&gt;
the median respondent claimed to have an IQ of 138. Obviously a survey is not
necessarily going to get us the most reliable data on this, but I can't say I
really think they're lying.&lt;/p&gt;
&lt;p&gt;At the same time, the intelligence barrier might not be as high as is commonly
assumed. In his &lt;em&gt;Superforecasting&lt;/em&gt;, Tetlock finds that the average IQ of the best
performers in his geopolitical forecasting tournament was at the 80th percentile, far below
the 1:1000 rarity implied by LessWrong and SlateStarCodex survey results. This
makes sense exactly because the skills provided by IQ points are so modest. Beyond
a certain point, if raw intelligence were the only way to get ahead, humanity
wouldn't get very far. Assuming it's possible to teach advanced epistemics to the
merely intelligent rather than exclusively the ultra-intelligent, this has serious
strategic implications for handling existential risk and other issues.&lt;/p&gt;
&lt;h3&gt;The Fear of Death Narrative&lt;/h3&gt;
&lt;p&gt;In &lt;em&gt;The Denial of Death&lt;/em&gt;, anthropologist Ernest Becker identifies the fear of
death as the unifying psychological struggle between man and the natural world.
Here he is describing the confabulation and resistance to thinking that
characterize most people:&lt;/p&gt;
&lt;blockquote&gt;
Now these euphemisms mean usually that he accepts to work on becoming the father of himself by abandoning his own project and by giving it over to The Fathers. The castration complex has done its work, and one submits to &lt;i&gt;social reality&lt;/i&gt;. He can now deflate his own desires and claims, and can play it safe in the world of the powerful elders. He can even give his body over to the tribe, the state, the embracing magical umbrella of the elders and their symbols, that way it will no longer be a dangerous negation for him. But there is no real difference between a childish impossibility and an adult one. The only thing that the person achieves is a practiced self deceit, what we call the Mature Character. &lt;/br&gt; &lt;/br&gt;

Take stock of those around you and you will hear them talk in precise terms about themselves and their surroundings. Which would seem to point to them having ideas on the matter. But start to analyze those ideas and you will find that they hardly reflect in any way the reality to which they appear to refer. And if you go deeper you will discover that there is not even an attempt to adjust the ideas to this reality. Quite the contrary, through these notions the individual is trying to cut off any personal vision of reality, of his own very life. For life is at the start a chaos in which one is lost, the individual suspects this but he is frightened at finding himself face to face with this terrible reality and tries to cover it over with a curtain of fantasy where everything is clear. It does not worry him that his ideas are not true. He uses them as trenches for the defenses of his existence, as scarecrows to frighten away reality.
&lt;/blockquote&gt;

&lt;p&gt;This is not necessarily an intuitive notion. I had previously favored the utility,
social reality, and intelligence narratives as the basic explanation for why there
were so few rationalists. The fear of death was an important factor, but one that
played second fiddle to these more important bottlenecks. Over time, though, I've
updated towards a more primal explanation: basic primordial fears in the human animal
which encourage poor thinking. Beyond its above-average explanatory power, I provide
four basic arguments for why we should &lt;em&gt;expect&lt;/em&gt; the fear of death to be special
even before we dig into any detailed analysis:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The Argument From Empiricism:&lt;/strong&gt; Death is an irrationality hotspot, and the
denial of death represents perhaps the most brazen example of unsanity which is
still tolerated in the modern world. People say in all seriousness that after
their body is destroyed, they will be resurrected into another plane of existence
or reincarnated on earth. They confabulate the most bizarre metaphysics to support
these claims; it's probably not a coincidence that &lt;a href="https://www.greaterwrong.com/posts/hiDkhLyN5S2MEjrSE/normal-cryonics"&gt;people's brains shut off
when it comes to cryonics&lt;/a&gt;.
People do not say "I may not have mated, but after I die my essence will spread
out among the living and my bloodline will continue in the tribe"; they're very
sober about the consequences of not reproducing.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The Argument From Culture:&lt;/strong&gt; As Becker points out in his book, and as you'd
learn from reading any number of anthropology papers, some form of the denial of
death is a cultural universal. Becker is particularly insightful with his notion
of an &lt;em&gt;immortality project&lt;/em&gt;, by which people use magical thinking to defeat death
even in ostensibly secular guises. While we're all familiar with the ordinary
religious forms of death-denial, even supposedly 'secular' nations such as the
Soviet Union made up for their state-sponsored atheism by emphasizing the immortality
of living on through one's industrial or scientific accomplishments. It is notable
that by controlling a group immortality project, tribes and societies gain a symbolic
control over life and death with which they can magically kill defectors and deviants.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The Argument From Biography:&lt;/strong&gt; When telling their life stories, rationalists
tend to cite facing the reality of death as a key moment of development. Alfred
Korzybski, for example, wrote his &lt;a href="https://www.gutenberg.org/files/25457/25457-pdf.pdf"&gt;&lt;em&gt;Manhood of Humanity&lt;/em&gt;&lt;/a&gt;, a book whose
thesis was that the mismatch between the growth curve of technology and the growth
curve of civilizing capability would inevitably lead to X-Risk, before he wrote his
famous &lt;em&gt;Science and Sanity&lt;/em&gt;, which founded General Semantics. The impetus for that
was his participation in WWI, which forced him to seriously consider the question of
how to prevent such horrible wars in the future. Eliezer Yudkowsky has his
&lt;a href="https://www.readthesequences.com/Yudkowskys-Coming-Of-Age-Sequence"&gt;Coming Of Age&lt;/a&gt;
sequence where he discusses the realization that there is no magic that stops
&lt;em&gt;really bad&lt;/em&gt; things from happening. My own journey started with the childhood
realization that there is nothing to stop my world from being ended by a nuclear
war. It was my experience trying to explain my terror to family, friends, and
available adults that led me to &lt;a href="http://www.hpmor.com/chapter/6"&gt;instantly sympathize with HJPEV&lt;/a&gt;
when he explains his own battle against ordinary unsanity about risk.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The Argument From Reflection:&lt;/strong&gt; If you get into Buddhist meditation or acid, you'll
eventually find that the layers of your identity begin to peel away. At the bottom
of your motivation stack &lt;a href="https://knowingless.com/2019/08/17/you-will-forget/"&gt;you find the fear of death&lt;/a&gt;,
which, to continue living, you must leave alone, reattaching yourself to the material world.
&lt;a href="https://hivewired.wordpress.com/2019/12/23/vaporize/"&gt;Someone else following the same procedure&lt;/a&gt;
of "take acid once a week and see what happens" reports a similar melting away of
inhibitions surrounding the fear of death. For many readers this will be a "pfft, 
so what" sort of deal, but if people hack their brains to become uninhibited agents
and then report the change as having faced the reality of death I think that's notable.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Map-Territory Errors&lt;/h2&gt;
&lt;p&gt;In his &lt;em&gt;Denial of Death&lt;/em&gt;, Becker says that the best interpretation of the observations
made by Freud and other early psychoanalysts is that they point to a deep human trauma about
the fear of death. He posits that people create an identity and buy into a social symbol
system in large part to ward off the fear of death. This is because humans are, as
Alfred Korzybski identified them, symbolic creatures. We are separated from the rest
of earthly existence by &lt;a href="http://www.xenodochy.org/gs/timebind.html"&gt;the ability to bind time&lt;/a&gt;,
&lt;a href="https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/"&gt;and transmit observations as little pieces of culture&lt;/a&gt;.
The problem is that the symbolic is the realm of the gods, but people are still mortal and creaturely.
As the famous Jewish creation myth goes, man has the Knowledge of higher worlds, and his punishment is to die knowing.
The only way to continue existing normally with this massive mortal horror hovering over us all the time is to selectively deny reality.&lt;/p&gt;
&lt;p&gt;Why is our selective denial of the reality of death so damaging to our ability to think?
The basic reason is that it involves what I call the &lt;em&gt;refusal of phenomenological necessity&lt;/em&gt;.
Necessity can be stated simply: 2 and 2 equals 4; conclusions follow from their premises.
In order to deny the reality of death, we must also deny the reality of anything
which might be able to show us our mortality. This includes, of course, basically
any system of ordered thinking. Or at least, any system of ordered thinking based
on perception of the world around us. Consider the classic syllogism:
All men are mortal. Socrates is a man. Therefore, Socrates is mortal. Deep
damage has to be done to your thinking to avoid absorbing the importance of logic
that simple. Naturally, then, we should expect that most people are failing to absorb the
simple logic of many things.&lt;/p&gt;
&lt;p&gt;So it's not surprising that the most consequential map territory errors
are motivated by fear. As children we believed there were monsters under our bed,
but more extraordinary was the belief we could banish them by hiding under a blanket.
This is the fundamental essence of a map-territory error. &lt;a href="https://en.wikipedia.org/wiki/Perceptual_control_theory"&gt;We control our perceptions,
not the variables those perceptions are nominally meant to track&lt;/a&gt;.
This creates &lt;em&gt;magical beliefs&lt;/em&gt;, where we think that controlling our perceptions (the map) alters the territory.&lt;/p&gt;
&lt;p&gt;Becker notes that during childhood development, there is usually a stage where
the child believes themselves to be omnipotent. When a small child wants something,
they signal their desire and the desire is fulfilled. In fact, it is entirely
sensible for them to conclude they are omnipotent. Eventually, the child has desires
which outstrip what is possible for their caregivers to provide, and the child
often wishes for their caretaker's death. Because the child believes they're omnipotent,
this juvenile death fantasy is taken as a serious threat in their own mind, and
they feel extreme guilt about it.&lt;/p&gt;
&lt;p&gt;This speaks to the fact that the default is to have deeply magical beliefs about
reality. This impulse is so strong that even after we have (in the abstract) banished
any possibility of psi, ghosts, spirits, etc., we still have people insisting that
there must be something more, some way in which the map controls the territory,
because it seems so intuitive and compelling to us that it does.&lt;/p&gt;
&lt;h3&gt;Death and The Roots of Magick&lt;/h3&gt;
&lt;p&gt;In shamanic practice, the shaman is typically associated with death and the dead.
The shaman is a necromancer, a spirit-channeler, a traveler between the world of
the living and the world of the infinite cosmology of dead things. Shamans are
supposed to gain their powers through a near death experience, or some proxy thereof.
Modern day notions of magic are the descendants of these ancient astral spirit guides.
Thus the roots of magick and sorcery are in death, and management of the fear of death.
&lt;a href="https://traditionsofconflict.com/blog/2018/7/6/the-nature-of-sorcery"&gt;We see this in anthropological accounts of sorcery&lt;/a&gt;
among more primitive societies; take, for example, this account of a deadly magician:&lt;/p&gt;
&lt;blockquote&gt;
During my first fieldwork, Asao was the scariest man in the village – a sagguma, and proud of it. People would have openly despised him, only it was too dangerous to do so. It was safer to fear him, and that they certainly did…Sangguma [sorcerers] are said to acquire ghostly powers by mastering magical skills, submitting to harsh bodily disciplines, and drinking the fluids of a rotting corpse. Asao did not simply admit to all of this, he boasted of it. Animal familiars (mostly night birds) spied for him and brought him news of distant places. Asao claimed the ability to fly and to make himself invisible. With ostentatious glee, he told of participating in attacks (sangguma usually work in teams of two or three) on selected victims…Occasionally, he would be mysteriously absent for days or weeks at a time, presumably in retreat to purify his magical powers or on commission to stalk and attack someone in another, possibly distant, place (Tuzin, 57).
&lt;/blockquote&gt;

&lt;p&gt;What's striking to me about this is the similarity, many steps removed, to the
modern day "edgelord", whose social role is to be a troll, culture warrior, or
contrarian killer of comforting untruths. This is probably not a coincidence.
As Becker points out, it is important for people to control social norms and
expression and to enforce religious rules, because to lose control of these things
is to lose their sense of control over life and death. Tribal control of
immortality symbols is used as a psychological weapon against would-be magicians;
to be a sorcerer, then, is to be someone who has stepped outside of social reality:&lt;/p&gt;
&lt;blockquote&gt;
So, what is a sorcerer? A sorcerer is a – real or perceived – violator of norms of conduct. Such atypical behaviors often entail great risk. One who transgresses taboos that are not particularly esteemed, or that indicate one’s impressive abilities, can gain greater status and prestige, while those who infringe on regulations widely considered legitimate earn the enmity of kith and kin. This is the paradox at the heart of sorcery – the sorcerer seizes power or inadvertently orchestrates his own demise, on occasion performing each concurrently.
&lt;/blockquote&gt;

&lt;h2&gt;Something To Protect&lt;/h2&gt;
&lt;p&gt;Which brings us to another key point of becoming a rationalist: Something To Protect.
In &lt;em&gt;Rationality: AI to Zombies&lt;/em&gt;, &lt;a href="https://www.readthesequences.com/Something-To-Protect"&gt;Eliezer Yudkowsky writes that the impulse to become a
rationalist&lt;/a&gt;
must come from protecting another being or entity; it can't come from
protecting yourself. I know this is false, because for me it did come from
protecting myself. But I think I know why he would believe this. If you are
a selfish creature, as people fundamentally are, and you think that your beliefs
control reality, to acknowledge the reality of death is to kill whatever you gaze
at with it. It's easier then for us to face the reality of death through a proxy
than to acknowledge the reality of &lt;em&gt;our own&lt;/em&gt; death. The exact mechanics of why it's
easier are tricky. One cynical possibility is that the proxy is sacrificial. To
acknowledge the reality of a family member dying or animals dying is to subject
them to death, which instantly creates emotional attachment and feelings of fear,
guilt, and responsibility. The proxy stands in for us, and lets us see the horror
of death without risking our own life being taken by magic.&lt;/p&gt;
&lt;p&gt;A more optimistic possibility is that Something To Protect allows you to elevate
a thing over your own survival. Having found something you care more about than
living, risking magical death is not quite so horrifying in comparison.&lt;/p&gt;
&lt;h2&gt;Keeping Your Identity Small&lt;/h2&gt;
&lt;p&gt;This brings us to identity.
In Becker's view, identity is about putting up a wall between yourself and reality.&lt;/p&gt;
&lt;p&gt;It's notable that Korzybski called confusing the layers of abstraction &lt;em&gt;identification&lt;/em&gt;.
Korzybski saw identity in the sense of Aristotle's "A is A" as the core obstacle to
rationality. He advocated the removal of words such as "I" and "is" from everyday speech.
He also felt that the &lt;a href="https://plato.stanford.edu/entries/dualism/"&gt;splitting of man into a "spirit" being separate from an
animal being&lt;/a&gt; was responsible for much
philosophical woe. It is only by accepting our embodied, creaturely nature that we
can take full advantage of our ability to think according to General Semantics.
Accepting our embodied creaturely nature is of course also to accept our mortality,
as it is the creaturely aspects of man which make him decay and die.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://paulgraham.com/identity.html"&gt;As Paul Graham notes in his essays&lt;/a&gt;,
things which are part of our identity are
things which we parse as being direct attacks on us when they're criticized.
Conveniently, anything that's part of your identity you can't think clearly about
(because to think clearly about it would be to be 'killed' by magic). If as Becker says
our identities exist to protect us from the reality of death (and they at least
in part do), then it stands to reason that one of the most powerful interventions
to become more rational is to tear down the damn wall.&lt;/p&gt;
&lt;p&gt;This is, as far as I can tell, absolutely the case. The most powerful rationality
intervention, bar none, is Paul Graham's simple notion of keeping your identity small
(though he is light on the details of how to actually do it, the goal is sound).
Really, the average person should just grind flaying off useless or maladaptive
identity aspects and then come back and try the other epistemic enhancing techniques.
In the same way that, say, you might grind Buddhist meditation before going after
stoic control of your emotions.&lt;/p&gt;
&lt;p&gt;Even if you're not sold on the fear of death thesis, identity is still probably
the place to start for most people. When we let ideas become part of our identity,
&lt;a href="https://www.scientificamerican.com/article/psychology-of-taboo-tradeoff/"&gt;they become sacred&lt;/a&gt;
and it's not possible to update on them even if they're wrong. This creates natural
bottlenecks on the road to developing uncommon sense, which have to be overcome by
shedding ad-hoc identity constructs.&lt;/p&gt;
&lt;h2&gt;Atheism&lt;/h2&gt;
&lt;p&gt;This also explains another 'mysterious' feature of rationality: Why the association with New Atheism?
Can't you be Catholic and be a rationalist, as many practitioners of General Semantics were?&lt;/p&gt;
&lt;p&gt;For example Samuel Bois, who wrote the 1966 General Semantics classic &lt;em&gt;The Art of
Awareness: A Handbook on Epistemics and General Semantics&lt;/em&gt; was Catholic.
It's tempting then to think that we can retain our faith and be a rationalist, but
it's precisely because the denial of death is so damaging to our thinking that we can't.
Until we break down our fake immortality, it's not really possible to perceive the world clearly.&lt;/p&gt;
&lt;p&gt;This theory, that atheism is a fundamental point because it is key to facing the
reality of death, is, importantly, more predictive than the idea that atheism
is important because you need to reject the uniquely damaging influence of Christianity.
And it needs to be atheism; agnosticism won't do. &lt;a href="https://www.huffpost.com/entry/atheism-rise-religiosity-decline-in-america_n_1777031"&gt;Many people in the modern
era are agnostic&lt;/a&gt;;
they have a vague feel-good religious apathy which serves to
prevent them from having to think very hard about this subject.
Those people still act stupid in the way Christians act stupid, even though they've rejected Abrahamic faith.
Therefore we know that the important feature is facing the reality of death, not rejecting Christ, Yahweh or Allah.
In fact even your median atheist has probably still not quite processed the reality
of death; they're just more capable of doing so.&lt;/p&gt;
&lt;h2&gt;Existential Risk&lt;/h2&gt;
&lt;p&gt;It also explains the focus on &lt;a href="https://en.wikipedia.org/wiki/Global_catastrophic_risk"&gt;existential risks&lt;/a&gt;.
With nuclear arms we already have the tools to destroy ourselves; if nukes are
not enough, we are likely to get tools that are sometime during the 21st century.
Most people are fundamentally crippled in their ability to think about this; again,
to think about it would be to be killed by magic. Therefore we can infer on priors
that existential risks are almost certainly neglected relative to their
seriousness and severity. If your society has problems thinking about something, it's
a safe bet that issues involving that topic are not getting the attention or rigor
that they deserve.&lt;/p&gt;
&lt;p&gt;This is why Eliezer Yudkowsky wrote his Sequences in the first place. The problem
was not that people were failing to understand complex concepts, but rather that
they were failing to see the simple logic of scary ideas. Once you've faced the
reality of death and stopped living your life by fake rules, understood Darwin,
shed your fake immortality, rooted out animism from your intuitions, and learned
a bit about thinking clearly, concerns about existential risk become straightforward,
obvious ideas, not arcane cultish nuttery. &lt;/p&gt;
&lt;h2&gt;Rethinking Rationality Training&lt;/h2&gt;
&lt;p&gt;If all this is true we should take a hard look at how we've been trying to "train"
rationality up to this point. More classes on critical thinking won't help if the
barriers are emotion rather than skill based. Below are some ideas to consider.&lt;/p&gt;
&lt;h3&gt;Keeping Your Identity Small&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://www.greaterwrong.com/posts/BXQsZmubkovJ76Ldo/the-actionable-version-of-keep-your-identity-small"&gt;I recently read a post&lt;/a&gt;
where the author said they'd had trouble implementing Paul Graham's
&lt;em&gt;Keep Your Identity Small&lt;/em&gt; in a useful way. It was only after becoming more confident
in their ability to get by without any particular identity feature that they could
stop identifying as this or that. I'm honestly a little skeptical, and suspect
the author misunderstood what Graham was trying to get across. So let's take their
example: If you're someone whose only way to connect with other people is jokes,
you might think something like:&lt;/p&gt;
&lt;p&gt;"I'm a funny person."&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Are&lt;/em&gt; you a funny person? Maybe you are, but are you more than that? Are you always
a funny person? Using phrases like "I am" or "is" or "be" in the wrong ways can
reinforce static self-concepts[0]. To change and then stop identifying is to miss
the entire point: you were supposed to notice your limitations by not identifying,
and then change. The author's insistence that keeping your identity small is not
actionable advice frustrates me a little because I know it is.&lt;/p&gt;
&lt;p&gt;How I learned to do it was:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="http://paulgraham.com/identity.html"&gt;Read Paul Graham's essay&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Start noticing identity-driven defensiveness as a salient feature of other people's behavior.&lt;/li&gt;
&lt;li&gt;Eventually notice when I have this feature in my behavior.&lt;/li&gt;
&lt;li&gt;When I notice, ask "Do I endorse this part of my identity? Is this identity feature maladaptive?"&lt;/li&gt;
&lt;li&gt;If I endorse it, reinforce/lean in. If I don't, step back/unfuse.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Map/Territory&lt;/h3&gt;
&lt;p&gt;In his research Korzybski found that map/territory errors were best trained out of
people by having them spend time &lt;a href="https://en.wikipedia.org/wiki/Structural_differential"&gt;with a model of the ladder of abstraction&lt;/a&gt;[1].
Bruce Kodish describes this in &lt;em&gt;Korzybski: A Biography&lt;/em&gt;:&lt;/p&gt;
&lt;blockquote&gt;
The diagram could be used as a tool to help bring human thinking to the human level. A person could keep it in front of himself as a reference to help distinguish the levels when dealing with any problem (A statement about a descriptive statement - an inference - is not the descriptive statement; a label or description of an object is not the object; the object is not the invisible, inferred process; etc.) In order to time-bind most effectively, a person had to understand and use the mechanism correctly by recognizing and distinguishing the levels or orders in any situation. This was not a statistical approach to a science of man but one based on human potentiality.
&lt;/blockquote&gt;

&lt;p&gt;I suspect that something like this would help a lot more than any workshop. People
need simple things they can practice on their own to get past identity and ontological
confusion in their thinking.&lt;/p&gt;
&lt;h3&gt;Meditation&lt;/h3&gt;
&lt;p&gt;Many CFAR-ish people &lt;a href="https://www.greaterwrong.com/search?q=meditation"&gt;claim that meditation practice was essential&lt;/a&gt; to them becoming
more rationalist, but can't explain why. This explains why: Meditation practices
help you face the reality of death and process it. It stops being a thing that
"works but we don't know why, just trust us" and gives us a predictive model of why
we should expect it to work, and what things will contribute to it working or not.&lt;/p&gt;
&lt;h3&gt;Practicing Dying&lt;/h3&gt;
&lt;p&gt;Supposedly Becker often told people that they should practice dying. I'm not sure
exactly what this entailed, but I do a similar thing to help me think about X-Risk.
I'll lie down in bed and imagine that I'm about to die in the next 5-15 minutes.
Die of what, you ask? Oh, any number of things. The most common is a nuclear war,
but it's often being turned into grey goo by a rampant superintelligence, or more
mundane causes of death like cancer or a virus. When I first started doing this I
found it very distressing, but over time I've become a lot more capable of soberly
considering the end of my existence.&lt;/p&gt;
&lt;p&gt;It's notable that the rhetoric used to talk about defeating death might actually be
having the opposite of its intended effect on many people. Emphasizing the horror
and tragedy of death is useful if you've already accepted its reality and need
social permission to say the obvious. If you're crippled in your ability to think
about mortality however, this rhetoric probably reinforces the flinch reaction
people have and takes them farther away from reality.&lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;(0): &lt;em&gt;Drive Yourself Sane: Using The Uncommon Sense Of General Semantics&lt;/em&gt;, Third Edition (2011) by Susan and Bruce Kodish. &lt;/p&gt;
&lt;p&gt;(1): &lt;em&gt;Korzybski: A Biography&lt;/em&gt; (2011) by Bruce Kodish&lt;/p&gt;</content></entry><entry><title>Slack Club</title><link href="https://www.thelastrationalist.com/slack-club.html" rel="alternate"></link><published>2019-04-15T00:00:00+02:00</published><updated>2019-04-15T00:00:00+02:00</updated><author><name>The Last Rationalist</name></author><id>tag:www.thelastrationalist.com,2019-04-15:/slack-club.html</id><summary type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;This cynical advice on military leadership dates back to the world wars:&lt;/p&gt;
&lt;blockquote&gt;Those who are clever and industrious I appoint to the General Staff. Use can under certain circumstances be made of those who are stupid and lazy. &lt;b&gt;The man who is clever and lazy qualifies for the highest …&lt;/b&gt;&lt;/blockquote&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;This cynical advice on military leadership dates back to the world wars:&lt;/p&gt;
&lt;blockquote&gt;Those who are clever and industrious I appoint to the General Staff. Use can under certain circumstances be made of those who are stupid and lazy. &lt;b&gt;The man who is clever and lazy qualifies for the highest leadership posts. He has the requisite nerves and the mental clarity for difficult decisions.&lt;/b&gt; But whoever is stupid and industrious must be got rid of, for he is too dangerous.&lt;/blockquote&gt;

&lt;p&gt;In recent years it has been attributed to Moltke the Elder, &lt;a href="https://quoteinvestigator.com/2014/02/28/clever-lazy/"&gt;even though he probably never said it&lt;/a&gt;. Perhaps most important to its enduring popularity is that it breaks down into a snazzy 2x2 matrix: &lt;/p&gt;
&lt;p&gt;&lt;img src="theme/images/moltke_officer_matrix.png" width=75% height=75%&gt;
&lt;small&gt;&lt;p xmlns:dct="http://purl.org/dc/terms/"&gt;
  &lt;a rel="license"
     href="http://creativecommons.org/publicdomain/zero/1.0/"&gt;
    &lt;img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" /&gt;
  &lt;/a&gt;
  &lt;br /&gt;
  To the extent possible under law,
  &lt;a rel="dct:publisher"
     href="http://www.thelastrationalist.com/"&gt;
    &lt;span property="dct:title"&gt;The Last Rationalist&lt;/span&gt;&lt;/a&gt;
  has waived all copyright and related or neighboring rights to
  &lt;span property="dct:title"&gt;Moltke Officer Attribute Matrix&lt;/span&gt;.
&lt;/p&gt;&lt;/small&gt;&lt;/p&gt;
&lt;p&gt;I don't know how well this model describes reality, but I do know that the worldview it implies is dominant among the LessWrong crowd. The community heavily selects for the commander quadrant and commander quadrant wannabes. That leads to community opinion which roughly reflects the matrix, taking an almost snobbish attitude towards people who have an execution focused mindset. &lt;/p&gt;
&lt;h2&gt;Smart and Lazy&lt;/h2&gt;
&lt;p&gt;Probably the best evidence for a smart/lazy composition is the Big 5 personality scores of LWers. Through the &lt;a href="https://www.yourmorals.org/index.php"&gt;YourMorals personality test&lt;/a&gt; we're fortunate enough to have scores for over 200 LWers from a psychometrically rigorous instrument:&lt;/p&gt;
&lt;p&gt;&lt;img src="theme/images/lesswrong_big_five.png"&gt;
&lt;img src="theme/images/lesswrong_moral_foundations.png"&gt;&lt;/p&gt;
&lt;p&gt;There is unfortunately significant selection bias associated with the normal results presented, because only high O (Openness) people will voluntarily take a personality test on the Internet. In spite of this, we can still tell that the average O among LWers is high, and the average C (conscientiousness) is low. E (extraversion) is also notably lower, which might mean more time spent "inside your head" &lt;a href="https://www.greaterwrong.com/posts/wmEcNP3KFEGPZaFJk/the-craft-and-the-community-a-post-mortem-and-resurrection#section-5"&gt;which leads to an anti-action bias&lt;/a&gt;. For elaboration on low C, we also have the staggering fact that being an LWer gives you a &lt;a href="https://www.greaterwrong.com/posts/Xi6syQenk24nQTzgz/2016-lesswrong-diaspora-survey-analysis-part-three-mental"&gt;2.7x relative risk for ADHD and related disorders&lt;/a&gt;. &lt;a href="https://www.youtube.com/watch?v=SCAGc-rkIfo"&gt;ADHD is a disorder of willpower, not one of attention&lt;/a&gt;. To the extent that ADHD is the pathological end of a spectrum rather than a totally different way of being human, it implies the C distribution for LWers is weighted heavily downward from the normal population. &lt;/p&gt;
&lt;p&gt;Of course if all that is true we should be able to observe it in behavior, not just as a theoretical expectation. The fact that Raemon &lt;a href="https://www.greaterwrong.com/posts/rhZQ7MQGuJM5osiDe/hufflepuff-leadership-and-fighting-entropy"&gt;feels he needs to write posts about nobody taking out the trash&lt;/a&gt; is one indicator. Another indicator is the sort of content one finds on LessWrong. &lt;a href="https://www.greaterwrong.com/posts/66BsjQtX7wAGhP4tB/too-smart-for-my-own-good"&gt;This post about being 'too smart for my own good'&lt;/a&gt; is an unusually honest example of the phenomenon. The author is anxious about the idea of wasting time learning something inefficiently, and would rather digress into a search for the One True Method than plough through it. &lt;a href="https://www.greaterwrong.com/posts/yLLkWMDbC9ZNKbjDG/slack"&gt;Zvi's series of posts on the concept of slack&lt;/a&gt; is illustrative. Slack is a nerd culture concept for people who subscribe to a particular attitude about things; it prioritizes clever laziness over straightforward exertion and optionality over firm commitment. I remember seeing a lot of links to this series when it debuted, and now &lt;em&gt;slack&lt;/em&gt; is &lt;a href="https://www.greaterwrong.com/posts/6JrrCK3WDYmQMkgdT/against-naming-things-and-so-on"&gt;another piece of LW jargon&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Before that there was the concept of Munchkinism, &lt;a href="https://www.greaterwrong.com/posts/jP583FwKepjiWbeoQ/epistle-to-the-new-york-less-wrongians"&gt;which appears directly in The Sequences&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;Or there’s Munchkinism, the quality that lets people try out lifehacks that sound a bit weird. A Munchkin is the sort of person who, faced with a role-playing game, reads through the rulebooks over and over until he finds a way to combine three innocuous-seeming magical items into a cycle of infinite wish spells. Or who, in real life, composes a surprisingly effective diet out of drinking a quarter-cup of extra-light olive oil at least one hour before and after tasting anything else. Or combines liquid nitrogen and antifreeze and life-insurance policies into a ridiculously cheap method of defeating the invincible specter of unavoidable Death. Or figures out how to build the real-life version of the cycle of infinite wish spells. Magic the Gathering is a Munchkin game, and MoR is a Munchkin story.&lt;/blockquote&gt;

&lt;p&gt;This is obviously considered a positive quality, and it &lt;em&gt;is&lt;/em&gt; a positive quality so long as it doesn't degrade performance on tasks which require grit. An addiction to easy solutions can become a crippling liability if your entire strategy is based around finding them where they don't exist. &lt;a href="https://www.greaterwrong.com/posts/geqg9mk73NQh6uieE/akrasia-and-shangri-la"&gt;The Sequences also brought us the word 'akrasia'&lt;/a&gt;. Akrasia ended up being LessWrong jargon for "procrastination" and spawned a &lt;a href="https://www.greaterwrong.com/search?q=akrasia"&gt;large number of self-help flavored posts&lt;/a&gt; about beating it. For some people &lt;a href="https://www.greaterwrong.com/posts/aYGEmJRX3AbjdDK4x/proposal-anti-akrasia-alliance#comment-urPXrbjBqCZ9qZo7b"&gt;it even became a minor identity label&lt;/a&gt;, probably inducing some psychosomatic issues around work in the process. &lt;/p&gt;
&lt;h2&gt;Impact On LessWrong&lt;/h2&gt;
&lt;p&gt;What to do about being held back by the fact and fiction of smart-laziness is a topic I do plan to cover eventually, but not in this post. This post is about its impact on the community. I tend to cite this stuff as a core issue with the rationalist community in private conversation, so it only makes sense to write about it. The most immediate impact is that not a lot seems to get done around here, &lt;a href="https://www.greaterwrong.com/posts/wmEcNP3KFEGPZaFJk/the-craft-and-the-community-a-post-mortem-and-resurrection#section-5"&gt;to quote Bendini&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;For various reasons, the Sequences disproportionately attracted the personality types who liked reading, hypothesising and debating. One of the defining characteristics of that personality type is a preference for extensive contemplation before action. Put enough of those people in the same place and social founder effects will exaggerate that to the point where action is rarely taken at all. From The War Of Art:&lt;br&gt;&lt;br&gt;

"Often couples or close friends, even entire families, will enter into tacit compacts whereby each individual pledges (unconsciously) to remain mired in the same slough in which she and all her cronies have become so comfortable. The highest treason a crab can commit is to make a leap for the rim of the bucket."&lt;br&gt;&lt;br&gt;

Disconcerting, if true.&lt;/blockquote&gt;

&lt;p&gt;Commander quadrant idealism goes a lot farther than just run of the mill laziness. It is at the root of many distinctive features typically associated with aspiring rationalists. One of these features I've already mentioned is the obsession with clever easy solutions. This tends to result in another typical trait, overabstraction. I recently heard someone tell a story about inviting 6 aspiring ratfolk to help them move. Everything went smoothly until they got to the living room couch. One of the movers began considering the &lt;em&gt;optimal&lt;/em&gt; way to move the couch, according to physics. Not to be outdone, the other movers soon got into a frenzied conversation debating different approaches. This argument supposedly wasted &lt;em&gt;over two hours&lt;/em&gt; of the poor author's time. They didn't care how the couch got moved; the entire point of bringing six people to help is so you don't have to think about the best way to move things. Just shut up and lift. &lt;/p&gt;
&lt;p&gt;Another typical feature of LWers is their stunning inability to work together. Of the people I know who buy into the whole 'save the world' thing, almost none of them seem to work together. They might have groups of less-shiny people surrounding them, but actual star-to-star collaboration seems rare. &lt;a href="https://www.readthesequences.com/Your-Price-For-Joining"&gt;Of course, Eliezer totally predicted this&lt;/a&gt;, which just makes it all the more disappointing that it's a trap his readers seem to have fallen into. I think a lot of the reason for that is commander quadrant personality selection. If everyone is holding out for the leadership positions, and the entire culture is implicitly built around finding Ender Wiggin and propelling him to stardom; nobody inside the system has an incentive to settle for anything less than a shot at being the messiah. &lt;/p&gt;
&lt;p&gt;I think this also explains the phenomenon I like to call silver bullet mindset. Silver bullet mindset is where you insist that your problems are special and can only be fixed with special solutions. This leads to a quest for the Silver Bullet, a singular solution that will 'fix the problem'. Never mind that most seekers probably have more than one problem, the Silver Bullet can heal all ills. &lt;a href="https://slatestarcodex.com/2017/09/18/book-review-mastering-the-core-teachings-of-the-buddha/"&gt;An excellent example of a silver bullet is meditation&lt;/a&gt;. I know lots of aspiring rationalists who have gone kooky in the head pursuing this enlightenment thing, &lt;a href="http://www.nonsymbolic.org/PNSE-Article.pdf"&gt;as it is known to reliably do&lt;/a&gt;. Most of these people don't need silver bullets, &lt;a href="https://a16z.com/2011/11/13/lead-bullets/"&gt;they need lead, and lots of it&lt;/a&gt;. Zvi writes in his essay &lt;a href="https://thezvi.wordpress.com/2017/12/02/more-dakka/"&gt;&lt;em&gt;More Dakka&lt;/em&gt;&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
In all four cases [not enough light to treat SAD, Bank of Japan not printing more money, patient not taking more of a drug showing positive results, gratitude journals], our civilization has (it seems) correctly found the solution. We’ve tested it. It works. The more you do, the better it works. There’s probably a level where side effects would happen, but there’s no sign of them yet.&lt;br&gt;&lt;br&gt;
We know the solution. Our bullets work. We just need more. We need More (and better) (metaphorical) Dakka – rather than firing the standard number of metaphorical bullets, we need to fire more, absurdly more, whatever it takes until the enemy keels over dead.&lt;br&gt;&lt;br&gt;
And then we decide we’re out of bullets. We stop.&lt;br&gt;&lt;br&gt;
If it helps but doesn’t solve your problem, &lt;i&gt;perhaps you’re not using enough.&lt;/i&gt;&lt;/blockquote&gt;

&lt;p&gt;This is not an insight that comes naturally to Commander quadrant people; they use aggressive search strategies that give up on things which don't immediately produce spectacular results. General Staff quadrant folks, by contrast, have a tendency to plow right through this sort of thing, fail to recognize their success as noteworthy, and not report back on it. &lt;/p&gt;
&lt;p&gt;&lt;img src="theme/images/depression_starter_kit.jpg" height=75% width=75%&gt; &lt;/p&gt;
&lt;p&gt;Beyond these depressing personal quirks, perhaps the biggest impact is not on any individual LWer, but on the rationality community as a whole. When the bulk of your movement is made up of these people, it drives execution-focused minds away. As it turns out, the General Staff quadrant knows how to get shit done. They put considerable effort into the doing, and having that effort totally unappreciated by dysfunctional manchildren isn't fun &lt;em&gt;or&lt;/em&gt; rewarding. Most never get the opportunity to be driven away however, as they were filtered out by the large time investment to read the sequences and the patience to put up with metaphors involving catgirls and paperclipping aliens. Even Effective Altruism is forced to &lt;a href="https://80000hours.org/podcast/episodes/tara-mac-aulay-operations-mindset/"&gt;essentially&lt;/a&gt; &lt;a href="https://80000hours.org/podcast/episodes/tanya-singh-operations-bottleneck/#transcript"&gt;beg&lt;/a&gt; their followers to perform 'operations' despite having many more execution-focused minds than the rationality community does. Bottom line: So long as all the appreciation and status &lt;a href="https://www.greaterwrong.com/posts/vHSrtmr3EBohcw6t8/norms-of-membership-for-voluntary-groups"&gt;go to insight porn&lt;/a&gt; instead of &lt;a href="https://www.greaterwrong.com/posts/66DXhQJyPEJNsXgfw/an-alternative-way-to-browse-lesswrong-2-0"&gt;actual work&lt;/a&gt;, nothing will ever happen in this town.&lt;/p&gt;
&lt;p&gt;And since that will never change, you'd probably be better off starting over from scratch.&lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;P.S. If you notice yourself in this description, I have good news for you. Because you've been starving yourself when it comes to effort, you probably have lots and lots of low-hanging fruit gated behind (what to you will seem like) extraordinary effort. Don't despair; you probably have more to gain by working harder than anyone else. But only if you get after it, of course.&lt;/p&gt;
establish that the things I'm talking about exist. It can be oddly difficult to
preempt a motte and bailey argument by providing proof that people believe the
things you say they …&lt;/p&gt;</summary><content type="html">&lt;p&gt;When I go to write a post for this blog I spend a lot of time doing research to
establish that the things I'm talking about exist. It can be oddly difficult to
preempt a motte and bailey argument by providing proof that people believe the
things you say they believe. In today's post, for example, I would like to address the
prevailing attitude that epistemic rationality is a sort of red-headed stepchild to
instrumental rationality. The problem is that while this is implied frequently, finding
people on short notice who come right out and say it isn't always easy. In spite
of this, a common sentiment goes that &lt;a href="https://www.greaterwrong.com/posts/aTsD8MmWXrSSezE54/why-rationality"&gt;"the inclination to choose epistemic 
rationality is evidence of being bad at life"&lt;/a&gt;.
&lt;a href="https://medium.com/incerto/how-to-be-rational-about-rationality-432e96dd4d1a"&gt;Nassim Taleb says there is no such thing as rational belief, only rational action&lt;/a&gt;.
&lt;a href="https://www.greaterwrong.com/posts/bMXurpN9qj8NWZKDR/why-cfar-s-mission"&gt;Anna Salamon wrote an entire post responding to the notion that epistemic
rationality is for undisciplined nerds&lt;/a&gt;.
That's three examples, and hopefully they're enough to establish that I'm responding
to a real thing when I say epistemic rationality is actually pretty important.&lt;/p&gt;
&lt;p&gt;But why? &lt;a href="https://www.greaterwrong.com/posts/LgavAYtzFQZKg95WC/extreme-rationality-it-s-not-that-great"&gt;As we all know, it's suspiciously challenging to show the benefits&lt;/a&gt;.
I've seen the basic answer given by Jordan Peterson during &lt;a href="https://www.youtube.com/watch?v=PE0u7-SX2hs"&gt;one of his lectures&lt;/a&gt;, regarding self-improvement:&lt;/p&gt;
&lt;blockquote&gt;
What could you do to improve yourself?&lt;br&gt;
Well, let's step one step backwards.&lt;br&gt;
The first question might be: Why should you even bother improving yourself?&lt;br&gt;
And I think the answer to that is something like: so you don't suffer any more stupidly than you have to.&lt;br&gt;
And maybe so others don't have to either. It's something like that.&lt;br&gt;
You know, like, there is a real injunction at the bottom of it.&lt;br&gt;
It's not some casual self-help doctrine, it's that if you don't organize yourself properly: you'll pay for it!&lt;br&gt;
And in a big way. And so will the people around you.
&lt;/blockquote&gt;

&lt;p&gt;People refuse to unconditionally accept the truth because 'accepting the truth'
is often pretty painful. &lt;a href="https://www.amazon.com/Elephant-Brain-Hidden-Motives-Everyday/dp/0190495995"&gt;Learning that people are less nice than you think and have lots
of unsavory motives for doing good things&lt;/a&gt; is uncomfortable.
&lt;a href="https://www.forbes.com/sites/alicegwalton/2011/10/24/steve-jobs-cancer-treatment-regrets/#35e56d57d2e9"&gt;Accepting that you can't cure your pancreatic cancer with acupuncture and you
have to let the doctors poke you with horrible instruments&lt;/a&gt; is
pretty damn uncomfortable. To the extent it seems improbable that god exists, that's
uncomfortable. The notion of no life after death is frightening. We tell ourselves lies to deceive others, but we also
tell ourselves lies to avoid pain. And the sad fact is that when you do that, you
trade pain now for pain later. Usually, a &lt;em&gt;lot&lt;/em&gt; more pain later. In the case of
Steve Jobs and his cancer, denial about his situation led to lifelong
regret and an early death. The first, basic reason to care about epistemic rationality
is because the lies you tell yourself are just like the lies you tell others: They
have a way of catching up with you. &lt;/p&gt;
&lt;blockquote&gt;

    What is true is already so.&lt;br&gt;
    Owning up to it doesn't make it worse.&lt;br&gt;
    Not being open about it doesn't make it go away.&lt;br&gt;
    And because it's true, it is what is there to be interacted with.&lt;br&gt;
    Anything untrue isn't there to be lived.&lt;br&gt;
    People can stand what is true,&lt;br&gt;
    for they are already enduring it.&lt;br&gt;
    &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;—Eugene Gendlin

&lt;/blockquote&gt;

&lt;p&gt;Before you get into anything esoteric like Bayesian priors, or forecasting
techniques, or &lt;a href="https://www.gwern.net/Mail-delivery"&gt;performing complicated statistics to figure out when you should check the mail&lt;/a&gt;,
it helps to start with the basics. The most valuable posts in a collection like
The Sequences aren't the ones &lt;a href="https://www.readthesequences.com/Neural-Categories"&gt;which dissect subtle errors in thinking&lt;/a&gt;,
they're the ones which &lt;a href="https://www.readthesequences.com/Making-Beliefs-Pay-Rent-In-Anticipated-Experiences"&gt;provide short memorable phrases to help you avoid doing
stupid things&lt;/a&gt;.
If I've gotten anything lasting out of my reading on this topic, it is probably
the wide bank of patterns I've learned to notice in myself and intervene on. By
seeing the conditions which lead to something dumb happening, I can stop and say
"okay but what if I &lt;em&gt;didn't&lt;/em&gt; do the stupid thing this time?". Part of why it's hard
to point to the benefits is that they're not exactly spectacular riches and glorious
accomplishment. They're subtle, they're utilitarian, they're things like "I didn't
waste years of my life on mediocrity because my standards were too low", hard
to prove and unimpressive. "I didn't walk into that giant spike pit, that
was pretty cool." isn't exactly inspiring stuff.&lt;/p&gt;
&lt;p&gt;In classical logic things are either true or false, the moon is made of cheese
or it isn't. But the lies we tell ourselves of greatest consequence usually
aren't cheese-moon lies, they're comfortable interpretations of uncomfortable
facts. Some common examples:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.deviantart.com/techgnotic/journal/The-Rise-Of-The-Artist-You-Are-The-Future-356840683"&gt;"Machines can never replace the human spirit, we'll always have unique talents"&lt;/a&gt; &lt;br&gt; (&lt;strong&gt;Truth&lt;/strong&gt;: have a look at some cool &lt;a href="https://imgur.com/a/zSzwjAY"&gt;neural net anime art&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;"Poor people are just lazy" &lt;br&gt; (&lt;strong&gt;Truth&lt;/strong&gt;: Poverty is a complex issue which laziness plays a limited role in, plenty of underclass folks work multiple jobs)&lt;/li&gt;
&lt;li&gt;"Love conquers all" &lt;br&gt; (&lt;strong&gt;Truth&lt;/strong&gt;: Nearly half of American marriages end in divorce, indiscriminate sex leads to venereal disease and other parasites such as lice, limerance isn't a good foundation for a long lasting relationship)&lt;/li&gt;
&lt;li&gt;"Sure I smoke, but my grandfather smoked his whole life and he didn't get cancer" &lt;br&gt; (&lt;strong&gt;Truth&lt;/strong&gt;: The outside view says you have a high chance of getting cancer, not to mention plenty of other severe health problems from smoking)&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;

&lt;blockquote&gt;

    &lt;p&gt;Then the Gods of the Market tumbled, and their smooth-tongued wizards withdrew&lt;br&gt;
    And the hearts of the meanest were humbled and began to believe it was true&lt;br&gt;
    That All is not Gold that Glitters, and Two and Two make Four&lt;br&gt;
    And the Gods of the Copybook Headings limped up to explain it once more.&lt;/p&gt;

    &lt;p&gt;As it will be in the future, it was at the birth of Man&lt;br&gt;
    There are only four things certain since Social Progress began.&lt;br&gt;
    That the Dog returns to his Vomit and the Sow returns to her Mire,&lt;br&gt;
    And the burnt Fool's bandaged finger goes wabbling back to the Fire;&lt;/p&gt;

    &lt;p&gt;And that after this is accomplished, and the brave new world begins&lt;br&gt;
    When all men are paid for existing and no man must pay for his sins,&lt;br&gt;
    As surely as Water will wet us, as surely as Fire will burn,&lt;br&gt;
    The Gods of the Copybook Headings with terror and slaughter return!&lt;/p&gt;&lt;/blockquote&gt;</content><category term="rationality"></category><category term="poetry"></category><category term="short"></category><category term="truth"></category></entry><entry><title>Rationality Is Not Systematized Winning</title><link href="https://www.thelastrationalist.com/rationality-is-not-systematized-winning.html" rel="alternate"></link><published>2018-11-10T00:00:00+01:00</published><updated>2018-11-10T00:00:00+01:00</updated><author><name>The Last Rationalist</name></author><id>tag:www.thelastrationalist.com,2018-11-10:/rationality-is-not-systematized-winning.html</id><summary type="html">&lt;p&gt;&lt;small&gt;&lt;b&gt;Authors Note&lt;/b&gt;: I said in my previous post that the next would be about systems to build common knowledge. That post is running very much behind schedule, so I'll be publishing others in the meantime.&lt;/small&gt;&lt;/p&gt;
&lt;p style="margin-bottom: 0px;"&gt;&amp;ldquo;Do not ask whether it is “the Way” to do this or that.  Ask whether …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;small&gt;&lt;b&gt;Authors Note&lt;/b&gt;: I said in my previous post that the next would be about systems to build common knowledge. That post is running very much behind schedule, so I'll be publishing others in the meantime.&lt;/small&gt;&lt;/p&gt;
&lt;p style="margin-bottom: 0px;"&gt;&amp;ldquo;Do not ask whether it is “the Way” to do this or that.  Ask whether the sky is blue or green.  If you speak overmuch of the Way you will not attain it.&amp;rdquo;&lt;/p&gt;

&lt;p style="text-indent: 2em; margin-top: 0.25em;"&gt;&amp;mdash; Eliezer Yudkowsky&lt;/p&gt;

&lt;p&gt;Rationality has been defined as: Using probability theory to pick out features held in common by all successful forms of inference &lt;a href="https://www.greaterwrong.com/posts/8qccXytpkEhEAkjjM/rationality-an-introduction"&gt;[Bensinger]&lt;/a&gt;; The ability to do well on hard decision problems &amp;amp; the art of how to systematically come to know what is true &lt;a href="https://www.greaterwrong.com/posts/2xkMt5XQqpG5fZjxb/what-is-rationality"&gt;[Roko]&lt;/a&gt;; Systematized winning &lt;a href="https://www.greaterwrong.com/posts/4ARtkT3EYox3THYjF/rationality-is-systematized-winning"&gt;[Yudkowsky]&lt;/a&gt;; [to] win when things are fair, or when things are unfair randomly over an extended period &lt;a href="https://www.greaterwrong.com/posts/4ARtkT3EYox3THYjF/rationality-is-systematized-winning#comment-RpxsALES4SdKYzsqx"&gt;[Alicorn]&lt;/a&gt;; Drawing correct inferences from limited, confusing, contradictory, or maliciously doctored facts &lt;a href="http://slatestarcodex.com/2014/11/27/why-i-am-not-rene-descartes/"&gt;[Alexander]&lt;/a&gt;; The Way of the agent smiling from on top of the giant heap of utility &lt;a href="https://www.lesswrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality"&gt;[Yudkowsky]&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I'm a little worried writing this post. Definitions are one of the more abused tools of thought. Originally meant to facilitate common understanding, they've become a refuge for pedants and smart alecks to pretend at insight. It seems possible that people will ignore an essay discussing definitions on the grounds that they are trivial things of trivial consequence. Perhaps so, but I think &lt;a href="https://www.greaterwrong.com/posts/4ARtkT3EYox3THYjF/rationality-is-systematized-winning#comment-h9f3EbjhSMcybfqfg"&gt;people ignore poetry at their own peril&lt;/a&gt;. The systematized winning definition of rationality fails to constrain expectations, and I worry it's one of the significant things holding LW-flavor rationality back.&lt;/p&gt;
&lt;h2&gt;Systematized Winning: An Intangible Un-definition&lt;/h2&gt;
&lt;h3&gt;The Bug&lt;/h3&gt;
&lt;p&gt;In his post &lt;a href="https://www.greaterwrong.com/posts/4ARtkT3EYox3THYjF/rationality-is-systematized-winning"&gt;Rationality Is Systematized Winning&lt;/a&gt; Eliezer writes:&lt;/p&gt;
&lt;blockquote&gt;There is a meme which says that a certain ritual of cognition is the paragon of reasonableness and so defines what the reasonable people do. But alas, the reasonable people often get their butts handed to them by the unreasonable ones, because the universe isn’t always reasonable. Reason is just a way of doing things, not necessarily the most formidable; it is how professors talk to each other in debate halls, which sometimes works, and sometimes doesn’t. If a hoard of barbarians attacks the debate hall, the truly prudent and flexible agent will abandon reasonableness.

No. If the “irrational” agent is outcompeting you on a systematic and predictable basis, then it is time to reconsider what you think is “rational”.&lt;/blockquote&gt;

&lt;p&gt;"Rationality is systematized winning" is a slogan that was adopted to patch a bug in human cognition. Namely our endless capacity to delude ourselves about how we did in an attempt to save face. The concept seems to have been absorbed, but I'm skeptical it's translated into more effective action. Certainly it produced &lt;a href="https://www.greaterwrong.com/posts/qTSRpyuuu6i9gGmWY/instrumental-rationality-is-a-chimera"&gt;many&lt;/a&gt; &lt;a href="https://www.greaterwrong.com/posts/LgavAYtzFQZKg95WC/extreme-rationality-it-s-not-that-great"&gt;essays&lt;/a&gt; &lt;a href="https://www.greaterwrong.com/posts/hgw3mYJnorskJG5RJ/why-don-t-rationalists-win"&gt;on&lt;/a&gt; &lt;a href="https://www.greaterwrong.com/posts/uFYQaGCRwt3wKtyZP/self-improvement-or-shiny-distraction-why-less-wrong-is-anti"&gt;why&lt;/a&gt; &lt;a href="https://www.greaterwrong.com/posts/wmEcNP3KFEGPZaFJk/the-craft-and-the-community-a-post-mortem-and-resurrection"&gt;winning&lt;/a&gt; &lt;a href="https://www.greaterwrong.com/posts/MajyZJrsf8fAywWgY/a-lesswrong-crypto-autopsy"&gt;isn't&lt;/a&gt; &lt;a href="https://www.greaterwrong.com/posts/bRGbdG58cJ8RGjS5G/no-really-why-aren-t-rationalists-winning"&gt;happening&lt;/a&gt;. But the fact that we've been publishing essentially the same essay for a decade now implies something fairly fundamental is wrong. This slogan was chosen because it patches the bug, but I fear at the cost of neutering our ability to focus.&lt;/p&gt;
&lt;h3&gt;The Bug, Disputed&lt;/h3&gt;
&lt;p&gt;Other, more rigorous ways of patching the bug were possible. Tim Tyler &lt;a href="https://www.greaterwrong.com/posts/4ARtkT3EYox3THYjF/rationality-is-systematized-winning#comment-WAFZ5Zorr4xg6gBoY"&gt;responds&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;Wikipedia has this right:

“a rational agent is specifically defined as an agent which always chooses the action which maximises its expected performance, given all of the knowledge it currently possesses.”

    http://​en.wikipedia.org/​wiki/​Rationality

Expected performance. Not actual performance. Whether its actual performance is good or not depends on other factors—such as how malicious the environment is, whether the agent’s priors are good—and so on.&lt;/blockquote&gt;

&lt;p&gt;And Eliezer replied:&lt;/p&gt;
&lt;blockquote&gt;Problem with that in human practice is that it leads to people defending their ruined plans, saying, “But my expected performance was great!” Vide the failed trading companies saying it wasn’t their fault, the market had just done something that it shouldn’t have done once in the lifetime of the universe. Achieving a win is much harder than achieving an expectation of winning (i.e. something that it seems you could defend as a good try).&lt;/blockquote&gt;

&lt;p&gt;This is not a philosophical objection, it's a social-emotional objection. Eliezer is saying here that regardless of the correctness of this answer, people can't be trusted with it. Reader, &lt;a href="http://yudkowsky.net/rational/virtues"&gt;the virtues&lt;/a&gt; are cruel. Often when we knowingly lapse in them we already know the direction from which danger will come, and we underestimate the magnitude. I don't want to exaggerate, but this moment of choosing convenience over rigor may have greatly sabotaged the rationalist project. It's a meme that takes beneficial, actionable knowledge and pushes it out in favor of meta hand-wringing.&lt;/p&gt;
&lt;h3&gt;I Notice You Are Confused: Try Frequentism&lt;/h3&gt;
&lt;p&gt;Later in the thread Eliezer observes:&lt;/p&gt;
&lt;blockquote&gt;

I guess when I look over the comments, the problem with the phraseology is that people seem to inevitably begin debating over whether rationalists win and asking how much they win—the properties of a fixed sort of creature, the “rationalist”—rather than saying, “What wins systematically? Let us define rationality accordingly.”

Not sure what sort of catchphrase would solve this.&lt;/blockquote&gt;

&lt;p&gt;There is no catchphrase that solves this, the problem as stated is intractable. You've probably heard of Nate Silver, right? &lt;a href="https://www.forbes.com/sites/quora/2012/11/07/how-accurate-were-nate-silvers-predictions-for-the-2012-presidential-election/"&gt;He did really well forecasting elections&lt;/a&gt;. This made him the standard wisdom for contrarian know-it-alls in 2016. The problem in 2016 was that &lt;a href="https://slate.com/news-and-politics/2016/01/nate-silver-said-donald-trump-had-no-shot-where-did-he-go-wrong.html"&gt;primary polls aren't particularly accurate&lt;/a&gt; and it's the primaries that were of most interest in 2016. So Nate kind of had a rough time with his primary predictions, and then he made a final analysis before the general: &lt;a href="https://projects.fivethirtyeight.com/2016-election-forecast/"&gt;Hillary Clinton favored to win 71-29&lt;/a&gt;. Of course, Trump won. This made a lot of people very angry on both sides. Hillary didn't win, so does that mean Silver is bogus? Well, it's hard to tell. Certainly from a pure probability perspective, things which only occur 30% of the time happen daily and it's not particularly notable. But was Silver right to &lt;em&gt;expect&lt;/em&gt; that Trump isn't favored to win? You can analyze his process and evaluate its quality &amp;amp; correctness, but based on outcomes alone this problem is intractable. One way to solve this &lt;a href="https://timvangelder.com/2015/05/18/brier-score-composition-a-mini-tutorial/"&gt;is to use a scoring rule like Brier's score&lt;/a&gt;.&lt;/p&gt;
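&lt;p&gt;To make the scoring-rule idea concrete, here is a minimal sketch of a Brier score in Python. The simulated events and their 30% base rate are illustrative assumptions, not data from any real forecasting tournament:&lt;/p&gt;

```python
import random

def brier_score(forecasts, outcomes):
    # Mean squared error between probability forecasts and 0/1 outcomes.
    # 0.0 is a perfect score; always guessing 50% earns 0.25; 1.0 is worst.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

random.seed(42)
# 1,000 simulated events, each of which actually occurs 30% of the time.
outcomes = [1 if random.random() < 0.3 else 0 for _ in range(1000)]

# A calibrated forecaster who honestly says 30% every time...
calibrated = brier_score([0.3] * 1000, outcomes)
# ...versus an overconfident one who insists the event can't happen.
overconfident = brier_score([0.0] * 1000, outcomes)
# Any single 30% event that occurs "looks" like a miss, but over many
# trials the calibrated forecaster reliably earns the lower (better) score.
```

&lt;p&gt;A single outcome, like one election, can't separate a good process from a lucky one; the score only becomes informative when averaged over many forecasts.&lt;/p&gt;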
&lt;p&gt;But that only treats the surface issue. Why does all this confusion and heat exist
in the first place? Naively we might expect it's because people don't understand
statistics, or that they're stupid. But I think it's probably more subtle than that.
As Eliezer points out, we insist we were right even when our ideas were dumb to
save face. There is however a related meta issue, one level up from fretting about
particular outcomes of dice rolls. Namely: people have an odd tendency to be okay with
letting single random outcomes decide their success, even when it's unnecessary.
This is common in role playing games. Often players will run headlong into
situations that kill them unless the dice come up a certain way. They take it
for granted that the dice roll happens, and focus on how to make that dice roll
survivable. Usually with a bit of forethought they could avoid their
strategy relying on a lucky save entirely. However it's generally not until the
mechanics are particularly punishing that players get smart about this. I
suspect that if this is common in gaming it's common in real life too: people get
so invested in singular outcomes because they've staked too much on them.
In short: If you're finding that single micro-scale dice rolls are key components of
your success, that's a strategy smell.&lt;/p&gt;
&lt;p&gt;So what's all this got to do with traders claiming their strategy is good but chance screwed them? Well, everything, of course. They're essentially the same problem. We have the same key features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An element of uncertainty which means that even good plans can be derailed by bad luck.&lt;/li&gt;
&lt;li&gt;No clear way to distinguish a bad plan with average luck from a good plan with bad luck.&lt;/li&gt;
&lt;li&gt;People blaming the outcomes of their bad plans on luck rather than their own skill.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And the solution is the same in both cases. You can criticize process, but if all you have to go on is outcomes, then scoring rules over multiple trials are the best you can do. I have to wonder whether the reason Eliezer didn't notice this problem is intractable is his insistence on a Bayesian frame. After all, the Brier score is &lt;a href="https://en.wikipedia.org/wiki/Frequentist_probability"&gt;essentially frequentist&lt;/a&gt; in attitude. The answer to this particular bug in the brain is to change your perspective.&lt;/p&gt;
&lt;h3&gt;Is It Really A Problem?&lt;/h3&gt;
&lt;p&gt;Changing our perspective might have significant benefits. Systematized winning is not an actionable definition. Most domains already have field-specific knowledge on how to win, and in aggregate these organized practices are called society. The most powerful engine of systematized winning developed thus far is civilization. Most people trying to explain the value of rationality privilege the hypothesis. They assume that there is such a thing as instrumental rationality, methods to systematically win over and above the usual practices of civilization. It would be a mistake to assume your audience will privilege the hypothesis with you. The first question you have to answer is why rationality at all. If someone asks:&lt;/p&gt;
&lt;blockquote&gt;&amp;ldquo;Look, if I go to college and get my degree, and I go start a traditional family with 4 kids, and I make 120k a year and vote for my favorite political party, and the decades pass and I get old but I'm doing pretty damn well by historical human standards; just by doing everything society would like me to, what use do I have for your 'rationality'? Why should I change any of my actions from the societal default?&amp;rdquo;&lt;/blockquote&gt;

&lt;p&gt;You must have an answer for them. Saying rationality is systematized winning is ridiculous. It ignores that systematized winning is the default; you need to do &lt;em&gt;more&lt;/em&gt; than that to be attractive. I think the strongest frame you can use to start really exploring the benefits of rationality is to ask yourself what advantage it has over societal defaults. When you give yourself permission to move away from the "systematized winning" definition, without the fear that you'll tie yourself in knots of paradox, it's then that you can really start to think about the subject concretely.&lt;/p&gt;
&lt;p&gt;If not "systematized winning", then what definition is suitable? I have my answer, but I'd prefer not to lose the chance to hear yours. If you agree with me, then I think you owe it to yourself to stop letting other people tell you what rationality is for a bit. Try to &lt;a href="http://yudkowsky.net/rational/virtues"&gt;name the void&lt;/a&gt; for yourself, then compare your answer to others. &lt;/p&gt;</content><category term="rationality"></category><category term="winning"></category><category term="sequences"></category></entry><entry><title>Schools Proliferating Without Practitioners</title><link href="https://www.thelastrationalist.com/schools-proliferating-without-practitioners.html" rel="alternate"></link><published>2018-10-25T00:00:00+02:00</published><updated>2018-10-25T00:00:00+02:00</updated><author><name>The Last Rationalist</name></author><id>tag:www.thelastrationalist.com,2018-10-25:/schools-proliferating-without-practitioners.html</id><summary type="html">&lt;p&gt;It's been more than a decade since Eliezer Yudkowsky started writing &lt;a href="https://www.readthesequences.com/"&gt;The Sequences&lt;/a&gt;. Lots of stuff has happened since then
in the realm of rationality research. Philip Tetlock &lt;a href="https://www.economist.com/books-and-arts/2015/09/26/unclouded-vision"&gt;creamed everyone else&lt;/a&gt;
in a competition to predict global events by building a platform to
measure forecasting ability. The &lt;a href="https://www.vox.com/science-and-health/2018/8/27/17761466/psychology-replication-crisis-nature-social-science"&gt;replication …&lt;/a&gt;&lt;/p&gt;</summary><content type="html">&lt;p&gt;It's been more than a decade since Eliezer Yudkowsky started writing &lt;a href="https://www.readthesequences.com/"&gt;The Sequences&lt;/a&gt;. Lots of stuff has happened since then
in the realm of rationality research. Philip Tetlock &lt;a href="https://www.economist.com/books-and-arts/2015/09/26/unclouded-vision"&gt;creamed everyone else&lt;/a&gt;
in a competition to predict global events by building a platform to
measure forecasting ability. The &lt;a href="https://www.vox.com/science-and-health/2018/8/27/17761466/psychology-replication-crisis-nature-social-science"&gt;replication crisis&lt;/a&gt; hit, showing tons of
studies in psychology, behavioral economics, and related disciplines to be total
hocus. &lt;a href="https://www.effectivealtruism.org/articles/introduction-to-effective-altruism/"&gt;Effective Altruism&lt;/a&gt; splintered off as a movement from rationality,
raising millions of dollars for charity and fundamentally changing the way many
wealthy philanthropists, bankers, computer programmers, and other high income
earners think about their contributions. You know what didn't change much? The
prototype of Bayesian Rationality put forth by Eliezer. It's not that there's
nothing to update. Eliezer's sketch was a rough, unpolished thing that he
actively invited his readers to improve upon and iterate. Despite that, the top
'rationality textbook' a newcomer to the LessWrong school can expect to be
recommended in 2018 is still &lt;em&gt;The Sequences&lt;/em&gt;, with a few additions and light
editing. You could say that the LessWrong rationality community, figuratively
and literally, has failed to update.&lt;/p&gt;
&lt;h2&gt;Bayes Rule and The Failure To Update&lt;/h2&gt;
&lt;p&gt;Ironically enough, Bayes Theorem is something that the community at large has
updated on without really officially acknowledging it. None of those updates
have really backpropagated into &lt;em&gt;The Sequences&lt;/em&gt;, however.&lt;/p&gt;
&lt;p&gt;I suspect most of my readers are already familiar with Bayes Theorem, but if you're not I certainly shouldn't
be the one to explain it to you. &lt;a href="https://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem/"&gt;This explanation from Better Explained&lt;/a&gt; is probably your best bet to get a quick handle on the
concept.&lt;/p&gt;
&lt;p&gt;I know it's been a while since most of you have read &lt;em&gt;The Sequences&lt;/em&gt; (if ever), so a few quick reminders are
in order. You probably remember that Eliezer spends a lot of time talking about Bayes Theorem and Bayesian
Reasoning and why Frequentist interpretations of statistics are insane. You might not remember that he's
spending all that time talking about it because he believes Bayes is the centerpiece of his philosophy. For
example in his series of essays on Quantum Physics, Eliezer is trying to force a confrontation between the
reader's intuitions about science and their intuitions about Bayesian inference:&lt;/p&gt;
&lt;blockquote&gt;Okay, Bayes-Goggles back on. Are you really going to believe that large parts of the wavefunction disappear when you can no longer see them? As a result of the only non-linear non-unitary non-differentiable non-CPT-symmetric acausal faster-than-light informally-specified phenomenon in all of physics? Just because, by sheer historical contingency, the stupid version of the theory was proposed first?

Are you going to make a major modification to a scientific model, and believe in zillions of other worlds you can’t see, without a defining moment of experimental triumph over the old model?

Or are you going to reject probability theory?

Will you give your allegiance to Science, or to Bayes?&lt;/blockquote&gt;

&lt;p&gt;(Source: &lt;a href="https://www.readthesequences.com/TheDilemmaScienceOrBayes"&gt;The Dilemma: Science or Bayes?&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;Eliezer speaks of his Bayesian Enlightenment, and how it made him realize his entire approach to 'rationality'
had been deeply flawed; he hadn't been holding himself to nearly enough rigor:&lt;/p&gt;
&lt;blockquote&gt;But it was Probability Theory that did the trick. Here was probability theory, laid out not as a clever tool, but as The Rules, inviolable on pain of paradox. If you tried to approximate The Rules because they were too computationally expensive to use directly, then, no matter how necessary that compromise might be, you would still end up doing less than optimal. Jaynes would do his calculations different ways to show that the same answer always arose when you used legitimate methods; and he would display different answers that others had arrived at, and trace down the illegitimate step. Paradoxes could not coexist with his precision. Not an answer, but the answer.

And so—having looked back on my mistakes, and all the an-answers that had led me into paradox and dismay—it occurred to me that here was the level above mine.

I could no longer visualize trying to build an AI based on vague answers— like the an-answers I had come up with before—and surviving the challenge.&lt;/blockquote&gt;

&lt;p&gt;(Source: &lt;a href="https://www.readthesequences.com/My-Bayesian-Enlightenment"&gt;My Bayesian Enlightenment&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;And Eliezer provides short notes on how to think better with Bayes, clearly showing he intends it to be &lt;em&gt;used&lt;/em&gt;
by the reader, not just as a 'model' or 'metaphor' for correct thinking:&lt;/p&gt;
&lt;blockquote&gt;This isn’t the only way of writing probabilities, though. For example, you can transform probabilities into odds via the transformation O = (P/(1−P)). So a probability of 50% would go to odds of 0.5/0.5 or 1, usually written 1:1, while a probability of 0.9 would go to odds of 0.9/0.1 or 9, usually written 9:1. To take odds back to probabilities you use P = (O/(1 + O)), and this is perfectly reversible, so the transformation is an isomorphism—a two-way reversible mapping. Thus, probabilities and odds are isomorphic, and you can use one or the other according to convenience.

For example, it’s more convenient to use odds when you’re doing Bayesian updates. Let’s say that I roll a six-sided die: If any face except 1 comes up, there’s a 10% chance of hearing a bell, but if the face 1 comes up, there’s a 20% chance of hearing the bell. Now I roll the die, and hear a bell. What are the odds that the face showing is 1? Well, the prior odds are 1:5 (corresponding to the real number 1/5 = 0.20) and the likelihood ratio is 0.2:0.1 (corresponding to the real number 2) and I can just multiply these two together to get the posterior odds 2:5 (corresponding to the real number 2/5 or 0.40). Then I convert back into a probability, if I like, and get (0.4/1.4) = 2/7 = ∼29%.&lt;/blockquote&gt;

&lt;p&gt;(Source: &lt;a href="https://www.readthesequences.com/Zero-And-One-Are-Not-Probabilities"&gt;0 and 1 Are Not Probabilities&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;All of this is to support a claim that should be fairly obvious, but I suspect many readers will try to
wriggle out of if I state it without extensive justification: Bayesian methods are a core feature of Eliezer
Yudkowsky's version of rationality. You might even say that Eliezer's variant could be called
"Bayesian Rationality". It's not a 'technique' or a 'tool', to Eliezer Bayes is &lt;em&gt;the law&lt;/em&gt;, the irrefutable
standard that provides a precise unchanging figure for exactly how much you should update in response to a new
piece of evidence. Bayes shows you that there is in fact a right answer to this question,
and you're almost certainly getting it wrong.&lt;/p&gt;
&lt;p&gt;This in turn points toward the uncomfortable fact that Bayes does not seem to have
helped the Bayesian Rationalists develop useful approximations of correct inference.
In fact, it's not so much that we started with primitive approximations and then improved them.
Rather, the Bayesian feature of Eliezer's philosophy seems to have left no conceptual descendants in the meme pool.
For example, the Center For Applied Rationality's 2017 handbook does not include the
phrase "Bayes Theorem" even once. It's taken by the current cohort as something
of a status symbol, a neat novelty you can claim to have knowledge of to boost
prestige.&lt;/p&gt;
&lt;p&gt;Meanwhile, Philip Tetlock figured out how to get humans to approximate Bayes Theorem
in their predictive powers. He used a score function called the &lt;em&gt;Brier Score&lt;/em&gt; to
measure predictive strength from participants in forecasting tournaments. This
let him figure out the rules of reason which humans can actually implement to
be very good at predicting the future. In his book &lt;em&gt;Superforecasting&lt;/em&gt;, Bayes
Theorem gets a brief aside to explain that it's largely irrelevant to his top
performers success. In fact, Tetlock takes the reader aside for a bit of myth busting,
stating explicitly that Bayes Theorem is not necessary for the level of ability
superforecasters demonstrate:&lt;/p&gt;
&lt;blockquote&gt;This may cause the math-averse to despair. Do forecasters really have to understand, memorize, and use a―shudder―algebraic formula? For you, I have good news: no, you don't.&lt;/blockquote&gt;

&lt;p&gt;He goes on to explain that while forecasters might occasionally use Bayes Theorem to ground their predictions,
empirically its use is not necessary for strong performance. Tetlock uses
the example of Tim Minto, a superforecaster who understands the basics of Bayes Theorem and used it an
astonishing zero times while updating and considering his forecasts. The mental movements
that Bayes Theorem suggests for updating your beliefs are incredibly useful and crucially important
to good performance, but the equation itself seems to be of limited benefit in
real world prediction tournaments. Even in describing superforecasters as a whole,
Tetlock says that 'many' know about Bayes Theorem, implying the number does not
even constitute a majority.&lt;/p&gt;
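&lt;p&gt;The Brier score itself is just the mean squared error between probabilistic forecasts and binary outcomes; lower is better and zero is perfect. Here's a minimal sketch (the forecasters and numbers are made up for illustration, not from Tetlock's data):&lt;/p&gt;

```python
# Brier score for binary outcomes: mean squared error between forecast
# probabilities and what actually happened (1 = occurred, 0 = didn't).

def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A calibrated, decisive forecaster beats a hedger who says 50% to everything.
outcomes  = [1, 0, 1, 1, 0]
confident = [0.9, 0.1, 0.8, 0.7, 0.2]
hedger    = [0.5] * 5

print(round(brier_score(confident, outcomes), 3))  # 0.038
print(round(brier_score(hedger, outcomes), 3))     # 0.25
```

&lt;p&gt;This is what makes the score useful as a training signal: unlike "did you apply Bayes correctly?", it can be computed from nothing but your stated probabilities and the observed outcomes.&lt;/p&gt;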
&lt;p&gt;There is an entire fascinating discussion we could have about what it means for
Tetlock's measurement based, empirical perspective to accomplish the goal that
Eliezer's rational, model based perspective didn't. But we shouldn't veer into
too many subjects in one essay; it makes for messy reading.&lt;/p&gt;
&lt;h2&gt;So How About That Replication Crisis?&lt;/h2&gt;
&lt;p&gt;The case of Bayes underscores a larger point about the way this sort of
thing is treated by LW-flavor rationalists. The &lt;a href="https://www.vox.com/science-and-health/2018/8/27/17761466/psychology-replication-crisis-nature-social-science"&gt;replication crisis&lt;/a&gt;
destroyed a lot of things we thought we knew about human psychology in a very
short period of time. Naively, I would expect this to be a hair-catches-fire
moment for the community. Since a lot of these findings were treated as important
in &lt;em&gt;The Sequences&lt;/em&gt;, if people are actually practicing stuff based on
these ideas then their sudden deletion from the scientific canon should have
caused quite a bit of chaos and reshuffling. Instead, there was almost no
reaction, and to the extent a reaction occurred it mostly treated this issue
as a spectator sport rather than something which applies to LessWrongers
personally.&lt;/p&gt;
&lt;p&gt;The replication crisis provides us with a natural experiment, and I invite you to
consider what would happen if it were performed in some other discipline. Imagine
for example what the reaction would be in medicine if it were found that 3/4 of
pharmaceutical drugs were actually placebo or had effect sizes so small that they
were nearly indistinguishable from noise. Doctors would be so shaken up they'd
go through the five stages of grief. It would provoke a great deal of
argument, drama, vicious denials, and, once the dust settled, grim
acceptance of the new reality. And yet:&lt;/p&gt;
&lt;blockquote&gt;The crisis intensified in 2015 when a group of psychologists, which included Nosek, published a report in Science with evidence of an overarching problem: When 270 psychologists tried to replicate 100 experiments published in top journals, only around 40 percent of the studies held up. The remainder either failed or yielded inconclusive data. And again, the replications that did work showed weaker effects than the original papers. The studies that tended to replicate had more highly significant results compared to the ones that just barely crossed the threshold of significance. (Resnick, 2018)&lt;/blockquote&gt;

&lt;p&gt;This is fairly close to the situation we find ourselves in with the bias literature.
But nobody seems particularly shaken, and why should they? Our naive impression is
just that, naivete. The straightforward conclusion is that if deleting knowledge
from the canon causes no reaction, then it clearly wasn't important to people.
And the straightforward conclusion from that postulate is that whatever rationalists
do, the practice isn't based on the bias literature. And the practice presumably
isn't based on Bayes either. After all, if people were doing stuff based on
Bayesian Inference they wouldn't need Philip Tetlock to tell them the equation
itself is nearly useless as a supplement to most human reason. Yet a newcomer to the
community would get the impression that the bias literature and Bayes Theorem
are central features.&lt;/p&gt;
&lt;p&gt;Apathy implies inaction, which implies something very strange about Bayesian
Rationality. In &lt;em&gt;The Sequences&lt;/em&gt; Eliezer warns against &lt;a href="https://www.readthesequences.com/Schools-Proliferating-Without-Evidence"&gt;schools proliferating
without evidence&lt;/a&gt;
and how you need measurement, testing, statistics, etc for your organized practice
of X to mean anything. You are probably expecting me to tell you that Bayesian
Rationality is a school proliferating without evidence, but the conclusion is
so much odder than that. More than just proliferating without evidence, Bayesian
Rationality seems to be &lt;em&gt;a school proliferating without practitioners&lt;/em&gt;. It still
memetically replicates but nobody is doing anything directly based on the ideas
it espouses.&lt;/p&gt;
&lt;h2&gt;The Sequences: Trapped In Amber&lt;/h2&gt;
&lt;p&gt;If you were to join the LessWrong rationalist community in 2018, you would probably
be told to read &lt;em&gt;The Sequences&lt;/em&gt;. The more time passes since their publication date,
the less sensible this seems. Certain portions are timeless; other parts could do
with some revision. More than just things which are outdated, there's plenty of
new developments in the past decade that could be &lt;em&gt;added&lt;/em&gt;. It's not as though Bayes
Theorem was somehow invalidated; it is a mathematical law, after all. Rather, we've
since learned it's possible to iterate on and become better at inference using
the Brier Score. Other possible inclusion candidates exist, such as Jonathan Haidt's
research on the six moral foundations.&lt;/p&gt;
&lt;p&gt;It's very difficult for our collective understanding to advance when the introductory
material starts people where we were in 2009. The knowledge of some LessWrongers is
quite deep, but the conversations they can have with that knowledge in public are
bottlenecked by a lack of common knowledge with other potential participants. In
my next post I'll show how this dynamic came about, and what it looks like when a
community actively updates its knowledge in response to new information and events.&lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;(0): Resnick, Brian. (2018, August 27). &lt;em&gt;More social science studies just failed to replicate. Here's why this is good.&lt;/em&gt; Retrieved from &lt;a href="https://www.vox.com/science-and-health/2018/8/27/17761466/psychology-replication-crisis-nature-social-science"&gt;https://www.vox.com/science-and-health/2018/8/27/17761466/psychology-replication-crisis-nature-social-science&lt;/a&gt;&lt;/p&gt;</content><category term="bayes"></category><category term="sequences"></category><category term="cfar"></category><category term="org theory"></category><category term="common knowledge"></category><category term="community"></category><category term="effective action"></category></entry><entry><title>Facebook, The Rodents, and The Common Knowledge Machine</title><link href="https://www.thelastrationalist.com/facebook-the-rodents-and-the-common-knowledge-machine.html" rel="alternate"></link><published>2018-10-16T10:48:00+02:00</published><updated>2018-10-16T10:48:00+02:00</updated><author><name>The Last Rationalist</name></author><id>tag:www.thelastrationalist.com,2018-10-16:/facebook-the-rodents-and-the-common-knowledge-machine.html</id><summary type="html">&lt;p&gt;The dark magicians at Facebook have cast a hex on the rationalist community.&lt;/p&gt;
&lt;p&gt;Their hex is not the only hex, and it is not necessarily the hex of most
consequence. It is however one of the more obvious, legible, and objectively
harmful instances of witchcraft. This post is about how …&lt;/p&gt;</summary><content type="html">&lt;p&gt;The dark magicians at Facebook have cast a hex on the rationalist community.&lt;/p&gt;
&lt;p&gt;Their hex is not the only hex, and it is not necessarily the hex of most
consequence. It is however one of the more obvious, legible, and objectively
harmful instances of witchcraft. This post is about how several people put up
actual money to try counteracting the forces of evil and failed to make a dent.
We will look at the problem they tried to solve, what was attempted, why it
didn't work, and what the persistent nature of this problem says about the
rationalist community.&lt;/p&gt;
&lt;h2&gt;The Problem&lt;/h2&gt;
&lt;p&gt;Facebook is in the uncomfortable position of selling a product that makes its
users worse off. A series of &lt;a href="https://www.npr.org/sections/health-shots/2016/09/07/492871024/facebook-and-mortality-why-your-incessant-joy-gives-me-the-blues"&gt;increasingly&lt;/a&gt; &lt;a href="https://hbr.org/2017/04/a-new-more-rigorous-study-confirms-the-more-you-use-facebook-the-worse-you-feel"&gt;rigorous&lt;/a&gt; &lt;a href="https://www.telegraph.co.uk/news/2018/10/10/facebook-instagram-should-pay-mental-health-levy-nhs-damage/"&gt;studies&lt;/a&gt; have shown fairly conclusively that Facebook is bad for you. And that's
just on traditional mental health lines. Facebook and other social media are also
implicated in the hell that is our current political landscape. Between &lt;a href="https://motherboard.vice.com/en_us/article/vv73qj/facebooks-filter-bubble"&gt;filter
bubbles&lt;/a&gt;,
&lt;a href="https://medium.com/@richardnfreed/the-tech-industrys-psychological-war-on-kids-c452870464ce"&gt;psychological warfare against users&lt;/a&gt;,
&lt;a href="https://darkpatterns.org/"&gt;dark patterns&lt;/a&gt;, and
&lt;a href="https://medium.com/startup-grind/how-the-trump-campaign-built-an-identity-database-and-used-facebook-ads-to-win-the-election-4ff7d24269ac"&gt;the opportunity for 3rd parties to turn this apparatus of control into political outcomes&lt;/a&gt; what we have here is probably the #1 mindkiller in America. These things are more than just media hype. I've bought political ads
on Facebook myself, and the taste of targeting magic I got access to with small
amounts of spend gives me no doubt that deep pockets can get incredible powers.&lt;/p&gt;
&lt;p&gt;But this post isn't really about Facebook, it's about rationalism. All of this
is background to the fact that rationalists, especially ones that live in the
Bay and Seattle Areas, use Facebook as their primary communication tool. This is
a problem because when you zoom out to "raising the sanity waterline", a core
rationalist mission, it becomes fairly obvious that hosting your communication
on Facebook is a bit like the local health club holding its meetings at McDonald's.&lt;/p&gt;
&lt;p&gt;Then there are the utilitarian concerns. &lt;a href="https://thezvi.wordpress.com/2017/04/22/against-facebook/"&gt;Facebook is not actually a reliable
messaging service&lt;/a&gt;
for the purpose rationalists put it to. Namely, having a reliable forum to
exchange messages on with the expectation other people will see them. Zvi
already breaks down the problem here wonderfully, but the short version is that
Facebook employs tactics that maximize engagement by making it artificially
difficult to know when you should stop engaging with the forum. That means
obfuscation of conversational flow, along with enforced selective attention on
what messages to pay attention to.&lt;/p&gt;
&lt;p&gt;All of this is to say Facebook being the stable equilibrium for local
rationalist communities to discuss things on the Internet is a real problem.
Plenty of people have already written about such problems before. Scott
Alexander's &lt;a href="http://slatestarcodex.com/2014/07/30/meditations-on-moloch/"&gt;Meditations on Moloch&lt;/a&gt; and Eliezer Yudkowsky's &lt;a href="https://equilibriabook.com/toc/"&gt;Inadequate Equilibria&lt;/a&gt; are two
particularly notable examples. The subject of this post, the &lt;em&gt;real&lt;/em&gt; subject, is
an attempt to implement a solution advocated by both authors. It is
about why this attempt failed and how it could be done better next time. And
it's about why the failure should concern us.&lt;/p&gt;
&lt;h2&gt;Actuator: An Attempted Solution&lt;/h2&gt;
&lt;p&gt;Enter &lt;a href="https://gitlab.com/vrtrahan/actuator"&gt;Actuator&lt;/a&gt;. Actuator is an attempt
to implement a concept &lt;a href="http://archive.is/tnajt"&gt;described by both Scott Alexander and Eliezer
Yudkowsky&lt;/a&gt; as a potential solution to persistent
coordination problems. The idea is fairly simple: you set up a notice that you
want a certain thing to happen, and then other people can sign up with their
email to indicate they want the thing to happen too. Once you reach a certain
threshold of signups, the service sends out a mass mail letting everyone know
the threshold has been reached, with details on who to organize with. It's
clever, already shown to work by sites like IndieGoGo and Kickstarter, and
fairly simple to implement.&lt;/p&gt;
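&lt;p&gt;The mechanics are simple enough to sketch in a few lines. This is my reconstruction of the concept from the description above, not the actual Actuator code:&lt;/p&gt;

```python
# Sketch of an assurance-contract campaign: collect email signups for a
# proposed action, and once the threshold is hit, notify everyone at once.

class Campaign:
    def __init__(self, title, threshold):
        self.title = title
        self.threshold = threshold
        self.emails = set()

    def sign_up(self, email):
        self.emails.add(email)
        if len(self.emails) >= self.threshold:
            self.notify_all()

    def notify_all(self):
        # The real service would send a mass mail with organizing details;
        # here we just print the notice.
        for email in self.emails:
            print(f"To {email}: '{self.title}' reached {self.threshold} signups!")

c = Campaign("Coordinated Facebook Exodus", threshold=3)
for addr in ["a@example.com", "b@example.com", "c@example.com"]:
    c.sign_up(addr)
```

&lt;p&gt;The design appeal is that nobody has to move first: each signup is conditional on everyone else's, which is exactly the structure Kickstarter-style sites exploit.&lt;/p&gt;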
&lt;p&gt;Obviously if this existed it could be used to get people to commit to
switching away from Facebook. Discord user Diffractor#2490 felt the idea was
worth a try and put some of his own money in to start a pool that would pay out
a bounty to anyone that could make the software. This pool ended up with $200.
The plan looked something like this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Get software written&lt;/li&gt;
&lt;li&gt;Make people aware of the software&lt;/li&gt;
&lt;li&gt;Get lots of people to sign up to move&lt;/li&gt;
&lt;li&gt;Send out the email and watch a new network form&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;An anonymous person wrote the software, collected the money, and &lt;a href="http://actuator.herokuapp.com/"&gt;showed it to
some people&lt;/a&gt;, but nothing ever happened with it.&lt;/p&gt;
&lt;h2&gt;Why It Failed&lt;/h2&gt;
&lt;p&gt;So why didn't it work? Following some leads on Tumblr I tracked down the author
of the software to ask his opinion, and I ended up with the following picture.&lt;/p&gt;
&lt;h3&gt;Bad Incentives&lt;/h3&gt;
&lt;p&gt;The author admits that his motivation for writing it was the bounty. Once he
had developed the raw features without any particular polish he showed up to
collect the money. With cash in hand and no more money forthcoming, other
concerns distracted him and he stopped working on it. Judging by the GitLab
repository, only one person has worked on it seriously since.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://archive.is/qv8SE"&gt;The lesson here&lt;/a&gt; is that if you want a bounty to achieve a particular outcome,
like "website that is working and looks good and has people using it" you should
split the bounty up into milestones. Imagine if instead of raising 200 and
giving it all out to the first person with raw features, they'd raised 500 and
split it up. The first 200 to whoever can make the functionality with reasonable
tools (you could probably even specify particular tools) and code quality. The
next 200 to whoever makes a style for it that actually looks good. Hold a
contest, best one gets 200. Then the last 100 to whoever can write some decent
documentation for the thing. Setup instructions and the like. This could all of
course be the same person, but it doesn't have to be. Then once it's done, you
have a good chance people might actually use it. If you had more money you could
hand out a bounty to the first instance that has n active users for x weeks, or
whatever other scheme. The point is that the primary failure mode here was not
considering the full chain of events that are necessary to bring the concept to
fruition, and putting those into the bounty payouts.&lt;/p&gt;
&lt;h3&gt;Many People Enjoy Using Facebook&lt;/h3&gt;
&lt;p&gt;In spite of what I wrote above, the author feels that there's a deeper problem
with the whole idea: nobody wants it. He suspects this is a product idea that
sounds cool but no one actually wants to use. I disagree, but I think that a
related problem helped sink the Facebook idea: there are plenty of people who
absolutely endorse their use of Facebook. Heck, some of the people involved here
work there. I don't think enough effort was put into determining whether there
was enough latent pain around this problem to fuel a solution. It doesn't matter how bad
a problem is if nobody wants to solve it.&lt;/p&gt;
&lt;h3&gt;Meta-Coordination Issues&lt;/h3&gt;
&lt;p&gt;From a systems perspective, for people to interact with Actuator they need to
hear about Actuator. Getting people to buy into the Actuator plan and concept
is itself a thorny coordination and Common Knowledge problem. This didn't seem
to be factored into the strategy so almost nobody ended up using Actuator. Some
kind of marketing campaign was probably necessary to bring the project to life.&lt;/p&gt;
&lt;h3&gt;Not Clear How The Move Would Happen&lt;/h3&gt;
&lt;p&gt;A lot of necessary questions just plain weren't answered here, such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Where are people moving to?&lt;/li&gt;
&lt;li&gt;How do we know people will actually make the move?&lt;/li&gt;
&lt;li&gt;Who will be the designated organizer that everyone uses as a Schelling point
to smooth the process along?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Worse still, you had a truly low-effort entry for the Facebook exodus.&lt;/p&gt;
&lt;div style="margin-left: 5%;"&gt;
&lt;p&gt;&amp;ldquo;&lt;b&gt;Title&lt;/b&gt;: Coordinated Facebook Exodus&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Description&lt;/b&gt;: I agree to leave facebook if ten million other people agree to leave with me.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Threshold&lt;/b&gt;: 10000000 &lt;b&gt;Verified signups&lt;/b&gt;: 7&amp;rdquo;&lt;/p&gt;&lt;/div&gt;

&lt;p&gt;For whatever reason nobody felt compelled to make a better one than this,
probably because nobody else seemed like they were putting in effort either.&lt;/p&gt;
&lt;h2&gt;My Idea: Use The Costly Signaling, Luke&lt;/h2&gt;
&lt;p&gt;Moving beyond the poor implementation details, let's say you had a perfectly
working version of Actuator. It's still not clear if people will follow through
on their promise when the mail gets sent. I have an obvious rules patch for
this. Mix a little &lt;a href="https://www.beeminder.com/"&gt;Beeminder&lt;/a&gt; into the equation.
Have people pledge money that they will do the thing they've signed up for,
and if they don't, Actuator charges them. If they do it and provide proof,
they keep their money. If a certain amount of time passes and not
enough signups happen, nobody gets charged. This solves the biggest problem this
system has: you don't really know how serious people actually are. If people
sign up for a coordinated deletion of their Facebook accounts, and each person
puts 20 dollars behind that, you can be fairly sure when the threshold gets
hit that accounts will actually be deleted.&lt;/p&gt;
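&lt;p&gt;A sketch of how this pledge-escrow patch might work. This is a hypothetical design, not an existing Actuator or Beeminder API:&lt;/p&gt;

```python
# Pledge-escrow variant: signing up charges a pledge into escrow. If the
# threshold is never met, everyone is refunded; once it is met, only people
# who prove follow-through get their money back, and defectors forfeit.

class PledgeCampaign:
    def __init__(self, threshold, pledge=20):
        self.threshold = threshold
        self.pledge = pledge
        self.signups = set()
        self.escrow = {}  # email -> dollars held

    def sign_up(self, email):
        self.signups.add(email)
        self.escrow[email] = self.pledge  # charged into escrow

    def threshold_met(self):
        return len(self.signups) >= self.threshold

    def submit_proof(self, email):
        # Proof of follow-through (e.g. a deleted account) refunds the pledge.
        if self.threshold_met() and email in self.escrow:
            return self.escrow.pop(email)
        return 0

    def settle(self):
        # Deadline passed. Threshold never met: refund everyone in full.
        # Otherwise, whatever is still in escrow is forfeited by defectors.
        remaining, self.escrow = self.escrow, {}
        if not self.threshold_met():
            return {"refunded": sum(remaining.values()), "forfeited": 0}
        return {"refunded": 0, "forfeited": sum(remaining.values())}

c = PledgeCampaign(threshold=2)
c.sign_up("a@example.com")
c.sign_up("b@example.com")
print(c.submit_proof("a@example.com"))  # 20: a followed through, refunded
print(c.settle())                       # b never proved it and forfeits 20
```

&lt;p&gt;The escrow makes signups costly signals rather than cheap talk, which is the whole point of the patch.&lt;/p&gt;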
&lt;h2&gt;But What About The Mouse?&lt;/h2&gt;
&lt;p&gt;An important question left unanswered is what it says about the community for
this project to fail. &lt;a href="http://archive.is/0eTLM"&gt;As Somnilogical points out&lt;/a&gt; this
is a great idea that was barely tried. More important, perhaps, is how little
anguish there actually is about the use of Facebook. &lt;a href="https://thezvi.wordpress.com/2017/04/22/against-facebook/"&gt;Zvi wrote a series of posts about it&lt;/a&gt;, and
at least the Seattle community tried coordinating a move to Discord (with some
success!). But the fact that this &lt;em&gt;clearly suboptimal&lt;/em&gt; state was left
alone by basically every actor involved, including Eliezer Yudkowsky, who
prefers drafting his posts on Facebook, is absurd. It says that the
people who populate the rationalist community aren't serious about
ideas like raising the sanity waterline or putting in hard work to get better
outcomes on things that are obvious contributors to mental illness. A full
analysis of this phenomenon and what caused it is best left for another time.
However a good starting point is &lt;a href="https://www.greaterwrong.com/posts/wmEcNP3KFEGPZaFJk/the-craft-and-the-community-a-post-mortem-and-resurrection"&gt;Bendini's postmortem of the Bay Area community&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Summary &amp;amp; Lessons Learned&lt;/h2&gt;
&lt;p&gt;The rationalist community's use of Facebook is an inadequate scenario. A group
of people put real money into making a simple tool that could enable
transitions from inadequate equilibria to better, 'adequate' ones. This tool was
made in the basic sense, but had none of the necessary polishing or marketing
work done because the money only stipulated raw tool creation. Better planning
and project management could have avoided this outcome. The fact that this
outcome was collectively allowed to stand implies serious issues with the
foundational makeup of the people within the rationalist community.&lt;/p&gt;
&lt;h3&gt;Lessons Learned&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;When paying a bounty for an outcome, make sure you're not just paying for an intermediate step. If necessary, split your bounty into multiple parts for different milestones.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Model and anticipate the whole process of bringing a concept to life, as much
as you can. Start at the first conceivable step (perhaps you making the list of
conceivable steps) and then list out everything that has to happen thereafter for the thing to work.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If you're trying to organize people to do something, make sure you have a solid basis to believe that people are willing to organize to make the thing happen. Don't just assume that because you see a problem other people agree with you.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Don't leave critical steps up to chance unless you have to. Make sure to have some idea of whose responsibility it is to do something like "draft a compelling notice asking for others to leave Facebook".&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Rationalists are not particularly better at resisting bad equilibria and solving coordination problems than other communities. This is obviously concerning.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;</content><category term="community"></category><category term="social media"></category><category term="after action report"></category><category term="coordination issues"></category></entry><entry><title>Eliezer Yudkowsky vs. The Sequences</title><link href="https://www.thelastrationalist.com/eliezer-yudkowsky-vs-the-sequences.html" rel="alternate"></link><published>2018-10-13T17:36:00+02:00</published><updated>2018-10-13T17:36:00+02:00</updated><author><name>The Last Rationalist</name></author><id>tag:www.thelastrationalist.com,2018-10-13:/eliezer-yudkowsky-vs-the-sequences.html</id><summary type="html">&lt;p&gt;There's a certain grand irony to the whole thing. Eliezer writes this sprawling
intellectual epic called &lt;em&gt;The Sequences&lt;/em&gt; that puts a moral spin on General Semantics, Popperian Science, Behavioral Economics, Evolutionary Psychology, Bayesian Statistics, putting it all together into one cohesive worldview concluding that the world is about to be …&lt;/p&gt;</summary><content type="html">&lt;p&gt;There's a certain grand irony to the whole thing. Eliezer writes this sprawling
intellectual epic called &lt;em&gt;The Sequences&lt;/em&gt; that puts a moral spin on General Semantics, Popperian Science, Behavioral Economics, Evolutionary Psychology, Bayesian Statistics, putting it all together into one cohesive worldview, concluding that the world is about to be destroyed by people's inability to absorb the astounding facts of creation. He tells people that &lt;a href="https://www.greaterwrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the-merely-real"&gt;you have to embrace the real world you live in&lt;/a&gt; and &lt;a href="https://www.greaterwrong.com/posts/5JDkW4MYXit2CquLs/your-strength-as-a-rationalist"&gt;they should be more confused by fiction than reality&lt;/a&gt;. Then, almost as a joke, he decides to write &lt;a href="http://www.hpmor.com/"&gt;this Harry Potter fanfiction&lt;/a&gt; and it &lt;a href="https://tvtropes.org/pmwiki/pmwiki.php/Main/RationalFic"&gt;attracts a deluge of people who are in love with the aesthetics, tropes, and devices of fiction&lt;/a&gt;. They love cute stories about how stuff works,
novelty, cleverness taken to its limits, and they turn the edifice of this
'rationality' into &lt;a href="https://srconstantin.wordpress.com/2017/08/08/the-craft-is-not-the-community/"&gt;a social club for a particular sort of brilliant slacker&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The grand master of this club is of course Eliezer himself.&lt;/p&gt;
&lt;hr&gt;

&lt;div class="juxtapose" style="display: flex; flex-direction: row;"&gt;
&lt;div class="jux"&gt;
&lt;p&gt;&lt;span style="font-size: 1.8em;"&gt;&amp;ldquo;&lt;/span&gt;Rational!Harry has already taken the Unbreakable Vow. Rational!Voldemort, especially if he doesn't have his horcrux, and knowing that there's no negotiated way to escape the confrontation, will set up a deadman switch that destroys the world in the event of his own death, and tell Rational!Harry so in Parseltongue. I won't call it checkmate, but Rational!Harry cannot do things past this point that run the *risk* of destroying the world, which is a pretty severe condition. He has to be certain he's disabled Voldemort's kill-switch. Voldemort has also already thought of that, and will tell Harry in Parseltongue that he has set up more than one kill-switch, but not say how many, and that he knows he's obliviated at least one of them from his own memory, but he doesn't know how many.&lt;/p&gt;

&lt;p&gt;If you want to continue the story past this point, it's plausible Harry could walk into the Hall of Prophecies and find a list of all Voldemort's kill-switches, or a further prophecy that the world will definitely end if Harry dies. Which, of course, Harry could also tell Voldemort in Parseltongue. I think at that point Voldemort literally screams in frustration, but he still refuses to take down his own kill-switch. Past *that* point I suspect both of them have primarily switched to mentally thinking of it as a fight against the Voice of Time.&lt;/p&gt;

&lt;p&gt;In anything more closely resembling a straight-up fight with no prophecies or blackmail, the older Tom Riddle wins. The younger Tom Riddle knows this at this point, and his first priority is to run, not fight. If Harry can figure out the Mirror inside a month, he has a pretty solid refuge and one where Time can be made to run faster. The older Tom Riddle may or may not respect the younger Tom Riddle enough to anticipate any strategy like that; if he's been vanished away before Harry slew all his Death Eaters, he doesn't quite know what he's dealing with yet.&lt;span style="font-size: 1.8em;"&gt;&amp;rdquo;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;(Source: &lt;a class="plainlink" href="https://www.reddit.com/r/HPMOR/comments/9lt36n/canonharry_and_rationalharry_vs_canonvoldemort/e7abn3y/"&gt;Canon!Harry and Rational!Harry vs. Canon!Voldemort and Rational!Voldemort&lt;/a&gt;)&lt;/p&gt;
&lt;/div&gt;
&lt;div class="jux"&gt;
&lt;p&gt;&lt;span style="font-size: 1.8em;"&gt;&amp;ldquo;&lt;/span&gt;I have already remarked that nothing is inherently mysterious—nothing that actually exists, that is. If I am ignorant about a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon; to worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance; a blank map does not correspond to a blank territory, it is just somewhere we haven’t visited yet, etc. etc...&lt;/p&gt;

&lt;p&gt;Which is to say that everything—everything that actually exists—is liable to end up in “the dull catalogue of common things”, sooner or later.&lt;/p&gt;

&lt;p&gt;Your choice is either:&lt;/p&gt;

&lt;p&gt;Decide that things are allowed to be unmagical, knowable, scientifically explicable, in a word, real, and yet still worth caring about;&lt;/p&gt;

&lt;p&gt;Or go about the rest of your life suffering from existential ennui that is unresolvable.&lt;/p&gt;

&lt;p&gt;(Self-deception might be an option for others, but not for you.)&lt;/p&gt;

&lt;p&gt;This puts quite a different complexion on the bizarre habit indulged by those strange folk called scientists, wherein they suddenly become fascinated by pocket lint or bird droppings or rainbows, or some other ordinary thing which world-weary and sophisticated folk would never give a second glance.&lt;/p&gt;

&lt;p&gt;You might say that scientists—at least some scientists—are those folk who are in principle capable of enjoying life in the real universe.&lt;span style="font-size: 1.8em;"&gt;&amp;rdquo;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;(Source: &lt;a class="plainlink" href="https://www.greaterwrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the-merely-real"&gt;Joy in the Merely Real&lt;/a&gt;)&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;If you look at what Eliezer &lt;a href="https://www.reddit.com/user/EliezerYudkowsky/comments/"&gt;spends most of his time discussing on Reddit&lt;/a&gt; as of this article's publish date, it's a rarely interrupted mix of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Lengthy Harry Potter theorizing&lt;/li&gt;
&lt;li&gt;Cryptocurrency speculation&lt;/li&gt;
&lt;li&gt;Defending his authorship of HPMOR&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It's frankly bizarre that the guy pounds out these elaborate word-of-god amendments to his giant Harry Potter fiction while insisting that only people who can get over magic have a shot at living in the real world. For the sake of fairness,
&lt;a href="https://twitter.com/ESYudkowsky"&gt;his twitter isn't quite so dorky&lt;/a&gt; but the
Reddit history really makes me wonder what he does all day.&lt;/p&gt;
&lt;hr&gt;

&lt;div class="juxtapose" style="display: flex; flex-direction: row;"&gt;
&lt;div class="jux"&gt;
&lt;p&gt;&lt;span style="font-size: 1.8em;"&gt;&amp;ldquo;&lt;/span&gt;I'm not sure if the following generalization extends to all genetic backgrounds and childhood nutritional backgrounds. There are various ongoing arguments about estrogenlike chemicals in the environment, and those may not be present in every country...&lt;/p&gt;

&lt;p&gt;Still, for people roughly similar to the Bay Area / European mix, I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women.&lt;/p&gt;

&lt;p&gt;A lot of them don't know it or wouldn't care, because they're female-minds-in-male-bodies but also cis-by-default (lots of women wouldn't be particularly disturbed if they had a male body; the ones we know as 'trans' are just the ones with unusually strong female gender identities). Or they don't know it because they haven't heard in detail what it feels like to be gender dysphoric, and haven't realized 'oh hey that's me'. See, e.g., &lt;a href="http://sinesalvatorem.tumblr.com/post/141690601086/15-regarding-the-4chan-thing-4chans"&gt;http://sinesalvatorem.tumblr.com/…/15-regarding-the-4chan-t…&lt;/a&gt; and &lt;a href="http://slatestarcodex.com/2013/02/18/typical-mind-and-gender-identity/"&gt;http://slatestarcodex.com/…/typical-mind-and-gender-identi…/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But I'm kinda getting the impression that when you do normalize transgender generally and MtF particularly, like not "I support that in theory!" normalize but "Oh hey a few of my friends are transitioning and nothing bad happened to them", there's a *hell* of a lot of people who come out as trans.&lt;span style="font-size: 1.8em;"&gt;&amp;rdquo;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;(Source: &lt;a class="plainlink" href="http://archive.is/SRdyb"&gt;I'm not sure if the following generalization…&lt;/a&gt;)&lt;/p&gt;
&lt;/div&gt;

&lt;div class="jux"&gt;
&lt;p&gt;&lt;span style="font-size: 1.8em;"&gt;&amp;ldquo;&lt;/span&gt;So instead, by dint of mighty straining, I forced my model of reality to explain an anomaly that never actually happened. And I knew how embarrassing this was. I knew that the usefulness of a model is not what it can explain, but what it can’t. A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation.&lt;/p&gt;

&lt;p&gt;Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.&lt;/p&gt;

&lt;p&gt;We are all weak, from time to time; the sad part is that I could have been stronger. I had all the information I needed to arrive at the correct answer, I even noticed the problem, and then I ignored it. My feeling of confusion was a Clue, and I threw my Clue away.&lt;/p&gt;

&lt;p&gt;I should have paid more attention to that sensation of still feels a little forced. It’s one of the most important feelings a truthseeker can have, a part of your strength as a rationalist. It is a design flaw in human cognition that this sensation manifests as a quiet strain in the back of your mind, instead of a wailing alarm siren and a glowing neon sign reading “EITHER YOUR MODEL IS FALSE OR THIS STORY IS WRONG.”&lt;span style="font-size: 1.8em;"&gt;&amp;rdquo;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;(Source: &lt;a class="plainlink" href="https://www.greaterwrong.com/posts/5JDkW4MYXit2CquLs/your-strength-as-a-rationalist"&gt;Your Strength as a Rationalist&lt;/a&gt;)&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;The conclusion on the left is strange, and more than a little frightening. It's
premised on the idea that trans is an intrinsic trait people have, which is
itself partially justified by the small number of trans people in existence. If
you suddenly witness a large increase in trans people while believing that it's
intrinsic, you should notice you're confused and seek alternative explanations.
Instead, Eliezer &lt;a href="https://www.greaterwrong.com/posts/X2AD2LgtKgkRNPj2a/privileging-the-hypothesis"&gt;fits an improbable model that privileges his existing ideas&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To give some idea of just how improbable, &lt;a href="https://en.wikipedia.org/wiki/LGBT_demographics_of_the_United_States"&gt;he proposes that being trans is roughly four times as common as the entire LGBT demographic of California (4.8% at the time of writing)&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;

&lt;div class="juxtapose" style="display: flex; flex-direction: row;"&gt;
&lt;div class="jux"&gt;
&lt;p&gt;&lt;span style="font-size: 1.8em;"&gt;&amp;ldquo;&lt;/span&gt;Q: An omniscient source offers to provide a truthful answer to a single question. What would be the most beneficial question to ask?&lt;/p&gt;

&lt;p&gt;A: Will you either answer this question in the negative, or become my good-genie servant for eternity?&lt;span style="font-size: 1.8em;"&gt;&amp;rdquo;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;(Source: &lt;a class="plainlink" href="https://www.reddit.com/r/rational/comments/8t7v1i/an_omniscient_source_offers_to_provide_a_truthful/e15iddx/"&gt;An omniscient source offers…&lt;/a&gt;)&lt;/p&gt;
&lt;/div&gt;
&lt;div class="jux"&gt;
&lt;p&gt;&lt;span style="font-size: 1.8em;"&gt;&amp;ldquo;&lt;/span&gt;The jester reasoned thusly: “Suppose the first inscription is true. Then the second inscription must also be true. Now suppose the first inscription is false. Then again the second inscription must be true. So the second box must contain the key, if the first inscription is true, and also if the first inscription is false. Therefore, the second box must logically contain the key.”&lt;/p&gt;

&lt;p&gt;The jester opened the second box, and found a dagger.&lt;/p&gt;

&lt;p&gt;“How?!” cried the jester in horror, as he was dragged away. “It’s logically impossible!”&lt;/p&gt;

&lt;p&gt;“It is entirely possible,” replied the king. “I merely wrote those inscriptions on two boxes, and then I put the dagger in the second one.”&lt;span style="font-size: 1.8em;"&gt;&amp;rdquo;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;(Source: &lt;a class="plainlink" href="https://www.greaterwrong.com/posts/hQxYBfu2LPc9Ydo6w/the-parable-of-the-dagger"&gt;The Parable of the Dagger&lt;/a&gt;)&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;The question does not specify that:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The entity must answer your question, no matter how ludicrous&lt;/li&gt;
&lt;li&gt;The entity is in any way bound to act in accordance with its answer&lt;/li&gt;
&lt;li&gt;The answer must be 'logically consistent' in a rigid way&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If I were the hypothetical entity discussed, I would slay someone who asked me
this on the spot.&lt;/p&gt;
&lt;hr&gt;

&lt;div class="juxtapose" style="display: flex; flex-direction: row;"&gt;
&lt;div class="jux"&gt;
&lt;p&gt;&lt;span style="font-size: 1.8em;"&gt;&amp;ldquo;&lt;/span&gt;“Because the circumstances under which you’re invoking meta-honesty have something to do with how I answer,” says Harry (who has suddenly acquired a view on this subject that some might consider implausibly detailed). “In particular, I think I react differently depending on whether this is basically about you trying to construct a new mutually beneficial arrangement with the person you think I am, or if you’re in an adversarial situation with respect to some of my counterfactual selves (where the term ‘counterfactual’ is standardly taken to include the actual world as one that is counterfactually conditioned on being like itself). Also I think it might be a good idea generally that the first time you try to have an important meta-honest conversation with someone, you first spend some time having a meta-meta-honest conversation to make sure you’re on the same page about meta-honesty.”&lt;/p&gt;

&lt;p&gt;“I am not sure I understood all that,” said Dumbledore. “Do you mean that if you think we have become enemies, you might meta-lie to me about when you would lie?”&lt;span style="font-size: 1.8em;"&gt;&amp;rdquo;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;(Source: &lt;a class="plainlink" href="https://www.greaterwrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases"&gt;Meta-Honesty: Firming Up Honesty Around Its Edge-Cases&lt;/a&gt;)&lt;/p&gt;
&lt;/div&gt;

&lt;div class="jux"&gt;
&lt;p&gt;&lt;span style="font-size: 1.8em;"&gt;&amp;ldquo;&lt;/span&gt;I tend to be suspicious of morality as a motivation for rationality, not because I reject the moral ideal, but because it invites certain kinds of trouble. It is too easy to acquire, as learned moral duties, modes of thinking that are dreadful missteps in the dance. Consider Mr. Spock of Star Trek, a naive archetype of rationality. Spock’s emotional state is always set to “calm,” even when wildly inappropriate. He often gives many significant digits for probabilities that are grossly uncalibrated. (E.g., “Captain, if you steer the Enterprise directly into that black hole, our probability of surviving is only 2.234%.” Yet nine times out of ten the Enterprise is not destroyed. What kind of tragic fool gives four significant digits for a figure that is off by two orders of magnitude?) Yet this popular image is how many people conceive of the duty to be “rational”—small wonder that they do not embrace it wholeheartedly. To make rationality into a moral duty is to give it all the dreadful degrees of freedom of an arbitrary tribal custom. People arrive at the wrong answer, and then indignantly protest that they acted with propriety, rather than learning from their mistake.&lt;span style="font-size: 1.8em;"&gt;&amp;rdquo;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;(Source: &lt;a class="plainlink" href="https://www.readthesequences.com/Why-Truth-And"&gt;Why Truth? And…&lt;/a&gt;)&lt;/p&gt;&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;The excerpt on the left displays an almost OCD-scrupulosity level of obsession with
'not lying'. Some of the reason not to lie is game-theoretic, but most of it
is moral. Further, it is difficult for me to imagine a more 'Spock'-sounding
dialogue than the one presented in the post linked on the left.&lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;What's the slacker club interest in this stuff anyway? Marketing professor David Gal
recently asked &lt;a href="https://www.nytimes.com/2018/10/06/opinion/sunday/behavioral-economics.html"&gt;why behavioral economics is so darn popular&lt;/a&gt;
in an op-ed for the &lt;em&gt;New York Times&lt;/em&gt;. He ends up blaming an
addictive mix of pop psychology and borrowed prestige from economics (both of
the two ounces it has). Hacker News, itself mostly populated by the sort of
middle-class technician who enjoys this stuff, has a &lt;a href="https://news.ycombinator.com/item?id=18185509"&gt;long and interesting
thread&lt;/a&gt; analyzing the question.
One comment in particular stuck out to me.&lt;/p&gt;
&lt;p&gt;Hacker News user 'kolbe' writes:&lt;/p&gt;
&lt;p style="margin-left: 2.5%; margin-right: 2.5%;"&gt;&amp;ldquo;It's easy to understand and relate to. To even comprehend contemporary research in most scientific disciplines, you need a seriously strong understanding of math or chemistry--a level so high that it cannot be 'popular'. Behavioral Economics only require remedial algebra, statistics and literacy, and the topics they address are usually familiar to everyday people's lives.&amp;rdquo;&lt;/p&gt;

&lt;p&gt;And in this regard I think he's hit the nail on the head. Rationality is a
subject with prerequisites weak enough to attract every useless crank
who bounced off anything requiring more actual effort. Especially the
LessWrong flavor, which contains &lt;a href="https://www.greaterwrong.com/posts/BJfb2hqtdnaRijAfz/resurrection-of-the-dead-via-multiverse-wide-acausual"&gt;enough extropian science fiction content&lt;/a&gt; to fill out the greatest metal album yet to be recorded.&lt;/p&gt;</content></entry></feed>