Two Year Review

[A retrospective of how 2017 went for me as a result of blogging on mindlevelup. Broken up into six sections:

  • Learning more about rationality.
  • My own take at creating rationality material.
  • Taking more concrete actions this year.
  • Interesting areas to look into for next year.
  • Some fun stuff about 2016 vs 2017 blog stats.
  • Moving into next year.]

It’s now been two years since I started blogging (basically) weekly here on MLU. Like last year’s one-year retrospective, this essay will chart out how things changed for me in 2017.

 

1] Diving Deep Into Rationality:

Early on, in February, I volunteered at a CFAR workshop, and I got formally acquainted with the material for the first time. (I’d previously seen a lot of the stuff in a more casual form as a participant of ESPR 2016.) The experience led to a 5-part series on rationality and a review of the workshop itself.

One general principle I endorse is “act the way you wish other people would act, if they were in your position”. Written out like that, I know it just looks a lot like the Golden Rule. For rationality, though, I felt like there were a lot of things that people (like me) would have wanted, that didn’t yet exist. And one of those things was a demystification of what happened at CFAR workshops.

My CFAR series is far from comprehensive, but I’m happy to know there’s now some additional material for curious onlookers, like the sort of person I once was. This sort of attitude also ended up being a big contributor towards pushing me to write the instrumental rationality series.

So, for a time, I was at the forefront of people probing ideas about rationality. I think that one of the most helpful takeaways I got from the workshop was the idea that ontologies can be thought of as operating systems. Reading Mark Lippmann’s Folding later on picked up this same thread. More recently, I was chatting with Emily Crotteau from Leverage Research, who helpfully summarized her thoughts as follows:

(Note the below words are paraphrased and not quoted verbatim.)

“You can think of your mind as a very high-level way of interfacing with the rest of your body. It’s well-established that your thoughts, for example, do lead to physiological effects. The sort of rationality stuff we’re looking into, then, is basically along these lines, looking at ways that you can tinker with all of the stuff under the hood with just your mental ontology.”

I now basically have more respect for approaches that use your internal experience as a means of producing changes. Along the same lines, I’ve been thinking about feelings more as well. This was a distinct shift from the sort of thoughts I’d had in 2016, which were a lot more technique-focused. Some deeper conversations with CFAR staff also updated my model of their worldview.

Roughly, my original impression at the time was that, at the workshop, they’d just give you a bunch of techniques, executable real life scripts that you could run or something. The actual worldview espoused by CFAR, though, ended up being something a lot more like:

“We don’t expect you to implement all these techniques; rather, we hope you can grab the sort of thinking behind them / what they’re trying to achieve. This way, you can regenerate something functionally akin to them when you need it. Also, your feelings are important and you should listen to them.”

I think the appropriate idea here is something CFAR calls Polaris, named after the North Star. Roughly speaking, it’s the idea that if you’re focused on the end result, i.e. you know what needs to happen, you’ll be well-armed to come up with something that, even if it’s not exactly the same as what you originally learned, will work for you in the moment.

For example, if you’re trying to arrive on time to appointments more often, and you recall that Murphyjitsu was a technique for doing better, you’re better off trying to re-derive things from first principles than wracking your brain to recall what Step 2 is, because the re-derivation is fueled by a need to get the job done, rather than merely a need to figure out what the “correct” next step is.

(“Stop trying to hit me and hit me!”)

So, yeah. Fewer techniques. More about getting at something a little deeper.

 

2] Crafting My Own Craft:

After having the importance of introspection and internal dialogue impressed upon me, I…did very little with them. I kept most of the stuff on the back-burner, and I turned right back to the technique-based approach I was already familiar with.

One thing I had touched upon at the end of my CFAR essay series was that my progression of ideas in rationality seemed to follow a general direction of shifting from “more techniques, less feelings” to “less techniques, more feelings” in a way that, to a casual outsider, might seem a little New Age-y.

So, perhaps motivated by a secret desire to look less crazy, as well as my explicitly stated goal of trying to build up rationality again from more evidence-backed studies, I started on my instrumental rationality series, i.e. mindlevelup: The Book.

While the book as a whole failed to hit all of my goals, I by no means think that it was a bad endeavor to undertake. Working on the project was a large impetus for me to keep developing thoughts on rationality.

Three insights that I don’t claim originality for, but which I think have been the most useful from the series are:

1) Attractor Theory:

At its core, this is pretty simple—stuff you do will impact how you feel about doing other stuff, and thus being strategic when picking stuff you do is a good idea. If anything, you can even view it as an extension of the Outside View; you’re basically being mindful of how your current actions play into the larger scope of things.

But its applicability to a lot of everyday situations, coupled with the highly visual imagery, is something I’m satisfied to have put into a crystallized form.

2) In Defense of the Obvious:

The tongue-in-cheek metacontrarianism makes this piece of advice quick to pick up. At ESPR 2017, a lot of the students had started saying it, which I found pretty great.

Apart from pushing against your brain’s default dismissals, I like how it easily flows towards action. If you adopt the heuristic of “always just do the Obvious Thing”, you’re less liable to be paralyzed with indecision, because there often is something you can do, if you let yourself.

3) Recognizing vs Generating Dichotomy:

I think the specific instance of this dichotomy with regards to advice is a very good one. You want to think in terms of how the receiver will take your advice, and whether giving them something other than just your conclusion can allow them to bring themselves to that conclusion.

The similarity between recognizing and generating is a pernicious one, I think, as recognizing is often far easier to do, but typically much less useful than its generating counterpart. For example, compare trying to prove a math theorem with merely verifying that an existing proof is correct.

Despite having written about all these things, though, I am currently not using a lot of what I wrote down. I realize this is hypocritical, and it’s worth looking more into why I feel this way.

In the meantime, though, what rationality things do I actually use?

  • I often ask for examples and try to give examples.
  • I check in with my feelings and see what’s causing internal knots. I use something similar to Focusing to put my mental finger on what, specifically, is causing discomfort.
  • I put things in my to-do list when I think of them. Sometimes I do stuff on the list and cross them out.

And that’s about it for now.

If you pressed me right now, I’d easily agree that, yes, doing more things would be helpful. But, somehow, they don’t feel compelling. It seems that fading novelty, combined with inconsistent effort on my part, has led to me just…not doing stuff.

One potential line of thought I’ve heard floated by others is that different rationality techniques are good for different stages. So if the stuff you’re using no longer seems effective, perhaps you’ve “graduated” onto using something else. I’m uncomfortable with this because it suggests that you need ever-higher-powered tools to get roughly the same results.

Which, given what we know about humans and tolerance, might not be all that far-fetched.

I still don’t like it, though.

So in the meantime, I’m just in the paradoxical position of espousing stuff but not doing all of it. I have done everything I write about at some point. Just maybe not right now.

 

3] Doing Real Things:

I am perhaps now less fond of the meta-level, I think. Maybe.

The issue for me is that examining our meta-reasoning is probably a viable way to get some additional clarity on solving major problems like existential risk, but it can also be an engaging topic that leads nowhere, while giving the illusion of progress.

(Sort of like how sometimes the right thing to do is to do what feels “wrong”, but other times the right thing to do is to do what feels right. Ugh.)

For me, at least, it’s very helpful to have external evidence that I’m doing things. When I’m looking back thinking, “Huh, have I been productive?” querying a past history of my actions feels like a far better metric than querying how my internal landscape has changed. After all, the whole point of caring about rationality is the extent to which it lets you win at other things in life.

I’ve previously said that, to an external observer, doing the rationality-thing right should lead them to conclude that you are Very Successful (or something along those lines).

Compared to last year, I think I did a good job of doing more things that had real-world impacts this year.

The largest by far would be helping run ESPR, which also has its own extended writeup. As I wrote about in the postmortem, there were often times when opinions clashed. I think the most “wise” thing I got as a result was a real appreciation for finding compromises with people similar but not-exactly-the-same as yourself.

I also got some more experience testing out models and predictions of social dynamics (EX: What are relevant things to filter for during interviews? What are the major factors that impact the camp experience?).

Of course, using a camp that’s all about rationality as a place to practice rationality is still pretty meta.

Then there was the instrumental rationality sequence, of course, which I already covered above.

Next, I coded up some simple productivity tools. The largest coding project I did on the side was the Double Crux web app, which ended up having multiple shortcomings and basically no user base. On the plus side, I’ve now got a better idea of why the original setup didn’t lend itself well to resolving online disagreements.

(The short answer is that an online forum feels too rigid for most discussions, as disagreements only take their full shape as they go, picking up structure / sub-points as part of the process.)

Overall, my attitude towards doing things in real life is currently biased towards trying things, being fine with failing, and picking up incidental knowledge along the way. One thing about this whole process which seems sub-optimal is that the actual domains which I find myself trying are largely influenced by the people around me / the information that I take in.

Putting it like that makes it seem like I’m not putting that much thought into evaluating different potential domains before diving into one. For example, my roommate was very much into the Ethereum blockchain, and, as a result, now so am I. A possible response is that, on the meta-level, I’ve done a lot of filtering for the sort of people / info I surround myself with, so it’s not like the new domains I’m diving into are 100% random.

If we roll with this model, it suggests that my values are controlling things on a meta-level. For example, I sorta stumbled my way into learning some machine learning, but it’s arguable that the stumbling only followed through because of positive affect I’d previously built up (which could be attributable to my values).

I also don’t think that’s fully satisfactory, though.

Anyway, the last Real Thing I did this year was donate 0.1 ETH to the Effective Altruism Donor Lottery.

Hurrah for throwing virtual money at real world problems!

 

4] Tackling The Really Hard Problems:

Where do I think looking into rationality will take me in 2018? What do I think are some of the valuable areas to spend more time thinking about?

Here are 4 things I’d ideally like to write some essays about:

1) Deeper Beliefs and Feelings:

As I mentioned in the earlier section (and apparently also back in July), I’m at the point where I feel a lot more ready to drop a lot of my socially-based fear around diving into stuff which basically seems like mysticism. Yes, it’s hard to get good grounding on this stuff, especially as introspection-based techniques are going to be, by their nature, grounded in the individual.

However, given the success of more widely-accepted therapy techniques like CBT (which, if you accept the whole “ontologies as operating systems” hypothesis, seems to be doing something roughly similar), I think it’s worth trying to rederive some of this stuff all over again.

Using Focusing and Folding as a basis, I’d like to do a deeper dive into manipulating and accessing my internal state.

Some questions I currently have:

  1. How far can you go with just mental techniques?
    • What’s the most extreme thing you can do without any physical intervention?
  2. How teachable is introspection?
    • What is a good basis for pedagogy when you’re dealing with (perhaps) intrinsically subjective phenomena?
  3. What is a model that reconciles the automaticity of habitual behavior with intentional action?
    • Where do your internal beliefs play into all of this?

2) How to Get Other People to Actually Do Stuff:

I’ve covered the intention-action gap in a few of my past essays. In a few words, it refers to the disconnect between what people say they want to do and what they actually do. I’m interested in this because it currently seems very difficult to change people’s behavior over the internet. Which sounds a little silly, perhaps.

For example, how many people who read Habits 101 actually went out of their way to try those techniques? When you give people info, what determines whether or not they act on it?

One of the things I’d thought about after writing the instrumental rationality sequence was that the people who could benefit most from it were also the ones least likely to read it in the first place. Rather, it was the motivated, already-interested-in-productivity sort of person who’d bump into it.

That is, most writing on self-help isn’t reaching the right audience.

My intuitions (and perhaps yours) point to the medium being important. For example, going through an online video course on planning versus just being in a room with a coach walking you through making a plan will likely have distinct effects. (I predict the in-person workshop would be significantly better.)

A lot of this seems to have to do with precommitment. It’s a lot harder to back out when you’re in a room with someone willing to help you than it is to just close a video and make excuses. But it’s not just the in-person thing. In my limited experience working with friends, a common pattern looks like this:

  1. I offer a skill which might be useful to their problem.
  2. They agree that what I just said is super insightful.
  3. I walk them through using the skill once.
  4. They nod and thank me and say it’ll change things.
  5. The next time we meet, nothing has really changed for them.
  6. They vaguely remember I’d said something insightful, and we start over again.

I’m super curious as to what’s going on here. My intuitions point to the whole “will actually try stuff” trait as being something inherent, but my own experience (I used to not actually try stuff) should suggest otherwise. Moreover, why is it that I often stop using the rationality techniques I myself write about? How likely is the idea that people “graduate” from one skill / paradigm to another?

And to what extent does the channel by which you receive the information matter?

I’d like to keep pushing and see what comes out of thinking about the medium for pedagogy as well as what effectively conveying information that leads to action looks like. (See here for a preliminary musing on medium and incentives with regards to attention.)

3) Models and Breaking Down Queries:

I think prediction markets are pretty cool. There’s some great evidence showing they can outperform expert judgment and all that. What I’m curious about is the sort of question that often gets posed to them—questions like “Will Daniel Dennett win the primary election?” or “What will the price of cotton candy be on July 4, 2020?”

There are lots of these types of questions, and I guess this sorta generalizes to questions about the world at large: you’re trying to make inferences about a system which you only know in limited and approximate scope. The point I’m getting at, though, is that there seem to be quite a few steps involved when trying to answer these types of questions.

You need some hypotheses about how the world works, and then that informs what sort of information you look up. For example, if you were trying to figure out the cotton candy question above, several things which all seem pertinent from my current perspective are:

  1. Which is the right reference class—sweets versus cotton candy specifically?
  2. What has the price of cotton candy been in the past?
  3. What are factors which might affect price?

The point is that all of the above queries require some assumptions about how the world works; you need a model of which factors are related to which other ones, and then you need to estimate their values given what you know / can Google.
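
To make that concrete, here’s a minimal sketch in Python of what the decomposition might look like. Every number and factor below is made up purely for illustration; the point is that each line encodes an assumption about which factors matter, not that these are the right factors or values.

```python
# Toy Fermi-style decomposition of "What will the price of cotton candy
# be on July 4, 2020?" -- all values are invented for illustration.

def estimate_price(base_price, years_ahead, annual_inflation, holiday_markup):
    """Combine a reference-class base price with a few causal factors."""
    inflated = base_price * (1 + annual_inflation) ** years_ahead
    return inflated * (1 + holiday_markup)

# Step 1: pick a reference class (carnival sweets) and a base price to anchor on.
base_price = 3.50          # assumed current price of a bag of cotton candy, USD

# Step 2: estimate the factors the model says are relevant.
years_ahead = 2.5          # roughly now until July 2020
annual_inflation = 0.02    # guess at general price inflation
holiday_markup = 0.15      # guess at a July 4th demand premium

print(f"Point estimate: ${estimate_price(base_price, years_ahead, annual_inflation, holiday_markup):.2f}")

# A competing model might deny that the holiday premium matters at all:
print(f"No-holiday model: ${estimate_price(base_price, years_ahead, annual_inflation, 0.0):.2f}")
```

The arithmetic itself is trivial; the interesting part is that swapping out a factor or a value is exactly what a disagreement between models looks like in miniature.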

And what happens when you have competing models which are at odds with one another?

For example, if you want to convey an insight, is it better to write something short or long? A short blog post with a very high content-to-length ratio might conceivably be the most efficient medium. But also maybe a longer book that hammers the point in through multiple anecdotes and examples is what will keep the insight longer in the reader’s head.

I’m curious about exactly what’s going on when we try to make traction on questions about the world, the models we use, and how to decide between different models.

4) Group Rationality:

As my time at ESPR made clear, rationality really doesn’t scale well in groups. While I think we handled conflict resolution somewhat adequately, the fact that bad feelings lingered is evidence of the additional work I think we need to put in towards figuring out how to make groups work well.

Unlike everyday rationality, groups aren’t exactly under one person’s control, so decision-making is different. My impression is that the military and companies have got systems that work when it comes to this stuff. I’m wondering, though, if there are ways to do better. I have questions like:

  1. How do we adequately resolve situations when a person tasked with doing X believes that doing Y is better?
  2. When people with conflicting values try to cooperate, what is a good process for compromising and negotiating?
  3. What actual method of group governance—futarchy, hierarchy, etc.—proves to be the best in practice?

The above questions are all circling around how groups function, make decisions, and how they’re structured.

I’m also wondering about how information propagates across a community as the number of people grows:

I think that current communities hit diminishing marginal returns on additional community members about halfway through their growth stage. At some point, keeping up with the current forefront of the conversation becomes hard, as does on-boarding new members.

(In part, this is related to the above questions about group structure. For example, if you had a designated “New Person Greeter” role, that’d likely be helpful for on-boarding. Likewise, having some sort of accessible group canon like Rationality: From AI to Zombies can also provide good beacons to point people to.)

But, if we focus on how things like communication just tend to get harder as groups grow, there’s an interesting analogy I’d like to draw:

I’m curious if there are solutions here that parallel scaling solutions in blockchains:

Conceptually, you have roughly the same idea: A group of units all need to process the same amount of information, so the typical setup limits the rate at which you can process to that of the slowest member of your group.

For example, one thing you might consider is splitting your community into several smaller groups and meeting up periodically to catch everyone up. This is sorta similar to what sharding does in Ethereum.
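
Here’s a very loose back-of-the-envelope sketch in Python of the shape of that idea. Nothing here models a real blockchain or a real community; the group size, message counts, and sync schedule are all arbitrary, and it only exists to show why splitting plus periodic syncing reduces the per-member load.

```python
# Toy model of "split the community into subgroups, sync periodically":
# within a round, each member only reads messages from their own subgroup;
# at the end of each round, one compressed summary per subgroup goes to everyone.

def per_member_load(group_size, num_subgroups, rounds):
    subgroup_size = group_size // num_subgroups
    within_subgroup = subgroup_size * rounds   # messages read from your own subgroup
    sync_summaries = num_subgroups * rounds    # one summary per subgroup, per round
    return within_subgroup + sync_summaries

group_size, rounds = 120, 10
print("full broadcast:", per_member_load(group_size, 1, rounds))   # everyone reads everyone
print("4 subgroups:   ", per_member_load(group_size, 4, rounds))   # much lighter per member

# The catch: only the summaries become common knowledge across the whole
# group; the detailed within-subgroup conversation never does.
```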

I think the analogy might break down here, because my super rough understanding is that current blockchain scaling efforts are focused on finding ways to have only a subset of the transactions processed by each node (e.g. only the ones they’re involved in). However, in a group, it seems largely desirable for there to be lots of common knowledge, which means you *want* everyone to be getting exposed to the info that every member is broadcasting.

Effectively building common knowledge as more and more people join seems important. It’s sort of related to pedagogy, and I think knowing more about this could be reasonably useful towards figuring out how to get more value out of larger communities.

 

5] Hitting Blogging Goals and Stats:

When I closed out my 2016 retrospective, I made a list of several things I wanted to do in 2017. I had wanted to:

  1. Give more examples in my writing.
  2. Have more subheadings and pictures.
  3. Write something about habits.

In 2017, I hit all three of those targets!

I think that in at least 80% of the essays I’ve written this year, I was mindful to give at least one example for every concept I introduced.

More pictures, subheadings, and examples came when I started writing primer essays, longer essays consisting of several thousand words + graphics. The format was, I think, very successful, and I’m happy that doing them also brought MLU a visual identity. There’s now a distinctive “style” that I think people can place when they see a graphic.

And Habits 101 became its own primer essay, so that happened too.

Here are some stats about the blog, to see how 2016 differed from 2017:

In 2016, I wrote ~30,000 words.

In 2017, I wrote ~75,000 words. That’s 2.5 times as much!

In 2016, MLU had ~5,000 page views.

In 2017, MLU had ~18,000 page views. That’s 3.6 times as much!

(If you count the total hits from Medium, that’s another ~17,000!)

A lot of additional traffic in 2017 came from Reddit and LessWrong, which makes sense as I started cross-posting links there more often.

Most searched-for term that led to mindlevelup: “porn insight”.

Ahem.

Well then.

 

6] Moving Forward:

I spent ~200+ hours on writing for MLU, according to Toggl; I think it was time well spent. Blogging is kind of a weird project, though, because it’s always a little unclear what a good stopping point is, unlike a coding project. I plan to keep writing for 2018, but I’m well aware that many blogs begin to see more infrequent updates and slowly wither away.

How do I plan to tackle this? If I keep writing, I expect that the topics I write about will change over time. I’m not super confident that I can keep writing about rationality without rehashing the same ideas. (Of course, that might not be a bad thing.) But I do want to just keep writing.

In the meantime, as I covered in Tackling The Really Hard Problems, I think I want to write more essays on these four areas:

  1. Feelings and more introspection-based techniques, to see what physical effects / results they can achieve.
  2. Figuring out how to bridge the intention-action gap via different mediums, looking into what drives people to act.
  3. Building better models and understanding what’s going on under the hood when I try to formulate one.
  4. Understanding decision-making in groups and finding ways to deal with competing opinions and reach consensus.

Concretely, I’d like to continue with the occasional primer essay with colorful graphics, and I’d also like to keep giving lots of examples. I think the overall header + examples + italicized summary-at-the-beginning format works pretty well, so I’ll stick with it.

I predict that I’ll write about 50,000 words in 2018, and there will be 3 primer essays.

2018 is Here.

Here’s to fewer mistakes and screw-ups and to more triumphs and victories!
