[I’ve recently updated a few things in how I look at the world having to do with both rationality and effective altruism. In this post, I try to explain the areas of rationality that I find compelling to work on: improving rationality instruction and making existing insights easier to understand. ]
Rationality: Good Enough?
To start off, I think research in rationality is in an interesting place right now and, as a result of actually spending some time thinking, I think I’m changing my mind on several things.
First off, it seems likely to me that two things are true:
- Current research into rationality by CFAR, Leverage Research, and other people is now heading towards areas that, from the outside, appear to be somewhat esoteric / confusing.
- Applying rationality to productivity and motivation is now largely a solved problem.
What I mean by #1 is that it seems to me that many of the current areas of active development by different groups seem very inaccessible to newcomers. A lot of ideas, like Folding (as one publicly available example), have a lot of background assumptions, and a cursory look at such material only raises more questions.
The terminology and descriptions of such techniques can seem to draw more and more from mysticism and magic. I’ll be the first to admit that any cursory description of Internal Double Crux, for example, sounds like something straight out of a spellbook: “You dialogue between the two conflicting parts of you and reserve judgment to let the true answer appear. Then, you will see that there is no conflict.”
An outside observer might draw the reasonable conclusion that we’ve gone off the deep end.
Now this in itself isn’t exactly the problem:
Seeing this sort of convergence between rationalists and thinkers of the past might also be a sign of something worthwhile to be found, especially as we’ve come to similar conclusions from a very different starting point. People in the past did get some things right. Mindfulness meditation, for example, is a commonly cited tradition linked to ancient spirituality that’s been beneficial even outside of its typical monastic context.
Also, the chains of reasoning that lead us to our current conclusions are often readily available. One thing you can (perhaps) count on from a rationalist is a willingness to give reasons for why they believe things. Yet, as is often the case with extended arguments, each step can seem very clear while the conclusion still appears absurd.
But I’m not here to argue about whether or not current investigations in rationality have merit.
Rather, what seems important to me is that the present areas of research consist of ideas arrived at through many steps of reasoning, meaning that there’s a lot of assumed background knowledge. This means that even if we chance upon something awesome, the pool of people who could get benefit out of it seems pretty small.
I will also say that I would be surprised if these techniques turned out to be more effective than, say, TAPs or exercising.
This takes us to #2, my claim that we’ve actually solved productivity. In my opinion, with our current knowledge on areas like habits, planning, and motivation, procrastination is basically a non-problem.
Using the strategy outlined in There Is No Akrasia, I think that most surface-level productivity issues can be worked out. True, this doesn’t exactly solve everything; I still don’t think it’d be realistic for most people to fully utilize 100% of their available work time.
However, the CFAR curriculum alone seems like enough to get people to at least some sort of baseline where they’re able to be largely quite efficient. A good GTD system, paired with some knowledge of TAPs and of resolving internal conflict, seems like enough for most problems.
Again, I know the devil’s in the details here, and I’m merely focusing on the bigger shape right now.
It also seems to me that you hit diminishing returns fairly quickly on most of these techniques. For example, I suspect that having any sort of GTD system at all provides far more benefit than trying to optimize a system that’s already in place. Ditto for exobrains, precommitment strategies, internal models, etc.
What this means for me is that I feel less excited about pushing farther on the boundaries of productivity in the way I have been. I also think that frameworks for rationality outside of a CFAR workshop are lacking, which bottlenecks the number of people who can partake in the current advances.
Effective Altruism Revisited
Recently a friend mentioned that the EA community at large is probably more constrained by talent than by funding. I thought I knew this already, but it hadn’t quite sunk in. Probably the way he said it helped: “There’s a lot of rich people in EA willing to fund projects” were his approximate words, which made it easier for me to visualize.
This drove home the point that I might want to start doing more things in the world.
His words came at a fortuitous time. Another thing that’s been on my mind lately is fear over the future, or some sort of internal feeling that at least sort of maps onto those words. I’ve been thinking about the course of things, the transient nature of things, and I’ve found myself to be very, very frightened. As someone else told me recently, “We’re living in a horror story.”
Things are bad. Things are bad. Things are bad.
More than that, though, the need for me to understand things now seems a lot sharper. My current bumblings in the world, where I’m trying to explore more, still seem good, but it feels like there’s now an increased need to start building my internal map of which levers move which things. It’s about knowing how society actually functions and which causal relationships are the ones relevant for what I care about.
From this, the questions about how to actually create positive change in the world are now ABSOLUTELY IMPORTANT.
This was a pretty weird thing for me to experience. See, I’d previously “thought” that I thought that effective altruism was a good idea. It now seems like I’d previously been wearing such beliefs as attire, or something similar. My vague impression of Past Owen is something like “he had a lot of internal fire as well as a sense that things were wrong, but he didn’t quite grok cause prioritization”. That, and/or perhaps something shifted along the way where I went close to 100% into rationality and forgot that Reality itself was important.
Writing about this now, it seems like I’ve followed this trail of reasoning at least a few times before, but this is the first time that I’ve explicated it like this.
Internally, it feels like I very quickly went through a chain of reasoning similar to how effective altruism initially came about:
“Think rationality is cool. → Realize the real world has problems. → Time to actually care about the way things work. → Start doing things in the real world.”
Prior to this shift, I got that things like impact, neglectedness, and tractability were “important”, but now they seem like actually necessary things to be considering because they make a difference.
Feedback Loops in Reality
A core idea I keep returning to over and over again is that rationality is about results. For me, this means having the insights from rationality actually cash out into informing our real-world actions and decisions. If you’re using rationality, an outside observer should be suitably impressed.
(This is often expressed as “rationality is about winning”. Also: “sufficiently advanced rationality is indistinguishable from magic”.)
One of the best and most reliable ways to do this is to use rationality to seek competency in another domain:
It seems like one reason why this whole “make sure your ideas hold up in the real world” thing is important is because having an actual domain to test stuff means you get fast feedback loops.
For example, performing magic gives you very fast feedback loops on manipulating people’s perceptions. The difference between successfully pulling off a trick and failing to do so is very large—the audience tells you outright if you messed up. Then it’s back to the drawing board to find something that can fool them.
The point here is that having some sort of process that allows you to both iterate quickly and produce largely generalizable results is very valuable. Many domains in reality serve as a structure that promotes these types of feedback loops.
We can think of rationality as an input we feed into these domains, and the output (i.e. how good we did in the domain) as providing information about how useful our rationality is.
One thing that’s perhaps of interest is noticing that, often, we can use Rationality itself as the domain by which we feed in rationality as the input:
For example, if you would like to consistently add to your GTD system, you can use a TAP to add things you remember to the list. Likewise, if you’re trying to integrate a habit via a TAP but you feel averse to it, then this is a prime opportunity to use Internal Double Crux.
Some techniques can even recurse on themselves. For example, if you think that you’re not going to use Murphyjitsu often, you can then use Murphyjitsu on such a feeling to see what failures come to mind.
And while this seems quite interesting, I’m worried that this sort of behavior—feeding rationality into Rationality—is pretty circle-jerky. It’s too easy to fool ourselves and conclude something was helpful when we’re really just looking selectively at subjective data.
For me, at least, this means that, coupled with my reasons for working more in reality, I’m now more interested in trying to use real life as a feedback loop.
tristanm has a great comment on LessWrong about this:
“[Also,] it seems that the very best content creators spend some time writing and making information freely available, detailing their goals and so on, and then eventually go off to pursue those goals more concretely, and the content creation on the site goes down.”
While it’s not exactly where I’m at, I think it points to this core idea I want to live up to about how all this philosophizing on goal pursuit only works if I actually start to pursue goals.
To be clear, I do still think that rationality is very important. Especially rationality education. Having a way to reliably scale someone up from “baseline” to “competent” would be pretty awesome.
(For the record, I don’t think “rationality = competency” is exactly true, especially as “competency” is a word that sneakily encodes much of the meaning we actually want to get at here.
Right now, I think that the core skill of rationality is something like “self-righting self-improvement”. By this, I mean that even if you taught someone only a few of the rationality techniques, as long as they had this core skill, you could be confident that they could go on to recreate the rest of rationality by noticing problems with themselves and searching for solutions.
There’s a general sense here of being able to trust the person’s problem solving ability such that even if they don’t have the necessary pieces available to them right now, you know they’ll be “all right in the end”.)
The area I want to focus on is bridging the inferential gaps between a typical smart person and the host of ideas in instrumental rationality, as well as the ideas currently being explored. The existing mass of scattered blog posts and research papers on the area is not, in my opinion, a good way to get a bright (but not rationalist) person from “interested” to “relatively self-righting”.
I want to do better.
To that end, two things seem like they’ll shape how I approach rationality, at least in the short-term:
- Resolving research debt in rationality. Crystallizing existing information into clearer insights.
- Focusing more on pedagogy and the process behind learning. Figuring out how to teach rationality well to others.
The essay on research debt is a phenomenal one, and I’d highly recommend you give it a read. Basically, though, it’s about how the current incentive structure in academia (though it generalizes to other domains) doesn’t encourage efforts to make ideas easy to understand.
Given the nature of many people in the rationality community, with our predisposition to multi-clause sentences and fancy vocabulary, I think striving for additional clarity and understanding is a worthy goal. For me, this means working on more projects where I take obvious, fairly basic concepts in rationality, and I find ways to make them clearer.
My current efforts on this are on making a book on instrumental rationality, the way I wish someone had done for me when I was starting out.
For #2, pedagogy, my current impression is that CFAR has a very good idea of what this looks like. Their workshop has been iterated many, many times, and they have a good sense of which sorts of classes will and won’t be useful. And I’m already involved there, so that’s good.
(I think I used to underestimate the level of effort involved in rationality education. Changing people’s minds is hard. Also, aside from bridging the inferential distance, things are often in motion. Systems tend towards disarray, and part of rationality is about upkeep and maintenance perhaps as much as it is about pushing the envelope into new territories.)
Until they scale up, though, I think there’s a lot of room for additional, more comprehensive frameworks on the matter.
As I mention in the linked LessWrong post on instrumental rationality, I don’t think I’m the best or unique person to be doing this in written form. But I’d still like to try to avoid the typical pitfalls.
It’s frustrating for me to see other people who seem to have gotten farther than I have in the area of self-improvement, like James Clear, Shane Parrish (the guy behind Farnam Street), and Scott H Young, all end up following current incentives to their unfortunate conclusion:
Instead of looking back on their progress and insight and crystallizing it, they market their services in one way or another, be it through speaking engagements or an online course. The rest of their content is filed away in blog archives that are messy and not well-arranged for others.
(By crystallization, I mean the process of optimizing for understanding. Kalid Azad of Better Explained has a very good algorithm he calls the ADEPT method. Also a recommended read.)
I don’t really blame them, though. People are willing to pay money for this sort of thing. It’s a win-win situation: the people teaching gain money, and I’m sure the people learning walk away at least thinking that they got something. I mean, that’s why CFAR charges a sizeable fee for their own workshops.
Because people are willing to pay for this kind of upgrade.
Yet, I look at this entire field of exciting ideas, and I don’t like how everything is either confused or locked away.
It feels very unfortunate to not have a public framework that collects all these ideas. There’s a thing that I want, and it involves making a repository of rationality that’s both structured and clear for all the people behind me who are trying to figure out similar questions.
And I sure as hell don’t want to ask for someone’s email before giving them the information.
Updating My Approach
What does this all mean for my actions in the short-term?
The obvious things now seem to be to:
- More deeply engage with ideas in effective altruism.
- Do more things in reality that force me to keep only the parts of rationality that don’t shatter upon impact.
So I’ll try to look at what sorts of actions I might take differently as a result of caring more about the above two points. Going into the world, diving in, and all that.
But what about mindlevelup?
I want to spend more time on organizing posts, mapping out concepts, and finishing the first version of the Instrumental Rationality book. I’ll also look to update the site to make it friendlier to newcomers. This’ll definitely eat into time I’d previously spent on writing more blog posts.
However, I think I want to maintain my weekly update schedule on MLU. Regardless of the direct impact of what I write here, I think it’s still a very powerful commitment device to keep myself thinking and writing, which seems hard to get from elsewhere.
At the heart of it all, I think I find myself just generally enjoying thinking about rationality at this point.
Which is problematic because then it’s hard to disentangle my own affect from actual benefits conferred by these skills. But that’s where real life comes in as the final arbiter.
Another recent conversation, though, has convinced me that humans don’t explore nearly as much as we could. I think that’s worth keeping in mind.
So I’ll keep pushing new ideas here on mindlevelup. I think one thing I did well was chart the arc of development of ideas across different stages of my thought process on this blog. My hope is that with some organization and example posts from each “stage” of thought, I can at least gesture at the overall shape of how things changed for me.
Now I think I’m at the point where I want to start exploring crazy magical ideas!
tl;dr: For rationality blogging, I’ll keep writing here on MLU, but expect more varied topics that go beyond the normal productivity stuff. The mindlevelup book is now in its first publicly viewable draft form.