Back to Basics: Post-CFAR 4
[Having covered the ideas behind ontologies, this post explains how I’m trying to re-orient with newcomers to rationality in mind. I talk about maintaining transparency and practicality. This is part four of a five-part series about my thoughts after volunteering at a CFAR workshop. As usual, the views expressed here are my own and don’t reflect CFAR’s.]
As I get deeper and deeper into rationality, I’m encountering ideas that, without some prerequisites, look pretty weird. I’m at the point where I might start engaging more fully with the meta-rationality literature, where ideas start to become far more…esoteric.
But to a newcomer, even “normal” rationality can seem a little insane.
I know this because when I first started learning this stuff, I had people from CFAR telling me things like, “Yeah, so the point of this whole rationality thing isn’t actually to teach people the techniques themselves, per se, but to get them to create their own stuff,” and I thought they were crazy.
Of course, now I’m starting to get it, but Past Me had a hard time wrapping their head around the idea.
People also said things to me like, “Motivation isn’t really a thing you get. It’s more like… moderating a disagreement between two parts of yourself, where you’re merely letting both sides talk. And then at the end, you either want to do the thing or you don’t.”
I found statements like these really, really confusing. As someone who’d only just started figuring out agency, I understandably found them difficult to swallow.
I’m thinking about people like the person I was last year, who might stumble upon the ideas I have now and not understand them. I’m worried about the stuff I write here on MLU becoming even less accessible to friends of mine who might just be sorta interested in this type of stuff.
So that’s one part of what I’ve been thinking about. The other part is trying to draw some high-level generalizations from the idea of an ontology.
I’ve posited that ontologies change our sense of what is possible, and they lend themselves to different rationality techniques. Rationality techniques, to me, are things we can do to help improve our thinking / decisions in reality. (Other people may think of rationality differently.)
We’d expect, then, that helpful ontologies cash out at some point in the real world, where they hopefully have some connection to people’s actions. For example, Resolve may be good for enabling people to power through things, while “humans are mainly habits” can help us build better daily routines.
From the outside view, then, someone using rationality techniques should appear to be winning at life. Whatever that means.
For real, though, here are some things I might consider “winning” (subjectively speaking):
- Having a daily exercise regimen, or any sort of regular “hard” commitment
- Stuff people generally find impressive (writing a book, inventing a product, etc.)
- Generally positive mood, strong emotional stability
- Agenty, able to set goals and achieve them
- Being super productive (relative to other people)
- Consistently, erm, winning the lottery

Okay, maybe my examples suck, but the discussion of what the heck constitutes winning at life is one for another day.
The point is that if whatever change your ontology granted you is purely internal (which might in itself be a confused concept because stuff in your brain influences your actions, but we’ll leave that for another time), or isn’t immediately visible to the outside world, then I claim that it’s likely your ontology isn’t very useful.
Of course, there are ontologies like the introspection model which have a very internal foundation. I’m willing to suspend my judgment as I explore these alternate models of mind, where influencing direct action “isn’t the point”, like meditation.
Still, even for something like meditation, there’s a sense in which it’s helping you via whatever physical / mental benefits you’re deriving from it. So one level up, you’re still doing it for the benefits, even if caring about the benefits isn’t the best way to acquire them.
Therefore, merging the above two points, I’d like to keep two general principles in mind when moving forward:
1. Transparency: My ultimate goal is something like finding a way to effectively bridge inferential distances for new people trying to learn everything in the LW-sphere. (Maybe something like Arbital’s dependency graphs? Though that would require a lot of cross-referencing work to build.)
In the meantime, I’d like to at least be clear about where my reasoning comes from. After all, there’s no rule that says the words I write have to correspond exactly with what’s in my head. The illusion of transparency is often just that: transparent, and thus invisible to spot.
2. Practicality: This follows directly from my point about winning at life. At some level, whatever shiny new rationality concept I’m learning should change my actions. My belief here is that, as much fun as it is to philosophize, solving things in reality is the ultimate priority.
To that end, I’ll focus on the sorts of concrete takeaways or benefits you can expect to see from adopting certain concepts/ontologies. I suspect this might be limiting in certain contexts, but as much as possible, I want to stay on the object level. (I see it becoming a problem when talking about abstract concepts that may be important for understanding the world, but are either too large or too removed from most object-level actions.)
Moving forward, I want to consistently highlight heuristics and mindsets people can adopt, framed as “upgrades”.