[Any system you build will likely tend towards disarray. Taking eventual collapse as a given can be more helpful when thinking about which systems to implement and how to recover when those systems break.]
I have a friend who has a Getting Things Done system. It’s a very involved system—there’s a form which triggers a to-do list which leads to a spreadsheet which leads to some equations to calculate priority which leads to a scheduled event. When I learned of this, my first thought was “I really, really don’t think this is sustainable.”
(There’s also the point that if you need a to-do list to remind yourself to do certain things in your free time, this raises the question of what you’re actually doing during your free time instead, and why you feel inclined to do those things. But that’s a different topic.)
I think I’m generally more allergic to complex systems now than I was before. Part of the reason why seems to be that basically every single GTD system I’ve ever used has broken down in some way or another:
Periodic goal check-ins? Abandoned.
My typical boom and bust cycle looks a little something like this:
- I look for a principled way of managing my to-dos, and I discover productivity tool X.
- A period of high affect follows, where I think X is the best thing ever. “I’ll definitely keep using this,” I think. “This tool solves all the shortcomings that past tools had!”
- At some point, my usage of the tool dwindles, perhaps due to increased busyness. Returning to X now feels more effortful.
- The “ugh” feeling intensifies as it now feels like I need to expend energy just to start interacting with X. As a result, I stop doing so—it’s a self-reinforcing loop.
So under the typical Fading Novelty paradigm [LINK], it might seem that I failed to habituate X into my workflow before the initial excitement faded, and this led to its abandonment. I don’t think that’s all of it, though. I’m willing to go a step further and claim that all systems will fail, and that building with this in mind is an important consideration.
I’d like to introduce thinking about rationality in this context, then, as a means of system maintenance. This forms one component of my broader thesis that Everything Is Active, a framework that favors deliberate effort and intention.
With regards to systems, thinking in terms of system maintenance to me means that you assume everything is tending towards disarray. Rationality is useful to the extent that it prolongs dying systems and helps you restart them.
One consequence of this for Getting Things Done is preferring systems that are easy to pick up again:
You want the smallest possible system that can do what you need your system to do; anything that reduces the effort to re-acquire it on your part is good. This does seem pretty tautological on the surface, I admit (and also at odds with what I just said about deliberate effort being important; I’m still working on reconciling all of this, so bear with me).
As an example, I currently use Todoist to handle my to-do list items. While I might be unable to prevent myself from losing interest in the system, I can pick something that’s easy to return to. Based on my past experience with systems, I predict that I’ll keep going with it for a while, stop using it, feel guilty, and then dive back into a system again once that guilt builds up.
So part of this is about having something that I can return to without much trouble. The other part is about just accepting that these sorts of cycles of highs and lows are the norm rather than the exception.
The bigger picture here is the idea that progress isn’t linear. For example, in conditioning, we see that attempts at behavior modification are subject to ups and downs which, in the limit, converge to the desired behavior. If you try to reduce a child’s fear of rats, you’ll see some decrease in the first session, say from 10 FUs (“fear units”) to 8 FUs. The next session, though, will likely start at something like 9 FUs, a higher level of fear than the ending level of the session before it.
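The session-by-session pattern above can be sketched as a toy model. This is purely illustrative—the `within_drop` and `rebound` parameters are assumptions chosen to match the 10 → 8 → 9 example, not anything from the conditioning literature:

```python
# Toy model of the "fear units" example: each session reduces fear by
# some fraction, but between sessions fear partially rebounds toward
# where that session started. Despite the rebounds, the levels
# converge downward in the limit.
def run_sessions(fear, n_sessions, within_drop=0.2, rebound=0.5):
    levels = []
    for _ in range(n_sessions):
        start = fear
        end = start * (1 - within_drop)       # progress during the session
        fear = end + rebound * (start - end)  # partial regression afterwards
        levels.append((round(start, 2), round(end, 2)))
    return levels

# Each session starts above where the previous one ended, yet the
# overall trend is still downward:
print(run_sessions(10.0, 5))
# [(10.0, 8.0), (9.0, 7.2), (8.1, 6.48), (7.29, 5.83), (6.56, 5.25)]
```

The point the model makes concrete: judging progress by comparing any session’s start to the previous session’s end looks like failure, even though the trajectory converges.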
Similarly, if you hit your peak productivity level today, you shouldn’t expect to be starting right off from the high where you left off tomorrow.
In the same way, you can expect that most of your efforts might pay off in the long run, but you should expect to see lots of variation along the way. Connor Moreton goes over this in the last post of his rationality sequence (which, by the way, is awesome and you should totally go read).
While Recovering from Failure was about the in-the-moment mental shifts you’d want to make to keep yourself sane during a relapse, I’m trying here to gesture at the larger intuition that relapses are largely inevitable. Rather than trying to make systems which are heavily resistant to stressors, you want to think about how to deal with their inevitable breaking.
Also, for every step you take in pursuit of progress, regression to the mean is waiting to drag you back, much like in the conditioning example.
What this means in practice is that incremental or gradual attempts will appear unsuccessful for longer than you might expect. To a first approximation, you can think of this as a heuristic of “Keep trying stuff even if it doesn’t seem to be working immediately, because it might eventually start working.”
(There might also secretly be something here about “Anything is effective if you keep going at it,” but that seems like it’d derail into something even more speculative.)
For quite a while, this whole viewpoint was one that I felt hesitant to endorse because I think that your ontology does in fact shape your experiences. I think there are self-reinforcing loops where, for example, thinking about willpower as a finite resource does indeed make it “more” of a finite resource than someone who operates without said concept.
And that makes me feel a little iffy about espousing views like “everything you ever do in your mind is temporary and will break down at some point,” because even if this sort of cynical view is actually grounded (i.e. it usefully describes some subset of reality), and I do think it is, the sort of reification that can happen feels potentially dangerous.
Recently, I’ve become less fond of English as a language. Along Sapir-Whorf lines, I feel like the conflations baked into words like “feelings”, “beliefs”, “wants”, and “shoulds” all carry problematic contextual baggage that shapes how we view these concepts.