Rationality as System Maintenance

[Any system you build will likely tend towards disarray. Taking eventual collapse as a given can be helpful when thinking about which systems to implement and how to recover when said systems break.]

I have a friend who has a Getting Things Done system. It’s a very involved system—there’s a form which triggers a to-do list which leads to a spreadsheet which leads to some equations to calculate priority which leads to a scheduled event. When I first learned of this, my first thought was “I really, really don’t think this is sustainable.”

(There’s also the point that if you need a to-do list to remind yourself to do certain things in your free time, this raises the question of what it is you’re currently doing instead during your free time, and why you feel inclined to do those things. But that’s a different topic.)

I think I’m generally more allergic to complex systems now than I was before. Part of the reason why seems to be that basically every single GTD system I’ve ever used has broken down in some way or another:

Workflowy? Abandoned.

Schedules? Abandoned.

Periodic goal check-ins? Abandoned.

My typical boom and bust cycle looks a little something like this:

  1. I look for a principled way of managing my to-dos, and I discover productivity tool X.
  1. A period of high affect follows, where I think X is the best thing ever. “I’ll definitely keep using this,” I think. “This tool solves all the shortcomings that past tools didn’t!”
  1. At some point, my usage of the tool dwindles, perhaps due to increased busyness. Returning to X now feels more effortful.
  1. The “ugh” feeling intensifies as it now feels like I need to expend energy just to start interacting with X. As a result, I stop doing so—it’s a self-reinforcing loop.

So under the typical Fading Novelty paradigm [LINK], it might seem that I simply failed to habituate X into my workflow before the initial excitement faded, and this led to its abandonment. I don’t think that’s all of it, though. I’m willing to go a step further and claim that all systems will fail, and that designing with this in mind is an important consideration.

I’d like to introduce thinking about rationality in this context, then, as a means of system maintenance. This forms one component of my broader thesis that Everything Is Active, a framework that favors deliberate effort and intention.

With regards to systems, thinking in terms of system maintenance to me means that you assume everything is tending towards disarray. Rationality is useful to the extent that it prolongs dying systems and helps you restart them.

One consequence of this for Getting Things Done is preferring systems that are easy to pick up again:

You want the smallest possible system that can do what you need your system to do; anything that reduces the effort to pick it back up is good. This does seem pretty tautological on the surface, I admit (and also at odds with what I just said about effort being important; I’m still working on reconciling all of this, so bear with me).

As an example, I currently use Todoist to handle my to-do list items. While I might be unable to prevent myself from losing interest in the system, I can pick something that’s easy to return to. Based on my past experience with systems, I predict that I’ll keep going with it for a while, stop using it, feel guilty, and then dive back into a system again once that guilt builds up.

So part of this is about having something that I can return to without much trouble. The other part is about accepting that these sorts of cycles of highs and lows are the norm rather than the exception.

The bigger picture here is the idea that progress isn’t linear. For example, in conditioning, we see that attempts at behavior modification are subject to ups and downs which, in the limit, converge to the desired behavior. If you try to reduce a child’s fear of rats, you’ll see some decrease in the first session, say from 10 FUs (“fear units”) to 8 FUs. The next session, though, will likely start at something like 9 FUs, a higher level of fear than the previous session ended at.
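That within-session drop and between-session rebound can be sketched with a toy model. The drop and rebound fractions below are made-up illustrative assumptions; only the 10-to-8-then-restart-at-9 pattern comes from the example above:

```python
# Toy model of conditioning progress: each session reduces fear,
# but part of the session's gain is lost again before the next one.
# Parameter values are illustrative, not from any real experiment.

def run_sessions(start=10.0, within_drop=0.2, rebound=0.5, sessions=8):
    """Return the fear level at the start of each session.

    within_drop: fraction of fear removed during a session
    rebound: fraction of the session's gain regained before the next session
    """
    levels = []
    fear = start
    for _ in range(sessions):
        levels.append(round(fear, 2))
        end = fear * (1 - within_drop)       # e.g. 10 FUs drops to 8 within a session
        fear = end + rebound * (fear - end)  # e.g. the next session starts at 9, not 8
    return levels

print(run_sessions())
```

Each session starts a little above where the last one ended, but the session-start levels still shrink geometrically toward zero, which is the sense in which the ups and downs converge in the limit.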

Similarly, if you hit your peak productivity level today, you shouldn’t expect to be starting right off from the high where you left off tomorrow.

In the same way, you can expect that most of your efforts will pay off in the long run, but you should expect to see lots of variation along the way. Connor Moreton goes over this in the last post of his rationality sequence (which, by the way, is awesome and you should totally go read).

While Recovering from Failure was about the in-the-moment mental shifts you’d want to make to keep yourself sane during a relapse, I’m trying here to gesture at the larger intuition that relapses are largely inevitable. Rather than trying to make systems which are heavily resistant to stressors, you want to think about how to deal with their inevitable breaking.

Also, for every step you take in pursuit of progress, regression to the mean is waiting to drag you back, much like in the conditioning example.

What this means in practice is that incremental or gradual attempts will look unsuccessful for longer than you might expect. To a first approximation, you can think of this as a heuristic of “Keep trying stuff even if it doesn’t seem to be working immediately, because it might eventually start working.”

(There might also secretly be something here about “Anything is effective if you keep going at it,” but that seems like it’d derail into something even more speculative.)

For quite a while, this whole viewpoint was one that I felt hesitant to endorse because I think that your ontology does in fact shape your experiences. I think there are self-reinforcing loops where, for example, thinking about willpower as a finite resource does indeed make it “more” of a finite resource than someone who operates without said concept.

And that makes me feel a little iffy about espousing views like “everything you ever do in your mind is temporary and will break down at some point,” because even if this sort of cynical view is actually grounded (i.e. it usefully describes some subset of reality), and I do think it is, the sort of reification that can happen feels potentially dangerous.

Recently, I’ve become less fond of English as a language. Along Sapir-Whorf lines, I feel like the conflations baked into words like “feelings”, “beliefs”, “wants”, and “shoulds” all carry problematic contextual baggage that shapes how we view these concepts.

6 comments

  1. Interesting.
    Out of curiosity, do you need to do something to maintain your TAPs?
    “You want the smallest possible system that can do what you need your system to do”. I don’t suppose this would have any political implications.


    • I currently have a list of TAPs that I want to keep in mind. Sometimes I will review the list. That’s it for now. If I notice I “want” to do something more, I’ll take that as a signal to put more effort into figuring out my TAPs.

      I suppose there are political implications, but I also think that organizational costs scale faster than linearly with each additional person, due to things like communication. So maybe not 100% applicable.


  2. I agree with you that one should aim for the minimum viable system that gets the job done. One thing that feels off to me, though, is that I do think one can have a system that doesn’t get harder and harder to do over time. I think it might have something to do with how much the benefits of a system are felt on a gut level. I think I’m now fairly committed to making all of my important work deep work, because it feels drastically better to do work that way. I feel bloated and fuzzy when “just doing work” while trying to do other things.

    But I definitely have many systems that I do get benefits from, but not enough benefits that I really feel it on a gut level. These are the things I do “because they’re good for me”. And yeah, rationality seems to be an excellent tool for making those things happen. Also, often I’ve noticed that resistance to using a system has been related to noticing a way that the system is failing to do what it should be doing, and thus redesign is necessary.


    • Hey Hazard!

      The point about deep work rings fairly true; I think it’s in line with the general idea that “focused effort” is what ends up driving a lot of potent results.

      Right, I agree that when you notice you need to “force” yourself to do something, it’s likely a sign that something somewhere isn’t working. And in those cases, the healthy option is likely to *not* keep pushing, but instead to stop for a second and look.

