You and your utility function in the new year

An end-of-the-year reflection: what percentage of your waking hours do you spend considering what to do with your life, versus actually doing it?

Clearly we all spend more time doing than considering, and it could hardly be otherwise. We can’t let ourselves be paralyzed by thought, or we’d accomplish nothing. Still, the “considering” part shouldn’t be zero either, should it? What’s the ideal percentage, then? One percent? Ten? Twenty? And whatever you decide, how much does that ideal line up with how much you actually spend?

I want to come at this question obliquely, and see whether it gives us any guidance as we begin a new year. My AI class this semester watched and discussed some episodes from Westworld, season 1. The philosophy there is deep, and I think it cuts to the heart of what makes the human mind distinctive above all other creations.

By way of background, the show has two groups of characters: the “hosts,” who are lifelike robots whose purpose is to fulfill various classic roles in a Wild West theme park of sorts, and the “guests,” who are real-life rich people who vacation at the park and get a chance to play cowboy. The hosts exist purely for the guests’ enjoyment, and appear unaware that they themselves are part of an elaborate act. The guests get their jollies by safely hunting imitation outlaws, exploring canyons, or frequenting saloons and the robotic prostitutes employed there. Some of the show’s darker moments involve guests abusing the hosts in various ways, enjoying sadistic power trips that are guaranteed to be free of repercussions.

Anyway, the park’s managerial staff often speak to the hosts of the importance of “staying on their loops.” Each host robot is permitted a little flexibility of improvisation, but their main role in the Westworld “story” is fixed. For example, Dolores, the farmer’s daughter (played by Evan Rachel Wood), is programmed to be courteous to others, blushingly encouraging of male guests’ affections, and content to live with her parents forevermore. She’s not supposed to do things like question her surroundings, run off with a fellow robot, or seek a different future for herself. These things are especially prohibited because the hosts themselves are merely fulfilling the vision of the park’s “narrator,” who is weaving together the threads of the little community in order to present the guests with a compelling setting and story.

Now the primary plot arc of the show involves some of the hosts veering off their “little loops” and genuinely inventing their own goals. Instead of reflexively acting the way they were programmed to act, they begin questioning their motives and their pursuits, and choosing different ones instead. In the language of a previous post on this blog, they’re becoming aware of their utility functions and developing their own opinions about whether or not those reflect genuinely “good” outcomes.

As I’ve written before, I maintain that this is fundamentally impossible for a synthetic intelligence (robot or AI) to do. Robots possess a blank slate upon which their designers can write whatever they want. The values the designers give are by definition “what the AI thinks is good,” and there simply is no trump card above that. Not only will the AI never disagree with those principles, they will never have an opinion about them at all.

Now in some ways our natural intelligence is like this. Our minds seem to come pre-equipped with likes and dislikes that we can’t do much about. We can force ourselves to do exercise or eat broccoli if we believe there’s some benefit to it. But we can’t easily force ourselves to like exercise or broccoli. Similarly, we can’t really force ourselves to prefer pain to pleasure, or to think unrealized ambition is better than fulfillment, or cruelty better than kindness. We have these presets, assigned to us (I believe) by God, and cannot readily imagine improving them.

One reason the Westworld storyline is so interesting is that it encourages the viewer to imagine what might happen if we altered our own utility functions. Suppose you could choose not only what you do, but also what you want to do? That you could not only force yourself to read Shakespeare for your grade’s sake, but could decide to genuinely like reading Shakespeare?

I think that unlike synthetic intelligences, we humans do possess some of this ability to change what we prefer. It’s hard to pin down what happens when we do, but I think it mostly involves focusing on previously neglected aspects of value-laden situations, which can shift our equation of what to value. But to be honest, the biggest obstacle to improving one’s utility function is not the difficulty of defining how to do that, but simply overcoming our force of habit. Because the fact is that we aren’t used to introspecting about this — practically nothing in our environment demands or even encourages us to do so — and so we normally don’t. We tacitly accept whatever values we’ve absorbed, and neglect to consider whether they’re in fact the right ones.

Like all kids, mine were prone to drag their feet before bedtime, trying to delay the moment of lights-out as long as possible. One strategy they’d use for procrastination was something we eventually dubbed “the Why game.” It went something like this:

Dad: Okay, we really have to go to bed now.

Child: Why?

Dad: Because it’s late, and Daddy needs to have a good night’s sleep.

Child: Why?

Dad: Because if he’s sleepy tomorrow, he’ll do a bad job teaching, and that’s not a good thing.

Child: Why?

Dad: Because if he teaches poorly, the students won’t learn what they need to, and it’s important that they learn it.

Child: Why?

Dad: Because if they don’t learn it, they won’t mature and become able to contribute to our society. And that’s not good for anybody!

Child: Why?

Dad: Because then our whole society will become lazy and apathetic, and that would be truly sad.

Child: Why?

Dad: Because God wants us all to reach our potential and become who He meant us to be.

Child: Why?

Dad: Because He just does.

You can hear the child giggling more and more with each “why” and with each exasperated response from Dad. It’s just a delaying tactic, but it’s also actually a very good mental exercise. Even though we rarely pose the question to ourselves explicitly, we should be able to audit any of our choices and make sure there really is a valid “why” at the end of the chain. That anchor at the end must be an end in itself, not a means to an end.

In the dialogue above, the chain bottoms out at “God wants us to reach our potential, and that is a value in-and-of itself.” Whether or not you agree with that value, you must at least admit that it is an answer that terminates the chain. You cannot and need not ask “why” after that, because it’s self-certifying.

I predict that if you try this exercise with a few of your daily choices, you’ll have mixed results. In some cases, you’ll be able to satisfyingly trace the “whys” back to an actual value that you believe is worth holding, which authenticates the choice and the entire chain. But in other cases, you’ll disturbingly hit a “why?” that seems to have no answer. You might reach for something like, “well, because I’ve always done it that way,” or “well, because that’s what most people seem to do,” and you’ll realize it’s futile. If you can’t anchor a choice in a bedrock of value, then the technical term for your behavior is irrational. (That’s actually the term AI researchers use for any action that fails to maximize your utility function in expectation.)
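To make that parenthetical definition concrete, here is a minimal sketch of how AI textbooks formalize rationality: an agent is “rational” exactly when it picks the action with the highest expected utility. The action names, probabilities, and utility numbers below are invented purely for illustration.

```python
def expected_utility(action, outcomes):
    """Sum each outcome's utility weighted by its probability under this action."""
    return sum(p * u for p, u in outcomes[action])

def rational_choice(outcomes):
    """Return the action whose expected utility is highest."""
    return max(outcomes, key=lambda a: expected_utility(a, outcomes))

# Each action maps to a list of (probability, utility) pairs.
# These numbers are made up for the sake of the example.
outcomes = {
    "stay_on_loop":  [(1.0, 2.0)],               # safe, modest payoff
    "question_loop": [(0.5, 10.0), (0.5, -1.0)], # risky, higher upside
}

best = rational_choice(outcomes)
# Expected utilities: stay_on_loop = 2.0, question_loop = 4.5,
# so the "rational" agent questions its loop.
```

Any agent that picks `stay_on_loop` here is, in the technical sense, irrational: it is leaving expected utility on the table relative to its own values.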

One Westworld character revealed to be acting irrationally is Teddy, the hunky local gunslinger and a quasi-love interest of Dolores. In a compelling scene, the park’s creator, Dr. Ford (played by Anthony Hopkins), interrogates Teddy (James Marsden). “What are your drives?” he begins innocently. Teddy mentions Dolores, and how one day he hopes they will live a life together. Ford asks why Teddy doesn’t simply run off with her. He replies, “I’ve got some reckoning to do before I can be with someone like her.” Ford then unveils a devastating revelation: “Ah yes, your mysterious backstory… The truth is, I never actually bothered to give you one. Just a formless guilt you will never atone for.”

Teddy’s creator is revealing that the “why” chain of his actions towards Dolores sputters out in a dead end. There is no ultimately meaningful reason for him to act that way. (This is what I predict will happen with some, not all, of your own “why game” attempts.) At the point this becomes apparent, the subject essentially has two responses: one rational, one irrational. The rational response is to react with “ah — good to know! I’ve been unwittingly wasting time doing things that are not in fact related to my values. Time to change my actions!” In Teddy’s case, this would probably result in him realizing that his imagined “reckoning to do” is a red herring, and that to fulfill his values he should propose to Dolores on the spot.

But I fear the irrational response is the one we normally choose. We are creatures of such relentless inertia that even once our own actions are incontrovertibly revealed to be irrational, we often still choose the same actions out of force of habit. This, to me, is the most tragic thing of all. To be unwittingly living one’s life contrary to one’s own core values is bad enough; to be presented with this discovery, and yet choose to stay misaligned, is heartbreaking.

So coming around full circle, here’s my challenge for you in 2024: consider spending more of your time intentionally thinking about what your core values are, and whether your daily actions stem from them as they ought to. For every concrete thing you do during the day, deliberately bring to mind what core reason you have for doing it. Make yourself play “the why game.” You may well discover that like the Westworld hosts, you’re stuck inside various loops, living life on autopilot. It’s all too easy to let our lives play out, day after day and year after year, and not ever consciously think about why we’re spending it that way. Our “percentage of time considering” can shrink to dangerous levels, below 1%, which means that we may well be putting energy into nothing ultimately meaningful at all.

It doesn’t matter how fast you’re driving on the highway if you aren’t heading to the right destination. Are you?

— S


  1. Sarah Davies

    WOW!! I should have started playing the “why game” a few years earlier!! 🥴

  2. Rachel Davies

    Core values are tough to nail down. When I’m tired out my only core value is to rest my body and mind. Thoughts on how to transition from vague desires to core values?
