# Trust The Process

# For managers, venture capitalists, and fans of the Philadelphia 76ers, wisdom lies not in assessing outcomes, but rather, the process.

Human beings are results-oriented. Evolution itself selects on *results*, since both passing on one’s genes and premature death (before procreation) reflect what *did happen* rather than the underlying probabilities beforehand.

This turns out to be deeply frustrating, as we find ourselves evaluating, assessing, and litigating our decisions as a function of their outcomes rather than the process we followed. In turn, we make poor decisions, fixated on what we can and cannot guarantee rather than rational management of expectations.

**Coins & Cards**

Your friend asks you to retrieve a coin from *your* pocket.^{1} The friend offers you the opportunity to pay $X, then flip the coin. If you guess correctly, you’ll receive $3X. (If you guess incorrectly, you’ll have forfeited your $X.)

This is an easy one. Even odds of winning $2X or losing $X make this an obviously profitable venture. Furthermore, your friend allows you to pick the value of X, so you needn’t risk next month’s mortgage payment upon one turn of pitch and toss.^{2}
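For the skeptical, the arithmetic behind “obviously profitable” can be sketched in a few lines. (The `expected_profit` helper is illustrative, not from the text; it just encodes the bet as stated: a correct call of a fair coin nets +$2X, an incorrect call nets -$X.)

```python
# Expected value of the coin bet: pay X, receive 3X on a correct call
# of a fair coin (net +2X), forfeit the X otherwise (net -X).
def expected_profit(x):
    p_win = 0.5  # a fair coin makes either call even money
    return p_win * (2 * x) + (1 - p_win) * (-x)

# Every dollar staked returns fifty cents in expectation.
assert expected_profit(1) == 0.5
assert expected_profit(100) == 50.0
```

A positive expected value per flip is precisely why the *process* of taking this bet is sound, whatever any single flip delivers.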

Then the coin is flipped. You lose. You’re out $X. Your friend cackles in the glee typically reserved for supervillains and corporate tools.

Assuming there weren’t any probability-altering shenanigans, you made a sound decision, then lost nonetheless. Should we *now*, retroactively, determine that your decision was poor? This seems inconsistent with our previous thinking, and nothing, apart from our knowledge of the random outcome itself, has changed.

Though we all recognize that a random outcome cannot alter the probability distribution used to generate that outcome *after the fact*, even world-class poker players^{3} lament their “bad beats.”^{4} The ultimate practitioners of “process over results” find their emotions dominated by the latter.

**The Real World**

Focusing on the underlying distributions of outcomes, the probabilities of the random events that govern our lives, is psychologically unnatural. Managing evolution’s pressure in the opposite direction is daunting enough, but the realities of “real life”^{5} push outcome-driven, rather than process-driven, behavior.

For instance, there are over two million folks incarcerated in the United States. The vast majority are non-violent offenders, and the $100+ *billion* we spend policing and locking people up could surely be better spent, to say nothing of the economic value we leave on the table by removing these individuals from the workforce.

So, let’s say there’s a large number of non-violent individuals we could release from prison and/or modifications to sentencing guidelines that would reduce the prison population. Let’s also assume that the expected social cost of recidivism is far outweighed by the fraction of the $100+ *billion* we might repurpose for any number of alternatives.^{6}

Ah, but that’s a *process* argument. You know and I know that somewhere, amidst those hypothetically-released millions of non-violent offenders, one of them is going to commit a violent crime, and a political opponent is going to assert (factually, albeit misleadingly) that absent the prison reform, that felon would have remained behind bars. The *outcome* invalidates the *process*. And thus, a system almost no one would vote for persists. Moloch wins.

While the prison example is rather stark, how often are projects in a corporate setting evaluated solely on their outcomes, rather than the process by which the decisions were made?

Companies invest when the project promises to “double conversion rates” or “deliver x million dollars of revenue from new customer sales.” *Outcomes*. The same decision-makers balk at the promise of spending 500 hours implementing industry best practices in pursuit of higher conversion rates or new customers.

The *process* is entirely within the control of the company. The *outcomes* are subject to uncertainties of everything from human health to market conditions to geopolitics. Nonetheless, we choose projects with higher probabilities of success while opportunities with far greater upside are left waiting. We eschew mitigation efforts for existential risks because, after all, if the wolf doesn’t come, clearly, we shouldn’t have spent any time or resources considering that risk.

We trap ourselves in local maxima because pursuing global maxima, along with the neglected approaches that pursuit demands, requires trust in a *process* rather than assurances about desirable outcomes.

**Updating Priors**

The fact is, *outcomes* do contain information, but as human beings, we’re generally ill-equipped to process it.

The proper question is not “was this a positive or negative outcome?” The proper question is “does this outcome change my view of the world?”

When you took the fair coin out of your pocket, placed your bet, then lost, the outcome probably did not change your view of the *world*. You probably continued to believe that coins possess roughly even odds of heads and tails. That was a rational decision, considering how many coin flips you’ve experienced, seen on television before football games, or read about in statistics textbooks.^{7}

In the parlance of Bayesian statistics, we would say that you possessed a “strong prior.” A fair bit of life experience with coin flips means that the outcome of a single coin flip does not alter your perception of the odds associated with such events. Thus, you should judge the *process* not the *outcome*.
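A “strong prior” can be made concrete with a toy Beta-Bernoulli model. The specific prior counts below (Beta(1000, 1000) standing in for a lifetime of coin flips, Beta(1, 1) for a blank slate) are illustrative assumptions, not figures from the text:

```python
# Belief about a coin's heads-probability modeled as Beta(a, b),
# whose mean is a / (a + b). Observing flips simply adds to the counts.
def posterior_mean(heads, tails, prior_heads, prior_tails):
    total = prior_heads + heads + prior_tails + tails
    return (prior_heads + heads) / total

# A lifetime of coin experience: the losing flip barely moves belief.
strong = posterior_mean(0, 1, 1000, 1000)
assert abs(strong - 0.5) < 0.001  # still ~50/50: judge the process

# No prior experience: the same single flip shifts belief substantially.
weak = posterior_mean(0, 1, 1, 1)
assert abs(weak - 1 / 3) < 1e-9  # one observation drags the estimate to 1/3
```

The identical outcome carries almost no information against the strong prior and quite a lot against the weak one, which is the whole point of the section that follows.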

But what if one of your quantitatively-challenged co-workers asserted vehemently that “this plan is foolproof—the odds of failure are like one in a billion!” Then, a couple weeks later, some odd sequence of events has turned the foolproof plan into a fool’s errand and fingers are being pointed. Has a mistake been made? And if so, what was it?

Well, one of two things is true. The odds of failure really were one in a billion, and somewhere on the astral plane, a celestial conspiracy delivered a series of improbable coincidences that scuttled the plan. Or, the odds of failure were decidedly higher than one in a billion from the beginning and the overconfidence effect led to irrational exuberance.

When a so-called “billion-to-one” event happens, we *should* change our worldview, specifically, reassess whether that event’s recurrence needs to be considered as a risk far larger than a billion-to-one. The *outcome* becomes a *teachable moment* in a way the coin flip was not.
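One way to quantify that reassessment is Laplace’s rule of succession, which starts from a uniform (maximally skeptical) prior on the failure rate rather than taking the claimed odds at face value. The helper name is mine; only the one-in-a-billion figure comes from the text:

```python
# Laplace's rule of succession: with a uniform prior on an event's
# probability, after f occurrences in n trials the posterior mean
# is (f + 1) / (n + 2).
def laplace_estimate(failures, trials):
    return (failures + 1) / (trials + 2)

# The co-worker claimed p(failure) = 1e-9. After one failure in the
# plan's single trial, a skeptic with no prior commitment estimates:
estimate = laplace_estimate(1, 1)
assert abs(estimate - 2 / 3) < 1e-9

# The revised estimate exceeds the claim by a factor of hundreds of
# millions: far more likely the odds were wrong than the universe.
assert estimate / 1e-9 > 1e8
```

A true believer in the billion-to-one figure would barely update, of course; the point is that almost nobody has earned a prior that strong about a novel plan.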

**Learning (and failing to learn)**

Imagine that you reside in a town with three veterinarians. The first two charge similar prices to treat pets. The third charges double the price, but does not charge for return visits. Now let’s say that this vet has operated under such a policy for twenty years. If the vet were incompetent, either the market would have discovered this fact and refused to pay the higher price, or the costs of return visits, borne by this vet, would have bankrupted the practice.

They’ve stuck around for twenty years. Their office furniture seems new and well-maintained. Even if *you* haven’t updated your prior expectations about the size of vet bills, clearly, your neighbors have. They’ve decided to trust this vet’s *process*. And this vet, when they err, and a pet needs additional care for which they cannot charge, trusts their process and does not overreact to the *outcome* by changing their pricing strategy.

The neighbors have *learned* that the higher price is often worth it. The vet has *learned* that their pricing structure is superior. They trust the *process*.

In contrast, there is the common straw man^{8} of the original Star Trek’s (ir)rationalist, Spock. Throughout the iconic series, he predicts, with a comical number of significant digits, the probability that the Enterprise will survive some plan derived from the instincts of Captain Kirk. Inevitably, the “improbable” occurs, and the ship and its crew survive.^{9}

If someone tells you an event is improbable and it occurs nonetheless…and this happens repeatedly, you naturally begin discounting the opinion of that individual. If you *are* that individual, maybe you should reconsider the *process* by which you produce those probabilities?

A rational approach to decision-making requires wisdom. The serenity prayer asks for the ability to distinguish between the things we can and cannot change. The Bayesian’s prayer requests the ability to determine how much (or how little) we should adjust our worldview from any given outcome.

**Practical Applications**

When it comes time to hand out bonuses, raises, and equity, consider incentivizing superior process rather than being blinded by the recency bias that rewards fortunate *outcomes*. After all, past *outcomes* are less likely to portend impressive future results than a defensible *process*.

When a portfolio manager has seen their assets under management vanish in the face of “million-to-one” events on multiple occasions, perhaps you shouldn’t hand them your life’s savings.

When a venture capitalist has generated a single $500 billion exit and 1,000 failures whilst another has generated one hundred $1 billion exits and 1,000 failures, you should ask yourself if the first firm is truly *better* at picking winners and delivering guidance or if they simply got lucky *once*. What evidence suggests their *process* is superior? Sure, $500B > 100 x $1B… but that’s much ado about a single *outcome*.
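The asymmetry in evidence can be framed as a question of how precisely each track record pins down a fund’s hit rate. A sketch using the normal-approximation standard error of a proportion (the helper and thresholds are illustrative; the portfolio counts are the hypothetical ones above):

```python
import math

# Treat each investment as a Bernoulli trial and ask how well the
# observed record estimates the fund's true probability of a big exit.
def hit_rate_with_se(hits, total):
    p = hits / total
    se = math.sqrt(p * (1 - p) / total)  # standard error of a proportion
    return p, se

p_a, se_a = hit_rate_with_se(1, 1001)    # one mega-exit, 1,000 failures
p_b, se_b = hit_rate_with_se(100, 1100)  # a hundred exits, 1,000 failures

# Fund A's estimate is dwarfed by its own uncertainty; fund B's is not.
assert se_a / p_a > 0.9   # ~100% relative uncertainty: one lucky data point
assert se_b / p_b < 0.15  # a repeatable process is far better attested
```

A single data point simply cannot distinguish skill from luck, however large the exit; one hundred repetitions can.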

When a strategy is faring poorly, we must assess whether we have been unlucky, as in the coin-flip example, or whether our struggles indicate that our initial assumptions no longer hold. When a strategy yields profits, we needn’t necessarily believe ourselves to be brilliant—maybe the coin simply fell in our favor. Updating priors means updating our estimates of the *odds* of outcomes rather than simply reacting to the outcome itself.

And most importantly, the question that should remain top-of-mind is not “what happened?” or even “why did this happen?” The key question is “what should I *learn* from what happened?”

_{1 Worded specifically to ensure that you believe the coin to be fair. Let’s assume, for the sake of this argument, that it is. Unfair coins will be discussed shortly, don’t you worry!}

_{2 With all respect to Rudyard Kipling, whose work I just pilfered. If you lose you will not be required to “start again at your beginnings” and likewise, will be encouraged to breathe a word or two about your loss.}

_{3 Experts in the field of applied probability theory if ever there were.}

_{4 The term of art given to a situation when a sound probabilistic decision nonetheless results in the loss of chips.}

_{5 Business, politics, conversations with your spouse, etc.}

_{6 Education, infrastructure, tax deductions, free hot dogs at baseball games, whatever.}

_{7 Yes, I know that people bet on football games before the Super Bowl at mediocre odds. Yes, I know that statistics textbooks and real life differ in several meaningful ways. Everybody OK with this now?}

_{8 Or “Straw Vulcan”}

_{9 Give or take a couple low-cost supporting actors who meet some grisly end when Scotty fails to beam them up.}
