The Value of a Dollar

A parable about gambling illustrates the survival issues created by extreme income inequality

Khayah Brookes
7 min read · Sep 20, 2020
In 2020, the value of a quarter suddenly skyrocketed. (Photo credit: author)

Every dollar has the same value as every other dollar. That’s why we use them as a perfectly fair medium of exchange. Right?

I’d literally bet my life that’s not the case.

A dollar’s value is fairly steady within most localized contexts. Yet I can tell you empirically that equal amounts of money do not always have equal value, as can anyone else who has had coin-op laundry, limited time, and dollar bills that do not fit into the quarter-slot receptacle on the machine.

Granted, that’s an artifact of currency, not money as an abstraction. Still, the differing use values might result in a differential exchange value: you might be glad to trade your neighbor a useless dollar bill for $0.50 in precious quarters if it lets you finish your laundry and get out of the house.

Money may also have different value to people depending on how much they already have, or how much they need. If you do not have enough to eat, $50 might literally have a lifesaving value, and an $8/hour job might be desirable and actually worthwhile. If you have a few billion dollars, it’s probably worthwhile to give someone hundreds of dollars per hour just to save you from the inconvenience of counting your money.

If you’re familiar with the St. Petersburg Paradox, you might be satisfied with the idea that the utility you attribute to your money varies roughly with the logarithm of how much money you already have. This means $50 is worth about four times as much to you if it takes you from having zero to $50, compared to if it increases your wealth from $1,000,000 to $1,000,050. (This is because log(1,000,000)/log(50) ≈ 3.53, a ratio of total log-utilities; I’m rounding up to 4.)
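That arithmetic is easy to check. A minimal sketch (a ratio of logarithms comes out the same in any base, so the base used here doesn’t affect the result):

```python
import math

# Total log-utility of each wealth level, per the logarithmic model.
u_50 = math.log10(50)              # utility of going from $0 to $50
u_million = math.log10(1_000_000)  # utility of holding $1,000,000

# Ratio of the two total utilities; ratios of logs are base-independent.
print(round(u_million / u_50, 2))  # → 3.53
```

Note that this compares total utilities; a strictly marginal comparison, the utility of the millionaire’s next $50, would make the ratio far larger.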

But wait–what kind of “worth” is that? If $50 keeps you from starving to death, or even experiencing a prolonged period of hunger, then it means much more to you than whatever $200 means to a very wealthy person who might spend as much on a last-minute seat upgrade. The ratio of value between one person’s lifesaving windfall and another person’s extra legroom must be more than 4. So let’s not try to include crossing the threshold of death in our calculations [1].

Instead, let’s compare the value of having enough groceries for a week, which costs around $50 if you’re careful, to subsisting on half-rations for two weeks. Let’s try to be perfectly objective: is the marginal difference made for one person by having enough food to subsist without starvation anywhere close to the marginal value four well-off people would obtain from sharing a $50-per-plate meal in a comfy restaurant?

Nah, they’re not commensurate. They may not even be commensurable. [2]

To be fair, the St. Petersburg paradox isn’t about equating suffering and luxury. The question it poses is: how much would you be willing to pay for a shot at a lottery that’s guaranteed to keep giving you a trickle of money for as long as you keep playing it? It’s really a question about ROI: how does the subjective value of the payoff depend on the time, attention, dice-rolling, and bookkeeping you had to spend on getting it? The cost of the ticket is negligible compared to any arbitrarily large payoff.
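For concreteness, the paradox is usually formulated as a coin-flip game: flip until the first tails, and the pot doubles with each head. A sketch under that standard formulation (the bankroll figures below are my own illustrative choices):

```python
import math

def expected_payoff(max_rounds):
    """Expected value of the St. Petersburg lottery truncated after
    max_rounds flips: round k pays $2**(k-1) with probability 2**-k,
    so every possible round contributes exactly $0.50 in expectation."""
    return sum(2 ** (k - 1) * 2 ** -k for k in range(1, max_rounds + 1))

# The untruncated expectation diverges, but a banker with $W can only
# cover about log2(W) rounds, so the "fair" ticket price grows only
# logarithmically with the banker's wealth.
for wealth in (1_000, 1_000_000, 1_000_000_000):
    rounds = int(math.log2(wealth))
    print(f"${wealth:,} bankroll: fair price ${expected_payoff(rounds)}")
```

This is why Bernoulli’s resolution swaps dollar payoffs for log-utility payoffs: the logarithm turns the diverging sum into a finite, wealth-dependent one.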

This tells us that the logarithmic value function derived for wealth from the lottery in the St. Petersburg paradox really applies to the time or effort — the non-refundable investment of your irreplaceable life — that you would choose to invest in such a payoff system. If it’s correct, then someone with no money to speak of would be willing to put around four times as much time and effort into earning $50 as our hypothetical millionaire would.

I’m not convinced this captures the relationship between the way we value our effort and the payoff we get for it. It drastically overestimates the compensation poor people may actually demand for their time and effort, while dramatically underestimating the compensation expected by the wealthy.

Say someone with negligible financial assets would consent to work for an $8 hourly wage for a year (2,000 hours), earning about $16,000. (This is higher than the federal minimum wage in the US.) I don’t believe you could convince someone with $1 million in liquid assets to work full-time for three months at the same job, even if they took home the same $16,000 in 1/4 of the time, or a billionaire to mop floors for even three weeks. If they did, it’s probably because it satisfies their curiosity or some other drive, not because the pay itself is worth it.

Furthermore, you’d probably still have negligible assets after working for a year at $8/hr, simply because you’d have to eat the whole time. If you earn that money many times faster, your cost of living while you earn it is proportionally lower, leaving you with even higher proportional compensation for your effort.
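To make that concrete, here is the arithmetic with a purely illustrative subsistence cost of $1,200 per month (that figure is my assumption, not anything established above):

```python
earnings = 16_000              # $8/hr x 2,000 hours, from the example above
living_cost_per_month = 1_200  # assumed bare-subsistence cost (illustrative)

# Same gross pay, earned over a year versus over three months.
for months in (12, 3):
    net = earnings - living_cost_per_month * months
    print(f"{months} months of work: ${net:,} kept of ${earnings:,} earned")
```

Under these assumptions the year-long worker keeps $1,600 of the $16,000, while the three-month worker keeps $12,400: the same gross pay yields nearly eight times the net compensation.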

The inverse of a logarithm is an exponential. If we work backwards from the assumption that a logarithm is an accurate estimate of the utility function for people with moderate wealth, it seems like the compensatory value of money as a function of wealth — the value people place on money they receive for their time — must increase faster than exponentially as wealth approaches zero.

Or to look at it from another direction, a dollar that you are forced to eat has less financial value than a dollar that you may invest and earn dividends from, further sharpening the difference between the use values of each. This means the financial value you place on your time increases no less than quadratically as you accumulate wealth.

Maybe this accounts, in part, for the apparently unconstrained increases in ratios between executive and employee compensation that we’ve seen in the last couple of generations: the increase in the relative affluence of the wealthiest workers might actually cause a steep increase in the relative price one must pay to induce them to show up, independent of the value they produce for whoever’s paying them! As a labor market segment’s demand for cash falls, the price you must offer for its time rises.

On the other hand, poor people will take whatever they can get if it keeps them from starving.

Of course, this is a gross oversimplification of the complexities of labor markets, but I think it might permit some insight into many different perspectives on financial equity, and the human rights problems inherent in the progression of extreme income inequality. [3]

[1] We might assign infinite value to the difference between being alive and dead, but placing a finite monetary price on human life is actually a very ancient and respected practice. Weregild, or man-price, is one way that Anglo-Saxon and other early Germanic peoples compensated each other’s families when someone was killed. More recently, actuaries have done a lot of interesting work representing the value that is lost if a person dies. Depending on the table you use, it varies with your age, level of education, race, and biological sex, among other factors. (This site demonstrates one actuarial model that estimates my life as being worth almost 60% as much as my next-door neighbor’s, since we have the same level of education, but he belongs to a more highly valued race and gender.) We might say that the baseline worth of a live person is, say, $800,000, or $5 million, and this reduces the drastic difference in the “value” of the life of a person who is on the brink of starvation versus a well-off person. On the other hand, establishing that baseline would make the ratio of subjective value between these people’s time even more strikingly divergent.

[2] If you disagree with me on this, that’s cool. I’ll make you a bet! First, restrict yourself to half-rations for two weeks. Save the money you’d spend on food, and give it to me. On the last day, we’ll go to a nice restaurant, and I’ll order a meal that costs exactly that amount, and you can watch me eat it. I’ll do everything I can to really savor it, to maximize the value of my experience. If, at the end of the meal, you still think that your pain, suffering, and weakness after two weeks of hunger are less than or equal to my nourishment and enjoyment from the meal, you win the bet, and I’ll give you the money back! And if you don’t think that’s a remotely fair comparison, I’m pretty sure my point is made.

[3] Apart from comparing marginal values, this model doesn’t take choice into account. The logarithmic utility function that Bernoulli derived applies to a situation where your options are (1) deciding on an amount to bet on the lottery, and (2) walking away and doing something else with your time. This is much more analogous to the situation of the wealthy person, who can choose to take a job with some given compensation package, or walk away. The person who assents to work full-time for a year at a job that barely affords them food, let alone capital gains, is really choosing between subsistence and existential uncertainty; their position is more like guaranteed drudgery on one side, and a round of Russian roulette followed by potentially less-bad drudgery if they don’t die on the other. So if economics is a framework for rational, value-maximizing choices…what kind of choice is that?

Khayah Brookes is a data scientist and applied ethicist in the Pacific Northwest. She likes to see good information put to good use.

Responses (2)