Lecture 20: Uncertainty


Description

This video explains the economic concept of decision making under uncertainty. See Handout 20 for relevant graphs for this lecture. 

Instructor: Prof. Jonathan Gruber

 

 

[SQUEAKING] [RUSTLING] [CLICKING]

 

JONATHAN GRUBER: All right, let's get started. We have three weeks left in the class. And what we'll be doing for the next three weeks is really a series of applications of what we've learned so far to sort of help you understand how we add some richness to what we've learned and sort of take it to some more real world applications.

So we're going to start that today by talking about something we've really ignored in the course so far, which is uncertainty and how uncertainty affects your decision making. So what we've done so far in this class is we've sort of said, look, we've assumed whenever you make decisions you make them with full knowledge and full certainty. But many, many decisions in life are made under conditions of uncertainty.

So consider your decision to study for the final in this class. In our models so far, you could optimize your studying across different units of the class. In our model so far, you'd know which unit would be represented on the final in what proportion. You'd optimize your studying appropriately.

But of course you don't know that. You face uncertainty about what will be covered on the final and in what proportion. You know the whole course-- you're responsible for the whole class. But obviously you allocate your time. You're uncertain about how to allocate your time across the different subjects on the test.

So how do you make that decision? We need to bring our tools to bear on thinking about these kinds of decision making under uncertainty situations. And this isn't just about the test. I decide whether to bring my umbrella today. If I bring my umbrella, there's a chance I'm going to lose it. But I don't want to get wet. I have to think about whether it's going to rain. It all depends on how certain I am it's going to rain, et cetera.

There's decisions about whether to bet on a sporting event. That's a decision of uncertainty. And that's all the fun stuff in your life. When you get to the anxiety ridden adult life, you've got things like whether to buy a seven year mortgage or a 30 year mortgage, whether to buy health insurance, what school to put your kids in. All these things involve a huge amount of uncertainty. And we have not yet developed the tools to deal with this.

What's really cool is that economics has a very useful tool to think about exactly these kinds of situations. That much like the other tools we've dealt with this semester, it's pretty easy once you understand it. But it's a huge amount of power for explaining the world. And that's the tool of expected utility theory. And that's what we'll focus on today. That's what we'll learn today, the tool of expected utility theory.

Now, I'm going to ask you a question. As always, I don't want you try to outsmart me. I just want a quick, gut reaction. I'm going to offer you a bet. Not really, but imagine I was. I'm going to flip a coin. Heads you get $125. Tails you give me $100. So you win $125 versus lose $100. How many of you take that bet?

So yeah. You're a slightly more aggressive class than usual. About 40% of you are taking the bet. Usually I get more like 20%, 25%. OK. Now-- yeah.

AUDIENCE: [INAUDIBLE]

JONATHAN GRUBER: No. It's a one time bet. So basically-- now how do we think about this, whether it's a good idea or not? Now those who raised your hand probably quickly did the math and did in your head what we call an expected value calculation. What's the expected value of any gamble? The expected value of any gamble is the probability that you win times what you win plus the probability that you lose times what you lose.

So you did that calculation in your head. You said, as long as he's not cheating and is using a fair coin, it's 0.5 times 125 plus 0.5 times minus 100. And that is an expected value of $12.50. So those of you who raised your hand probably did that calculation quickly in your head and said, yeah, this is a positive expected value. This is what we call in economics a more than fair bet.

More than fair. A fair bet is one with an expected value of 0. So a bet with a positive expected value is more than fair. And you in your head thought that through. It's more than fair. And that's why some of you raised your hands.
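(A quick sketch in Python, not from the lecture itself, just checking the expected value arithmetic for the win-$125 / lose-$100 coin flip.)

```python
# Expected value of the coin-flip bet: win $125 or lose $100, each with probability 0.5.
p_win, p_lose = 0.5, 0.5
win, lose = 125, -100

expected_value = p_win * win + p_lose * lose
print(expected_value)  # 12.5 -> positive expected value, a "more than fair" bet
```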

But many of you didn't. And that doesn't mean you were wrong. It just means this is not the right way to think about it. The right way to think about it is that what you care about is not dollars. What you care about is utility. Dollars are meaningless. What you care about as a consumer is your utility. And so we don't want to think about expected value. We want to think about expected utility.

Now, what is expected utility? It's the same kind of formula as this but with one change. Expected utility is the probability that you win times your utility if you win plus the probability that you lose times your utility if you lose. And that is somewhat different than expected value. And the reason is because utility is not a linear weighting of dollars. Utility is a concave weighting of dollars.

As such, because your utility is a concave weighting of dollars, it exhibits diminishing marginal utility-- diminishing marginal rate of substitution, as we talked about ad nauseam in consumer theory.

That means that the next dollar is worth less to you than the previous dollar-- the next dollar's worth is diminished. The next slice of pizza is worth less to you than the previous slice of pizza. Likewise, the next dollar's worth less to you than the previous dollar. As a result, losing $1 makes you sadder than winning $1 makes you happy. There's a nonlinearity that comes from diminishing marginal utility and diminishing marginal rate of substitution.

So for example, let's think about our typical utility function that we use previously in consumer theory. Utility equals the square root of consumption. And let's just say you consume all your income, as we always did. We've talked about savings the last couple of lectures. Let's put savings aside again and just assume people consume all their income.

And so your utility is the square root of consumption. And let's say your initial consumption, c0, is 100. You start with $100 of consumption. So your initial utility u0 is 10.

Now, I give you-- I offer you this bet. What's the expected utility of this bet? Well, the expected utility of this bet is the probability you win, 0.5, times utility if you win, where utility if you win is the square root of your consumption if you win. What is your consumption if you win? If you win the bet, what is your consumption? Somebody raise your hand and tell me. You start with 100 and I make this bet with you. Yeah.

AUDIENCE: 225.

JONATHAN GRUBER: 225. You started with 100 and you won 125. If you lose the bet, what do you have? What do you have if you lose the bet? 0. OK? So your expected utility is 0.5 times your utility of what you get if you win plus 0.5 times your utility of what you get if you lose.

Now if you do that math, you will find that that is 7.5. Your expected utility of this bet is 7.5, which is less than your initial utility of 10. So you should not take the bet, which, through the mechanism of psychology, is why many of you didn't. You should not take the bet.
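(Another quick Python check, not from the lecture: the expected utility of the same bet under the square root utility function and the $100 starting consumption used above.)

```python
# Expected utility of the win-$125 / lose-$100 bet when u(c) = sqrt(c) and c0 = 100.
from math import sqrt

def u(c):
    return sqrt(c)

c0 = 100
expected_utility = 0.5 * u(c0 + 125) + 0.5 * u(c0 - 100)  # 0.5*sqrt(225) + 0.5*sqrt(0)
print(u(c0), expected_utility)  # 10.0 versus 7.5 -> reject the bet
```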

And the reason is because you are-- humans are-- what we call risk averse. Risk inherently has negative value to us. A certain dollar is worth much more to us than an uncertain dollar. Just like $1 today is worth more than $1 tomorrow, a certain dollar is worth much more than an uncertain dollar.

To see why, the best way-- this intuition here is graphical. So let's go to figure 20-1 and just sort of slowly walk through this. It's a little confusing graph. On the x-axis is your wealth, your total consumption or money you have. You consume everything you have. So it's wealth, or alternatively consumption.

On the y-axis of figure 20-1 is your utility. It's the graph of how much you consume against utility. And as you can see, this is a concave graph. It's exhibiting diminishing marginal utility. Each dollar of wealth adds utility but less and less over time. Just like a slice of pizza makes you happier but less and less over time.

You get that. So the shape of this curve is true for any utility function that features diminishing marginal utility. Pretty much any utility function we have used this semester-- it's not as if we're trying to trick you-- has diminishing marginal utility. So as a result, it has this shape.

Now, with a utility function of this shape, let's evaluate the gamble I just gave you. You start with wealth of 100. You see on the x-axis the 100 point. You can trace that 100 up to the utility curve, and then go over to the y-axis. That corresponds to a utility of 10.

Now, think about what the gamble does. What the gamble does is say look. There's two possible outcomes with 50% chance each. One is wealth of 225. That is the point all the way to the right. That gives utility of 15. The other is wealth of 0. That's the point all the way to the left. That gives utility of 0.

What is the average of those two? Well, it's just a linear combination. So 50% chance of each. So the average of those two is a wealth of 112.5 but an expected utility of 7.5. So you draw a chord between those two points, between the 0 point and point B. You find the midpoint of that chord. That's wealth of 112.5. But then you trace that over to the utility function.

And you see the expected utility is only 7.5. And that's because we're not using a linear combination. We're using a nonlinear combination-- a nonlinear concave combination-- which means that moving up in terms of wealth makes you less happy than moving down in terms of wealth makes you sad. And you're really sad at 0.

So going from 100 down to 0 makes you way sadder than going from 100 up to 225 makes you happier. And that's all because of diminishing marginal utility of income. So it's natural that this gamble would make you worse off, even though it's more than fair, because, yeah, you're somewhat happier if you win. If you win, utility goes up from 10 to 15. That's great.

But if you lose, utility goes down from 10 to 0. That's really bad. So you don't want to take this risk, which is why, although you may not realize it, many of you wouldn't want to take that gamble. So using this graphic, let's ask-- are there questions about that? Yeah.

AUDIENCE: So would we take the bet if our gain utility outweighs our loss utility, or we would not take the bet if our utility [INAUDIBLE].

JONATHAN GRUBER: Well, you've answered it-- those two questions are the same. It just depends on the probabilities. If the probabilities are 0.5 each, then those two questions are exactly the same, because with 0.5 each, the gain outweighing the loss is the same as being net positive.

But if the probabilities aren't 0.5, you want to use this equation. So you basically want to say is the weighted-- it's about the weighted average change in utility, essentially, where the weights are these probabilities. Yeah.

AUDIENCE: Is there any utility calculation in actually gambling. Like can someone dislike gambling--

JONATHAN GRUBER: Hold on. I'm going to come to that. We're going to come back to gambling. We'll talk all about the lottery at the end. OK. But do people understand the basics of this graph? So using this graph, tell me the following-- answer the following question.

How much-- let's say-- it's a hard question. See if you can get this. Let's say that I said, you know what, class, I'm going to force you to take this gamble. I'm going to come in here, and I'm going to tell you I'm locking the door. You're not leaving without taking this gamble unless you pay me not to force you. So I'm going to offer you a more than fair bet. Would you be willing to pay me to get out of that bet? And how much would you be willing to pay me? Yeah.

AUDIENCE: I'd be willing to pay you up to 2 and 1/2 dollars.

JONATHAN GRUBER: Up to 2 and 1/2 dollars.

AUDIENCE: [INAUDIBLE]

JONATHAN GRUBER: 2 and 1/2 dollars. I don't understand.

AUDIENCE: Sorry, [INAUDIBLE].

JONATHAN GRUBER: Well, OK, that's--

AUDIENCE: $100 start, right?

JONATHAN GRUBER: Yeah. $100 start.

AUDIENCE: $25.

JONATHAN GRUBER: OK. You're thinking about that sort of right, but you're still thinking in a linear world, not nonlinear world. Look at the graph. OK. Yeah.

AUDIENCE: Would it be $43.75?

JONATHAN GRUBER: It would be exactly $43.75. Why?

AUDIENCE: Because right now the bet essentially gives me the utility of $56.25.

JONATHAN GRUBER: So your answer in the front was exactly right in a linear world. But in a nonlinear world, you've got to account for the curvature. So the way to think about it is right now, if I force you to take that bet, it leaves you with the utility of having $56.25 for certain. You can see that. Just go backwards from the 7.5 utility to the level of wealth that's equivalent to it.

What that means is you would rather pay me $43.74 than take that bet. Think about how crazy that is for one second. I'm offering you a bet that is more than fair, and you will pay me almost half of your entire wealth to avoid taking that bet. And that's only with our standard utility function we always use.

That's risk aversion. And the reason is because 0 sucks so badly. The reason is because you're so sad going to 0 that you really don't want to be in that situation. And so you will actually pay me $43.75 to avoid a more than fair bet. That's what's crazy.
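(A small Python check, not from the lecture, of the certainty equivalent and the $43.75 figure discussed above.)

```python
# Certainty equivalent of the forced gamble under u(c) = sqrt(c), starting from 100.
from math import sqrt

expected_utility = 0.5 * sqrt(225) + 0.5 * sqrt(0)    # 7.5
certainty_equivalent = expected_utility ** 2          # wealth with the same utility: 56.25
willingness_to_pay = 100 - certainty_equivalent       # how much you'd pay to avoid the gamble
print(certainty_equivalent, willingness_to_pay)       # 56.25, 43.75
```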

So another way to see this-- let me ask you another question. How large-- let's offer you the same bet. Flip a coin. Tails, you lose 100. Heads, you win x. How large would x have to be for you to take the bet? Tails, you lose 100. Heads, you win x. Yeah.

AUDIENCE: $300.

JONATHAN GRUBER: $300. Why?

AUDIENCE: Because then your expected utility is 10.

JONATHAN GRUBER: Right. Exactly. If it's $300, then I'm doing the square root of 400, which is 20, and the square root of 0. You average those, and you're going to get expected utility of 10. So for you to take that bet, I would have to say, tails, you lose 100, heads, you win 300. I need to give you a monstrously more than fair bet.
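(A quick check, not from the lecture, of why the winning prize has to be $300 for this person to take the bet.)

```python
# Find the prize x such that a 50-50 bet of winning x or losing 100, starting from 100,
# gives expected utility equal to the initial utility of 10 under u(c) = sqrt(c).
# Condition: 0.5 * sqrt(100 + x) + 0.5 * sqrt(0) = 10, so sqrt(100 + x) = 20.
x = 20 ** 2 - 100
print(x)  # 300
```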

And that is due to the principle of risk aversion. Risk aversion isn't something made up. It's not some crazy concept. It just falls naturally out of diminishing marginal utility, because losing-- moving down-- makes you sadder than moving up makes you happier. Any questions about that? Yeah.

AUDIENCE: I guess the question is, doesn't this kind of depend more on the utility function? Because I know that--

JONATHAN GRUBER: OK. Stop there. Let's go to the next section. You guys are way ahead of me as always. Let's talk about a couple of extensions.

First extension, change the utility function. Suppose your utility function was of the form u equals 0.1 times c. Now I've chosen this particular form because the initial conditions are the same. With c0 of 100, u0 is still 10. So I'm starting from the same point as I was before. But now, would you take the bet, and why?

If that's your utility function, would you take the bet and why? Just do the math. What's your expected utility of that bet? Your expected utility is 0.5 times your utility of 225-- so 0.5 times 0.1 times 225-- plus 0.5 times your utility of 0-- so 0.5 times 0.1 times 0. And if you write that out and solve it, you get 11.25, which is greater than 10.
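(A quick Python check, not from the lecture, of the risk-neutral calculation above.)

```python
# Same win-$125 / lose-$100 bet with the linear utility u(c) = 0.1*c and c0 = 100.
def u(c):
    return 0.1 * c

expected_utility = 0.5 * u(225) + 0.5 * u(0)
print(u(100), expected_utility)  # 10.0 versus 11.25 -> the risk-neutral person takes the bet
```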

So you would take the bet. Any questions about the math? I just did the expected utility evaluation. So you would take the bet. What's the difference? Why do you take the bet here? Yeah. Yeah. Let me get-- go ahead.

AUDIENCE: This is a linear utility function. So--

JONATHAN GRUBER: If you have a linear utility function, what you care about is expected value. So you can see that-- do I have that, yeah-- in figure 20-2. This is the case we call risk neutrality. With the linear utility function, you're risk neutral because your linear utility function does not have what? Does not have diminishing marginal utility.

As a result, you just care about expected value. There's no difference between expected value and expected utility. Now, the numbers are a little bit different because of the functional form, but it gives the same outcome. You always take a more than fair bet and you reject a less than fair bet.

So as people move from risk averse to risk neutral, or as utility features less and less diminishing marginal utility, they'll be more and more willing to take gambles. But it doesn't have to stop there. We can go further. Imagine that utility was of the form c squared over 1,000. A weird utility function, but once again created so that if c0 equals 100, u0 equals 10.

Now let's do the expected utility calculation. Well, if you take the gamble, there's a 0.5 probability that you win-- so utility would be 225 squared over 1,000-- and a 0.5 probability that you lose, so you just get 0. And the expected utility in that case is 25.3, which is way bigger than 10.
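(A quick Python check, not from the lecture, of the risk-loving calculation above.)

```python
# Same bet with the convex, risk-loving utility u(c) = c**2 / 1000 and c0 = 100.
def u(c):
    return c ** 2 / 1000

expected_utility = 0.5 * u(225) + 0.5 * u(0)
print(u(100), expected_utility)  # 10.0 versus 25.3125 -> the risk lover takes the bet
```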

So you take this gamble. Why? Because now you're risk loving. Because what does this say about the diminishing marginal utility of income? This actually says you have increasing marginal utility of consumption. A utility function like this says the next slice of pizza makes you even happier than the previous slice of pizza.

So go to figure 20-3. This is the risk loving case, where you actually have increasing marginal utility of consumption. We don't really talk about this case because it doesn't make much sense. But just to understand how this works, it's the same calculation as before: you start at a point like A. If you win, you go to B. If you lose, you go to 0.

Well, with this shape of utility function, going to B makes you way happier than going to 0 makes you sad. So you love the gamble. In fact, with this utility function, you would take an unfair bet. So for example, imagine I change the gamble to one where it's win 75, lose 100. So I changed the gamble now.

Win 75, lose 100. I made an unfair bet. The expected value is negative. Well, this person will still take that bet. If you do the math, if they win 75, they get 175. Just replace the 225 with 175. And you do the math. You're going to find that the expected utility in that case is 15.3, which is greater than 10.
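(And the same check, not from the lecture, for the unfair win-$75 / lose-$100 version of the bet.)

```python
# The unfair bet: win $75 or lose $100, still with u(c) = c**2 / 1000 and c0 = 100.
def u(c):
    return c ** 2 / 1000

expected_utility = 0.5 * u(175) + 0.5 * u(0)
print(expected_utility)  # 15.3125 > 10 -> the risk lover takes even this unfair bet
```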

So even an unfair bet-- lose 100, win 75-- will still leave these risk loving people better off than if they hadn't taken a bet. Yeah.

AUDIENCE: So when we're doing the losses, when we set that to 0, we're assuming that they're starting off with the amount that they could possibly lose, correct?

JONATHAN GRUBER: Oh, that's-- OK. Great. I'm going to come to that next. That's true in these examples. But you've called me on an important assumption I'm making in these examples. Let's make sure we understand risk neutral and risk loving. People understand that?

So now let's-- now you actually raised another issue. Let me ask a new question. Once again, forget everything you've learned. Gut instincts. Here's the bet I'm offering you. Flip a coin. Heads, you win $12.50. Tails, you lose $10. Who takes that bet? Raise your hands if you take that bet.

OK. That's backwards. More of you should take that bet, not less of you. And why is that? It's because of exactly what you just pointed out. It's because basically if you think about that-- think about that bet. Think about-- let's go back to our old utility function, u equals square root of c. Let's think about that bet.

So what's the expected utility? It's 0.5 times the square root of 112.5, if you win the $12.50, plus 0.5 times the square root of 90, if you lose the $10. And that expected utility is about 10.05, which is greater than 10. So you should take that bet.
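(A quick Python check, not from the lecture, of the small-gamble arithmetic above.)

```python
# The scaled-down bet: win $12.50 or lose $10, back with u(c) = sqrt(c) and c0 = 100.
from math import sqrt

expected_utility = 0.5 * sqrt(100 + 12.5) + 0.5 * sqrt(100 - 10)
print(expected_utility)  # about 10.05 > 10 -> even the risk-averse person takes this one
```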

What changed? You're still risk averse. The guy who would've paid me $44 to avoid the other bet is now happy to take this bet. Same person. What changed? What changed? Yeah.

AUDIENCE: Smaller portion of [INAUDIBLE].

JONATHAN GRUBER: Right. And why does a smaller portion of the income change things here?

AUDIENCE: [INAUDIBLE]

JONATHAN GRUBER: Exactly. Because ultimately, infinitesimally, it's a linear curve. So if you go back to figure 20-1, for any given epsilon change from point A it's linear. So essentially as gambles get smaller and smaller relative to your starting point, you become more and more risk neutral. Because, yeah, you're a bit sadder than happier, but just a bit.

And remember, it's locally linear. So if I did 12 and 1/2 cents versus $0.10, then unless you were crazy risk averse, you should take that bet, because at that point it's so tiny relative to your wealth you might as well use expected value. So as gambles become smaller relative to your income, the utility function becomes locally flatter. And as a result, you become more and more risk neutral. Questions about that?
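(A short Python sketch, not from the lecture, illustrating the point about local risk neutrality: the same 125-to-100 bet is scaled down relative to a wealth of 100, and the risk premium-- what you would pay to avoid the bet-- shrinks toward zero.)

```python
# Risk premium of the 125-to-100 bet at different scales, with u(c) = sqrt(c) and wealth 100.
from math import sqrt

wealth = 100
for scale in [1.0, 0.1, 0.001]:  # $125/$100, $12.50/$10, 12.5 cents/10 cents
    win, lose = 125 * scale, 100 * scale
    expected_utility = 0.5 * sqrt(wealth + win) + 0.5 * sqrt(wealth - lose)
    certainty_equivalent = expected_utility ** 2
    expected_wealth = wealth + 0.5 * win - 0.5 * lose
    risk_premium = expected_wealth - certainty_equivalent
    print(scale, round(risk_premium, 5))  # 56.25, then about 0.31, then about 0.00003
```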

Last point. Why did so many of you still not take that bet? Well, the answer is that even the model we've talked about so far misses an important psychological phenomenon. So now I'm stepping out of standard microeconomics into the realm of behavioral economics. Unfortunately, due to time this semester, I'm not going to get to my lecture on behavioral economics.

But if you guys want to learn more-- I'll talk a little bit about it in the next few lectures. I'll sprinkle it in. But course 14.13 is a fascinating course we offer here about how you build psychology into thinking about economics. And here's one example.

Why is it that even with this gamble, even if I'd done the 12 and 1/2 cents versus $0.10 gamble, a bunch of you still wouldn't have taken it? Why is that? That's because humans not only feature risk aversion, humans also feature loss aversion, which is an irrational behavioral bias by which losing in itself makes us sad relative to winning. Taking away something we have makes us sadder than getting something new.

So here's a standard experiment that's run. They get a bunch of $5 mugs. Mugs worth about $5. They take people. And randomly half of the people they ask, how much would you pay for this mug? And half the people they give them the mug and say, how much would you sell your mug for?

Half the people they say, here's a mug, how much would you pay for it? The other half they say, here's a mug. It's yours. But I want to buy it back. How much would you sell it to me for? The average person will pay $3, but the average person wants $7 to sell it back.

That makes no sense. Either way, it's trivial. It's just a mug. It's trivial relative to your wealth. It's just a mug. It doesn't really matter. But once people have it, they feel like, no, that's already mine. I don't want to sell it. That's loss aversion. People are biased by their starting points.

Your very starting point dictates your willingness to take a gamble, which is not true in a standard economic model. But it's true in all laboratory experiments in psychology. So the reason people don't like gambles is not only that they're risk averse. Even more, they're loss averse.

So for example, there are great economic studies which show that there's a massive bias against selling your house for anything under the price you paid for it. People are pretty much linear, and they're willing to sell the house above the price they paid for it. But there's this huge notch at the price they paid for it. They simply don't want to sell for less than they paid for it. And that's just the loss aversion idea. This comes up in many other contexts.

So there are two reasons why people won't take gambles-- the standard risk aversion reason and the sort of extra psychological bias. Now, this leads us naturally to the next section I want to go to, which is to talk about applications of this theory and why it's important. And the first application I want to talk about is insurance.

Insurance is big business in America. Individuals in America-- forgetting the government, just people-- spend $1.5 trillion a year on insurance products. Almost 10% of GDP. 10% of our entire economy is people buying insurance. Health insurance is the biggest, then life insurance, casualty and property insurance, auto insurance, et cetera.

Added all up, it's almost a tenth of our entire economy. Why? Because they're risk averse, and also loss averse. I'm going to put loss aversion aside and just use the standard framework. That just strengthens the argument. But it's because they're risk averse.

So let's do the math. Imagine you're a single 25-year-old male. I'm only being gender biased here because there's no risk of pregnancy. So you're a single 25-year-old male in perfect health. So the only risk you face health wise is getting hit by a car. That's basically your risk. Otherwise, you're basically not going to go to the doctor.

So essentially, let's say your income is $40,000. That's your income. And let's say that there's a 1%-- since it's Cambridge, there's a non-trivial chance you get hit by a car. Let's say there's a 1% chance, probability 0.01, you'll get hit by a car every year. And if you do, you're going to face $30,000 in medical bills.

So you have a $40,000 income. There's a 1% chance you get hit by a car. And if you do, you'll face $30,000 in medical bills. And let's assume for the minute you'll still get to work. Your income will always be there. You're just going to have to face a bunch of medical bills. There's a separate issue about whether you might have to miss your job. That makes this even worse.

But let's ignore that. You get to go to your job. You're just going to have to take a week to get patched up. And $30,000 is nothing, by the way, for a hospital bill. A typical, for example, heart attack hospital bill is well over $100,000. So $30,000 is pretty modest. Yeah.

AUDIENCE: Wouldn't the person that hit the guy have to get their insurance cover the medical bill?

JONATHAN GRUBER: Well, let's ignore that for a second. Right now we're talking about why you want insurance overall. We'll later get into who-- we can discuss later who should own that insurance, who should bear the risk. Right now it's just simply why you'd want insurance.

So the expected cost of getting hit by a car-- the expected value, which is negative, so an expected cost-- is minus 300. So every year you have a $300 expected cost of getting hit by a car. And let's say your utility function is u equals square root of c. And let's assume there's no savings. You consume all your income.

How much will this person be willing to pay for insurance? Well, we can solve that by asking at what insurance price would they be better off being insured versus uninsured. So let's just do the math.

If they're uninsured, what's going to happen to them? Well, what's their expected utility? Well, there's a 0.01 chance that they are going to get hit. And so their net income will drop from 40,000 to 10,000 because they'll have to pay $30,000 in medical bills. So there's a 1% chance their net income will be $10,000. And there's a 99% chance that their net income will be $40,000.

That's utility without insurance. Add that up and you get 199. So their expected utility without insurance is 199. Now let's do their expected utility with insurance, at a price we don't yet know. Let's call the price x.

What's the utility with insurance? Well, with insurance there's a 0.1% chance that they get hit-- 0.01, I'm sorry. It should be 0.01. My bad. Wow. I can't believe you guys missed that. You guys are a little tired today. A 0.01 probability, a 1% chance, that I get hit.

Now, if I get hit, with insurance I don't have to pay my medical bill. But I do have to pay my insurance premium. So what I have is $40,000 minus x, where x is my insurance premium. I always have to pay my insurance premium every year, no matter what.

If I don't get hit, then I get my 40,000, but I also have to pay my insurance premium. So basically these things are the same. So my expected utility is square root of $40,000 minus x. That's my expected utility.

So how do I solve for the optimal x? Well, I ask at what x am I better off insured than uninsured? So I set this equal to 199. And if you solve that, you get that x star, the point up to which you would rather be insured than uninsured, is a premium of $399.

So you would rather have insurance at a cost of 399 than go uninsured. Clear on that? You'd rather pay a premium of 399 than go uninsured.
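(A quick Python check, not from the lecture, of the insurance numbers: expected utility uninsured of 199, the break-even premium of $399, and the $99 risk premium over the $300 expected cost.)

```python
# The insurance example: income 40,000, a 1% chance of a 30,000 medical bill, u(c) = sqrt(c).
from math import sqrt

income, loss, p = 40_000, 30_000, 0.01

eu_uninsured = p * sqrt(income - loss) + (1 - p) * sqrt(income)  # 0.01*100 + 0.99*200 = 199
# With full insurance at premium x, utility is sqrt(income - x) for sure.
# The break-even premium solves sqrt(income - x) = eu_uninsured.
x_star = income - eu_uninsured ** 2
expected_cost = p * loss
print(eu_uninsured, x_star, x_star - expected_cost)  # 199.0, 399.0, 99.0 (the risk premium)
```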

Now think about what that means. Remember, the expected cost of this accident was only $300. That means that you have a $99 risk premium. You are willing to pay $99 to avoid bearing this risk. That is what you're willing to pay-- before you were willing to pay me $43.75 to get out of that gamble. That was your risk premium. It's how much you'll pay to get out of taking a gamble.

Now, before, I was offering a gamble. But here's the thing. Being uninsured is a gamble. Being uninsured is like taking the gamble. So in the example I had before, I locked you in the room, and you have to pay me if you want to avoid the bet. That's insurance. You are locked into this life. You're locked into being in Cambridge.

You're dealing with a 1% risk of getting hit by a car every year. So the question is, how much will you pay to avoid at least the financial cost-- forget the trauma, just the financial cost-- of being hit by that car? And the answer is, you'll pay $99 above the expected damage it will do.

And that is why insurance is big business: because people will pay to avoid being put in risky situations. So insurance is a very profitable business. Now, in fact, of course, the insurance industry is like any other industry. Whether that willingness to pay leads to profits depends on the supply side and competition. This is just the demand side.

But the point is that there's going to be huge demand for insurance, and people will be willing to pay much more than it costs the insurance companies. The consumers are expected to cost $300 a year, and the insurer is getting almost $400 a year in premiums. So the insurance company makes that money. And basically, that's why insurance is big business.

Now, here's a couple of things I'd like you to show yourself in your copious spare time. First of all, this risk premium should be bigger as the loss gets bigger. Why? Because you're moving away from that linear part towards the nonlinear part of the utility function.

Similarly, this risk premium should fall as your income gets higher. Why? Because once again, that moves you more towards the linear part. The bottom line is the bigger the risk is relative to your income, the more risk averse you become. The more you move from that linear part of the curve onto the nonlinear part of the curve. So that's the key thing. What matters is risk relative to your income. That's going to determine your willingness to pay for insurance.
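(A short Python sketch, not from the lecture, of the two claims just made: the risk premium grows with the size of the loss and shrinks as income rises. The alternative loss and income numbers are purely illustrative.)

```python
# Risk premium (willingness to pay for insurance above the expected cost) for a 1% chance
# of a given loss, with u(c) = sqrt(c) and full-coverage insurance as in the example above.
from math import sqrt

def risk_premium(income, loss, p=0.01):
    eu_uninsured = p * sqrt(income - loss) + (1 - p) * sqrt(income)
    break_even_premium = income - eu_uninsured ** 2
    return break_even_premium - p * loss

print(round(risk_premium(40_000, 30_000), 1))   # ~99, the case from the lecture
print(round(risk_premium(40_000, 39_000), 1))   # much bigger: the loss is huge relative to income
print(round(risk_premium(400_000, 30_000), 1))  # much smaller: same loss, ten times the income
```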

So for example, let's think of your decision to go buy consumer electronics and the warranty. And they always offer you a warranty. Now, those warranties are expected value negative. If you take the odds of your machine breaking-- if you go to buy a new stereo. As if people buy stereos anymore. You're going to buy a car stereo. People still buy those.

You're going to buy a car stereo. You take the odds of that breaking times the cost it would take to fix it. Those multiplied are less than what they'll charge you for the insurance premium. It's a bad bet but it's insurance. And so the question is, should you take that? Well, that depends on how wealthy you are.

I should never take that because my car radio's cost is tiny relative to my income. Someone who has low income might decide, that's a large gamble relative to my income. I don't want to take that. So I want to buy the insurance offered by the manufacturer. So once again, it's all about the size of the risk relative to your income. Yeah.

AUDIENCE: What if I have increased chance of breaking my phone?

JONATHAN GRUBER: Well, that's a separate issue. We'll talk about that later. That's called moral hazard. You might-- well, it's not moral hazard. What you're saying is there's heterogeneity. And you know you're actually-- that for you it is a fair bet because you're clumsy. Well, then you should definitely take it.

This is saying risk aversion works in favor of you taking it. Clumsiness further works in favor of you taking it. And we'll talk about that in a lecture or two when we talk about government provision of insurance. So that's sort of the first application, which is thinking about insurance and why it's such big business in America. And it is huge business in America.

The second application is thinking about our friend the lottery. Big news lately. We talked about it. There were a couple of huge lottery payouts. So let's talk about the lottery.

Now, the lottery is a total rip off. The expected value of a lottery purchase is 50%. So over all lottery options, for every dollar you spend on the lottery, the expected payout is $0.50. That is a much, much less than fair bet. And yet lotteries are wildly popular.

Actually, the beginnings of this country, the US, were financed by a lottery. Much of the money the government raised initially to set up America came from a lottery. And state lotteries are a huge source of public financing right now across America.

Now, why do people play lotteries? Well, there are basically four theories for why people play lotteries. The first is that people are risk loving. That, in fact, we were wrong, and people are risk loving. That's why they play lotteries. How do we know that theory is wrong? How do we know that theory is on its face wrong, that people are risk loving? Yeah.

AUDIENCE: The demonstration in class. Lots of us didn't raise our hands.

JONATHAN GRUBER: Well, that's one way we might know. But how do we know more globally? You guys could just be a weird bunch. How do we know more globally? Yeah.

AUDIENCE: Because of [INAUDIBLE].

JONATHAN GRUBER: Yeah. But that's absolutely right, theoretically. But that's theoretical. In the real world, what piece of evidence can you immediately point to, that I recently pointed out, that could show you this is wrong? Yeah.

AUDIENCE: People buy insurance.

JONATHAN GRUBER: People buy insurance. If they're risk loving, why are they buying insurance? So clearly people aren't risk loving. We wouldn't spend 10% of GDP on insurance. So that theory is clearly wrong. OK. That's theory one.

Now, the second theory is a somewhat subtler version of this theory, which is quite interesting, which is that people are both risk averse and risk loving. So that risk aversion varies. Risk tolerance, let's call it, varies.

And then in particular, people are risk averse over small gambles, but risk loving over big gambles. So let's look at figure 20-4. This is an example of what we call Friedman-Savage preferences. You don't need to know that. But they had this idea that maybe people are locally risk averse but globally risk loving.

Let me explain this. This is sort of complicated. So imagine that I'm going to offer you a 50-50 gamble between w1 and w3. So the gamble leaves you at w1 or at w3, each with 50% chance. Well, if we look between w1 and w3, we're on the concave part of the utility function. And as a result, I will not take that gamble.

That gamble leaves me at point B, which is below B star. So basically, I will not take that gamble. That leaves me worse off than just having w1 plus w3 over 2 with certainty. If you just look at the region between w1 and w3, that's just the graph we saw before. You won't take that bet.

But now let's say once you get above w3, once you're rich enough, you're risk loving. Let's say people are risk averse at first but then get risk loving. So if you were starting at w3 plus w5 over 2, and I offered you a gamble between w3 and w5, then you're risk loving in that range. And you take it.

So once you get rich enough, you start to get risk loving. I talked before about getting more and more risk neutral as you get richer. What if it goes the other way? What if you actually get risk loving when you get richer? Well, then, if you think about the whole Mega Millions thing, you could see that over the whole distribution from w1 to w5, people might want to take that risk. That you could actually be risk loving over these giant gambles.

And that could explain why people play Mega Millions. That over the giant gambles, they're risk loving, even if over more moderate gambles they're risk averse. You don't insure yourself for a billion dollars. You insure yourself for a few thousand dollars. Yeah.

AUDIENCE: How does the size of the gamble we're talking about determine-- we're talking about small bets that are more linear, is it based on what the player puts in or what they get in return?

JONATHAN GRUBER: Both. Both. It's basically about-- it's about expected utility calculations. Yes, it's true, for Mega Millions, you put in $2 for the chance of winning $1.6 billion. But your probability is way, way lower than 2 in 1.6 billion. So it's still unfair.

So the Friedman-Savage hypothesis-- you don't need to know the name, once again. This hypothesis is that what's happening is that people are first risk averse, but then risk loving. Well, how could we test and actually disprove this theory? Yeah.

AUDIENCE: Like scratch offs.

JONATHAN GRUBER: Yeah. If this is true, people would love Mega Millions but hate $10 scratch offs. In fact, the vast majority of lottery playing is not Mega Millions. It's $10 scratch offs. Most money spent on lotteries is on $10 and $20 gambles where you bet $1 to win $10 or $20. You should be risk averse over that, or at best risk neutral. You shouldn't be risk loving over those tiny gambles. And yet that's most of what lottery players do.

AUDIENCE: Wouldn't a scratch off really be counted internally as not getting money instead of losing money?

JONATHAN GRUBER: Well, no, but you've lost the dollar you spent. And that dollar-- I still offered you a gamble. Spend $1, win $10, with a 0.05 probability.

AUDIENCE: I mean, in the loss averse sense.

JONATHAN GRUBER: Well, we're not doing loss aversion. Loss aversion is sort of hard. You don't have to think about it in terms of gambling. Loss aversion is more about losing what you already have. But in the regular framework, you can see you shouldn't do that unless you're risk loving. But even this theory would say you're not locally risk loving.
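(A rough Python check, not from the lecture: a $1 scratch off that pays $10 with probability 0.05-- consistent with the 50-cents-on-the-dollar payout mentioned earlier-- evaluated for a risk-averse person with u(c) = sqrt(c). The $100 wealth level is just an illustrative assumption.)

```python
# A $1 scratch-off ticket that pays $10 with probability 0.05, for u(c) = sqrt(c), wealth 100.
from math import sqrt

wealth = 100
eu_play = 0.05 * sqrt(wealth - 1 + 10) + 0.95 * sqrt(wealth - 1)
eu_skip = sqrt(wealth)
print(eu_play, eu_skip)  # about 9.97 versus 10.0 -> a risk-averse person should not play
```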

So this theory is out, which leaves us with two more theories. The first theory is that this is entertainment. That is, in people's utility function there's not just consumption but the thrill of finding out if they won. My wife, against my better judgment, went out and bought a Mega Millions ticket. And she got utility out of waiting for that number to come up. And it was pretty cheap utility. It cost her $2.

So in that sense, maybe people play the lottery a lot for entertainment. That's one theory. Unfortunately, the other theory is ignorance. That basically-- the saying is the lottery is a tax on the stupid-- people just don't understand what a bad deal this is.

And the problem is we don't know which of these theories is right. And they have very different implications for government policy. If the entertainment theory is right, then the government should support lotteries, where essentially the government is getting paid for providing entertainment.

It's what lotteries are often called, the voluntary tax. I am basically giving the government money that can run our schools in return for the government giving me the entertainment value of seeing if my scratch off won. That's great. That's welfare improving.

Under the other theory, ignorance, we should be discouraging lotteries, because all we're doing is taking a bunch of ignorant people and getting them to waste their money. Yeah.

AUDIENCE: Is there almost a possibility that there is this concept of having nothing to lose. If you're already too poor to be able to afford your basic needs, then you might feel like, I may as well try and win the lottery and then I would be all set. That is--

[INTERPOSING VOICES]

AUDIENCE: [INAUDIBLE]

JONATHAN GRUBER: That's exactly this. That wouldn't explain why I'd play the $20 scratch off. It's an absolute reason why I'd go ahead, even if I was starving, and play the Mega Millions. And that's the Friedman-Savage hypothesis, absolutely. But that can't explain why, in fact, in some low income communities, among some low income groups, they'll spend as much as 20% of their income every year on the lottery, net.

They're losing huge amounts of money on scratch off tickets. So the question is, is that a rational decision because they find it entertaining or an irrational decision because they just don't understand what's going on? And unfortunately, we don't know the answer. But we do know it's very important.

It's important because there are big bucks involved, and in many low income communities it's a huge source of expenditure. So I can't give you the answer to that. I can just tell you it's an important question. I hope someday someone figures out how to think about this, because it's got very important implications.

So let me stop there. That's all I want to say about uncertainty. And we'll come back and do another topic on Wednesday.