This two-part interview has been edited for length and clarity.
What led you to look at whether people buy long-term care insurance (LTCI)?
The background to the project came from the thinking that I and some of my colleagues had done around insurance. That is, why don’t people buy insurance when they are facing very big, potentially catastrophic costs? This is a pervasive finding across a whole range of different topics, but long-term care is one of them.
My own mother recently passed away, and she was in a LTC facility that cost about $85,000 a year—and she was there for a long time. When one thinks about these big expenses, it makes sense to think about trying to buy insurance against them… The problem that we find is that many people don’t buy LTCI, notwithstanding the big expense and high probability of needing care.
My colleague [Daniel Gottlieb] and I started thinking about why people might not buy LTCI. There are many arguments that people have made previously. One is that if you spend all your money, then the government will support you through the Medicaid plan and pay for your LTC—maybe not in as good a place as you might otherwise be in, since many nursing homes have a restricted number of spots for people who are on Medicaid. But if you run out of money, you can throw yourself on the mercy of the government. That’s one possibility.
Another possibility people have suggested is that if you have LTCI, maybe your children won’t take good care of you because they want you to leave your own home and move into a nursing home… Those are all arguments that we have heard before and that other people have studied.
What we were more interested in is the fact that LTCI is expensive when you buy it: it costs you $3,000 to $4,000 a year depending on the plan that you have. People see that expense as a waste of money because they won’t need the care until much later. They’re focusing on the near-term cost and not understanding the long-term benefit of having coverage.
How did you look at the question of why people don’t buy LTCI?
What we had was a module for the HRS (Health and Retirement Study) that allowed us to evaluate some classic questions that had been developed by psychologists. Tversky and Kahneman are two very well-known psychologists [who] have focused on how the way you frame questions to people influences the way they answer them.
We had two questions… straight out of the Tversky and Kahneman approach… two hypotheticals. One was, imagine there’s going to be an epidemic expected to kill 600 people. There are two programs proposed. If Program A is adopted, 300 people will be saved. If Program B is adopted, there’s a 50-50 chance that either 600 people will be saved or none will be saved… We showed people that and asked which they would prefer…
The truth of the matter is that, if you know your statistics, those two programs are exactly identical: both save 300 people in expectation. But for some reason people seem to latch on to Program A, which says that 300 people will be saved. They feel that that’s much more secure than a 50-50 chance.
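The equivalence of the two programs is a one-line expected-value calculation. The sketch below just checks the arithmetic for the hypothetical numbers in the interview:

```python
# Expected number of lives saved under each program in the epidemic hypothetical.
saved_program_a = 300                  # Program A: 300 people saved for certain
saved_program_b = 0.5 * 600 + 0.5 * 0  # Program B: 50-50 chance of 600 or 0 saved

# The two programs are statistically identical: both save 300 in expectation.
print(saved_program_a, saved_program_b)
```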
We asked them that question, then the second question is basically the same construction, but instead of framing it in terms of people who would be saved, it’s framed in terms of people who will die. So we say again, there’s an epidemic and 600 people may be killed. If we adopt Program A, 300 people will die—which is just the inverse of the previous one. If we [adopt Program] B, then there’s a 50-50 chance that none will die or all will die. Again, we let people choose which one they prefer. The statistics are identical under both wordings, but when you show people this second formulation, more people are likely to shy away from the statement that says 300 people will die because they don’t like that notion.
We randomized the order of these presentations, so some people got question one first, and some people got question two first. Then we said that anyone who answers the two questions consistently understands that this is really the same problem. They are not being unduly influenced by the notion of this big loss, “300 people will die.” Whereas anyone who answers them inconsistently is looking just at the framing of the questions and being unduly influenced by the words “will die” or “will be saved.”
Using that, we said that if people were narrow framers, and they’re also loss averse—they don’t like the idea of people dying, which most of us don’t—they would choose the safe option when we suggested “300 will be saved,” but they’ll choose the risky option when we say “300 will die.” Those are the inconsistent people. Those are the folks that we said were narrow framers.
In a separate part of the question, we asked them whether they had LTCI—this was part of the core HRS, actually. Then essentially what we did was a simple correlation between whether you were a narrow framer—that is, whether your answers changed across the two presentations—and whether you had LTCI. It turned out that the people who were narrow framers were much less likely to have LTCI.
Our interview with Dr. Mitchell continues in the next post.