Last week, we were once again saddened to read eggs were associated with an increase in all-cause mortality. See how carefully and accurately I worded that? It wasn’t hard. I didn’t say eating eggs will increase your risk of death, or even that they were bad for you. Such conclusions, however, would be reasonable if you casually skimmed this most recent study (as many lay readers likely would) or the press release describing the results – which makes it seem like eggs actually cause people to die. You’d also arrive at this conclusion if you primarily read the popular press – an institution that almost always prefers making bold albeit incorrect causal claims to meek but accurate correlative statements.
This egg “news” saddened me on two fronts. First, soft, perfectly scrambled eggs are a sublime culinary delight (this is especially true during truffle season, when a perfectly scrambled egg will make you realize there is a God). And don’t get me started on the delight provided by a perfectly fried egg with bacon. Anything that degrades that experience is depressing.
Perhaps more troubling is what such studies say about the current state of both the scientific endeavor and the manner in which academic research is communicated to the public. The now infamous “egg study” is unfortunately one of a series of nutritional epidemiology studies examining broad correlations in observational data on food consumption and health.
These studies always follow the same pattern. First, the authors use large datasets to estimate a statistically significant correlation between some type of food and a health outcome. Given the amount of data available, such correlations are now plentiful. The paper’s text is then carefully crafted to use words such as “associated with” rather than “caused” while giving the strong impression that imminent death awaits those eating the offending food. As a result, when pushed, the authors can always claim: “Well, I didn’t say it was a causal relationship.” Invariably these caveats are ignored by the media in favor of attention-grabbing headlines full of causal proclamations. Rinse and repeat.
Given this well-known pattern, which has only been exacerbated in our current world of “hot takes” and social media, academics have a duty to analyze how our work will be interpreted and amplified by the media. In such a world, some studies frankly just shouldn’t be written.
For those who persist in publishing studies that have a high likelihood of being misinterpreted, it would be best if the authors at least made good faith efforts to truly educate readers rather than aiding and abetting the confusion. For example, Amitabh Chandra points out that the egg paper’s own appendix shows the negative health effect of eggs goes away once you control for other foods such as poultry, fish, or red meat (Note: The authors of the study reached out to correctly point out that this statement in the original column is too broad. It is the negative health effect with respect to coronary heart disease that goes away in the appendix results, which is a subset of overall cardiovascular disease. Other outcomes maintain the general association after controls for observables – but obviously unobservables simply can’t be controlled for here. My apologies for the overly broad first statement). This suggests it’s quite possibly not the delicious eggs but rather some other unobserved variable driving the observed relationship. This is a common practice where important caveats are buried in appendices or methods sections while the authors and their witting accomplices (i.e. public relations staffers) are far less circumspect in their various introductions, press releases and public comments.
If these concerns were limited to the area of nutrition studies, I wouldn’t be so worried. After all – by this point a lot of the public simply greets these studies with a shrug and waits for the countervailing study to emerge in the coming weeks.
But this problem is actually widespread throughout academia, where all too often researchers believe that writing any study is better than no study. Such a view becomes meaningfully more problematic when studies using incomplete data or poor empirical settings purport to inform policymakers about the state of the world but ultimately serve as fundamentally flawed inputs to damaging policies.
In the world of drug pricing research, we recently saw such a study, where the authors claim to estimate that rising drug costs are caused by price increases for existing drugs rather than the introduction of new innovative products. This finding (if true) would have important ramifications for the welfare impact of increasing drug costs. Unfortunately, given their data, the authors are fundamentally unable to examine what drives changes in overall drug costs.
Given the policy environment around prescription drugs, it’s not surprising these results attracted a large amount of attention and press coverage in outlets such as National Public Radio, Vox, and CNBC. Most of this coverage discussed the paper’s findings about changes in overall drug costs – which shouldn’t be surprising given the term “costs” is right in the title of the paper. Unfortunately, at best the study can provide a rough estimate of what drives changes in what some patients pay out of pocket – which is important but quite a bit more limited than the claims of the paper and its press coverage.
While drug costs are a complicated issue, there are two prices that are important to understand. The first is the list price, which is what the pharmaceutical company would like to be paid. The second is the net price – which is the actual price the pharmaceutical manufacturer receives from the pharmacy benefit manager, i.e. the firm that negotiates prices on behalf of insured individuals. The difference between these prices is the result of critical and complex negotiations. Drugs that face more competition to treat the same condition are forced to give larger discounts, resulting in a greater difference between the list and net prices. As a result, the spread between list and net prices varies meaningfully across products.
Ultimately, net prices are the actual “price” for most insured consumers. In contrast, list prices primarily impact the spending of people who are uninsured, people whose cost sharing involves either a deductible or coinsurance (i.e. percentage based cost sharing), and some amount of Medicare Part D spending. This is not a trivial segment of the population, with recent estimates suggesting nearly half of Americans are in some way exposed to the list price. However, even the drug spending of most of these individuals is eventually driven by the net price.
The authors of this most recent paper unfortunately only had data on list prices (which given the confidentiality of the negotiating process is a quite common situation that I also find myself in). As a result, the study is very much like the egg study discussed above, i.e. even its authors know it provides an inaccurate representation of the underlying relationship. For example, the authors note: “[b]ecause rebates are often greater once several exchangeable products within the same therapeutic class have reached the market, our estimates for the relative contribution of existing drugs to the rising costs of brand-name drugs may be upward biased.”
The specifics of this admitted bias in the data are important. If it were simply that list prices were uniformly higher than net prices, there would not be as large a concern. But, as the authors note, the difference between list and net prices varies markedly across drugs based on the competitive environment. Specifically, the longer a drug is on the market, the less accurately the list price data used in the paper measures the drug’s cost – making it effectively impossible to estimate whether price increases or new products lead to changes in overall drug costs.
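To see why this matters, consider a toy numerical sketch (all figures are invented for illustration, not drawn from the study). Suppose an older drug’s list price climbs while growing rebates keep its net price flat, and a new drug launches with little gap between list and net. An analysis run on list prices would attribute part of cost growth to price increases on the existing drug, while the net prices attribute none:

```python
# Hypothetical illustration of the list-price bias: all numbers are invented.
# Each drug has (year 1, year 2) prices; the older drug's list price rises
# while rebates keep its net (actual) price flat.
drugs = {
    "older_drug": {"list": (100, 120), "net": (80, 80)},  # rebates grow with competition
    "new_drug":   {"list": (0, 50),    "net": (0, 48)},   # launches in year 2
}

def total_growth(price_key):
    # Change in total spending from year 1 to year 2 under a given price measure.
    year1 = sum(d[price_key][0] for d in drugs.values())
    year2 = sum(d[price_key][1] for d in drugs.values())
    return year2 - year1

def existing_drug_share(price_key):
    # Share of total cost growth attributed to price changes on the
    # drug already on the market (vs. the new launch).
    existing = drugs["older_drug"][price_key][1] - drugs["older_drug"][price_key][0]
    return existing / total_growth(price_key)

print(f"Existing-drug share of growth, list prices: {existing_drug_share('list'):.0%}")
print(f"Existing-drug share of growth, net prices:  {existing_drug_share('net'):.0%}")
# With these invented numbers: ~29% under list prices, 0% under net prices.
```

The direction of the distortion is exactly the one the authors concede: because rebates deepen as drugs age and face competition, list-price data mechanically shifts measured cost growth toward existing drugs.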
Even more concerning, before a reader even gets to this quite understated caveat, they are confronted with the paper’s introduction, where the authors claim they “quantify the contribution of new versus existing drug products in the rising costs of prescription drugs [emphasis added].” Note there is no mention of list prices in this introductory text but instead a discussion of the costs of drugs – a claim that is far less circumspect and qualified than the limitations section. Furthermore, given it sits in the introduction, we know it’s far more likely to get the attention of reporters and casual readers. This becomes more likely when the authors are interviewed in the media discussing their findings as showing a lack of competition – a fact that simply cannot be measured in a paper using list price data.
Lacking an ability to answer the underlying question, the authors shouldn’t have written the paper at all, or at a minimum should have written a vastly more limited draft (of course, a paper actually emphasizing that it didn’t have true drug cost data and focusing only on out-of-pocket costs would be less likely to be published and publicized).
Saying some papers shouldn’t be written might seem like advocating a halt to scientific progress, which ultimately is a series of incremental steps towards finding the right answer. Or that I am letting the perfect be the enemy of the good. Frankly, I’m not. If these papers were about making incremental but imperfect progress forward on a hard issue, that would be one thing. But both the egg study and the drug pricing study discussed above don’t take such small steps forward. Instead, they are plagued by systematic biases that make them unable to ever deliver on the promises of the paper’s writing and packaging.
In such settings where academics can’t get the right data or lack the appropriate empirical setting to obtain consistent and correct causal estimates, it’s best we just take a pass. I know this can be frustrating – I have loads of questions I’d love to answer but simply can’t. However, as academics we must have the discipline to avoid providing misleading answers to these questions until the right setting or data appear. Continuing to write studies with caveats buried deep in the paper that predictably result in misinterpretation by the media is either nefarious or calls to mind a certain definition of insanity. Perhaps it’s time that academics contribute to a more transparent and saner world.
(Note: The relative merits of using list vs. net prices were the source of a recent back and forth between the authors and critics of this paper. I should have mentioned this in the original version of this column. Adjudicating the particulars of the dispute between these two parties requires more space than is possible in this column. While it’s true list prices impact the spending of a growing share of patients, the original paper in Health Affairs did not really attempt to limit its analysis or commentary to this group. For example, each time it says “drug costs” what it really should say is “drug costs for the uninsured” or “an estimate of consumer expenditures for the underinsured.” Obviously this would have made for a more limited paper that would have received less publicity and probably would have been published in a lower profile journal. So while I agree that we should not ignore the costs facing this group, we should also be continually clear when we are doing so.)