In December 2000 the National Institutes of Health released a study showing a nationwide decline in the rate of lung cancer since 1988, and an even more dramatic decline in California. From the press reports, it appears the study mentioned possible reasons for the decline, including state-sponsored anti-smoking campaigns. It was widely reported in California that this was proof of the effectiveness of our own anti-smoking campaign, which began in 1988.
Both the timing of the decline in cancer rates and its consistency over time made me doubt the conclusion that was reported: that the study in any way proved a causal link between declining cancer rates and the particular anti-smoking campaign in question. The news reports did nothing to allay those doubts. I found this case representative of a broad problem with the way statistical studies are reported in the press and understood by both the general public and our political leaders.
The letter below was submitted to the San Jose Mercury News, not for publication but in the hope that it would spur the paper to improve its reporting on such items. Subsequent articles on other research results make me doubt my message got through.
5 December 2000
Your December 1st article on the success of California’s anti-smoking campaign highlighted a general problem I have with the simplistic way news organizations report on statistical analyses of complex scientific and social phenomena. I have no doubt that reducing smoking ultimately reduces the incidence of lung cancer, but the reporting in your story, and in particular the accompanying graph, raised doubts about the validity of the specific study making news.
In particular, your graphic showed an almost linear downward trend in lung cancer rates beginning in 1988, the very year that the anti-smoking campaign was reported to have gone into effect. I am not an expert in either the biology of lung cancer or the implementation of public education programs (I am an engineer), but it seems to me unlikely, and therefore suspicious, that an anti-smoking program would have had such an immediate and consistent effect. In the first place, lung cancer seems to be a disease associated with long-term behavior, so one would expect at least some delay between the start of an anti-smoking campaign and an observable decline in lung cancer rates. In addition, most public education programs don’t take full effect immediately, and their effectiveness changes over time, so an immediate and constant rate of decline in lung cancer would seem to have some cause other than the program in question. Finally, it seems certain that those most likely to get lung cancer would be the long-term smokers least likely to be affected by the anti-smoking effort, particularly in its early phases, which would, again, lead us to expect both a delay between the start of an anti-smoking program and a decrease in lung cancer, and a changing rate of decline over time.
My immediate thought, on looking at your graph, was that some other phenomenon was acting to reduce the rate of lung cancer; and my first question was, “What happened before 1988?” I half expect that a look at earlier data would show the downward trend beginning well before the start of the anti-smoking campaign.
Assuming the study was, in fact, properly conducted and its conclusions valid, these doubts could have been allayed by careful reporting on the raw data and on the methodology used to establish the correlation. Instead, we got glowing reviews from Diana M. Bona, who directs the anti-smoking program, and from Dr. Stanton A. Glantz, who is clearly identified as an anti-smoking zealot. These are not neutral evaluations.
Every day, print and television news organizations bombard us with the results of studies purporting to prove something profound about the effects of public policy, most of them contradicting some other study reported the week before. The election we just endured brought a blizzard of conflicting statistics and analyses presented by the candidates and repeated by the media. Almost every advocacy organization now solicits and publishes studies showing (surprise, surprise) that its particular policy agenda would be the best thing ever, if only we would allow it to be implemented. How are we to sort the good from the bad in this barrage if your reporters will not present us with enough of the underlying information to make an informed evaluation?
You need to do better.
© Copyright 2000, 2005, Augustus P. Lowell