by Michael Jorrin, "Doc Gumshoe" | October 5, 2015 10:11 am
[Michael Jorrin, who I call “Doc Gumshoe,” is a longtime medical writer who writes on health and medicine topics for us from time to time. He is not a doctor, and his commentary is not investment-focused. He chooses his own topics, and, as with all our guest columnists, the words and opinions are his alone — you can see all his past columns here.]
Gumshoe denizens are, by now, accustomed to the skeptical views issuing from this corner on such subjects as miracle cures for every deadly disease known to mankind. As I hope my patient readers by now understand, I do try to comb through the tangled skeins of promotional blather about those miracle cures, searching for the elusive “grain of truth,” and I also try to arrive at a fair guess as to whether those grains, if any, could be seeds that could sprout into genuine therapies that might be, if not miraculous, at least beneficial. That’s what I shoot for.
But because I am not an enthusiastic camp follower behind the banner of “alternative medicine,” some readers assign me to the ranks of committed defenders of the medical establishment conspiracy, including Big Pharma and the government regulators. I have tried to say it ain’t so, but perhaps my protests are of no avail.
As it happens, there are quite a few of the precepts and practices of established medicine that I’m skeptical about, and perhaps you might be interested in the underlying principles of my skepticism.
Principle Number One: I am skeptical about statements that include possibly dubious numerical assertions.
Sticking a number into an otherwise general statement seems to transubstantiate the statement into an unassailable data point. The statement that air pollution kills a lot of people in China would likely be met with, “So what else is new?” But if you add a number, the statement changes and gains considerable weight. In this case, the newspaper headline was that air pollution was estimated to kill 1.1 million people annually in China. There’s no way of escaping the impact of that statement, and I don’t want to try. At the same time, I find the figure, 1.1 million, rather dubious. For one thing, determining cause of death is at best not an easy task. Are those 1.1 million Chinese otherwise healthy individuals who would have survived to meet their ends due to some other cause were it not for China’s admittedly dreadful air quality? Or, possibly, does it include a large fraction of individuals with emphysema due to some other cause, such as cigarette smoking?
I don’t mean to let air pollution off the hook as a bad thing with horrible health consequences. But statistics about deaths from air pollution do get trotted out from time to time as a way, for example, of countering arguments about nuclear power plants. Back in the 1960s, advocates of nuclear power made the claim that nuclear electric power generation would save 10,000 lives per year in the state of Pennsylvania alone, compared with coal powered electric generation. Maybe so, and maybe not.
And how about Jeff Bezos and his grandmother? When he was eleven years old, Jeff told his grandmother, with the data-based certitude that he later employed in the foundation of Amazon, that every puff on a cigarette would shorten her life by 8 minutes. The poor lady, it is reported, burst into tears. If young Jeff had merely said, “Granny, smoking is bad for you,” young Jeff’s warning would not have upset his grandmother nearly as much. But the relationship, which would extrapolate to 65,700 puffs shortening life by a year, and therefore 985,500 puffs taking 15 years off your life … Well, that way madness lies.
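The extrapolation above is simple enough to check. Here is a back-of-envelope sketch (purely illustrative, using only the "8 minutes per puff" figure from the anecdote):

```python
# Back-of-envelope check of the "8 minutes per puff" extrapolation.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a (non-leap) year

# At 8 minutes lost per puff, how many puffs cost a year of life?
puffs_per_year_of_life = MINUTES_PER_YEAR / 8      # 65,700 puffs = 1 year
puffs_for_15_years = 15 * puffs_per_year_of_life   # 985,500 puffs = 15 years

print(puffs_per_year_of_life)  # 65700.0
print(puffs_for_15_years)      # 985500.0
```

The arithmetic checks out exactly, which is precisely the problem: the precision of the number lends false certainty to a claim that was never measured at that precision.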
My own personal, embarrassing example is this: when I was about nine years old I heard on the radio that a certain American bomber plane, the B-17E, had a cruising altitude of 7 miles, which was “halfway to the moon.” Being capable of complex mathematical calculations even at that early age, I concluded that therefore the moon was 14 miles from Planet Earth, and so proclaimed to all my friends, who believed me, because I had stuck a precise figure into my assertion.
My father set me right on this pretty quickly, but I don’t think I went around telling my pals that I had been wrong. I left them to figure it out for themselves.
But, being a curious child, I asked my father the next question – “How do they know that it’s about 240,000 miles from Earth?” And he tried to come up with an answer, based on the most elementary trigonometry imaginable, that would more or less satisfy a nine-year old. However, that question, “How do they know?” relates to almost every assertion that is made in science and medicine.
Principle Number Two: I question free-floating assertions of percentages.
Percentages are meaningless without the base. An increase in price from $100 to $200 is a 100% increase. But if the price were reduced from $200 to $100, it would be a 50% reduction. You have to know the underlying number.
The most egregious examples of distorted information based on free-floating assertions of percentages that I know of were in the releases to the media about the results of the Women’s Health Initiative (WHI), which evaluated hormone replacement therapy in about 27,000 women. The most prominently publicized finding was that women taking HRT had a 23% increase in the risk of heart attacks. This was based on a very small real increase, from 30 per 10,000 patient-years in women taking placebo to 37 per 10,000 patient-years in those taking HRT. The increase in absolute risk was 7 per 10,000 patient years – less than 0.1%. But the 23% increase that got all the media attention was 7 on a base of 30 – arithmetically correct, but immensely distant from presenting the reality. Was this deliberate? I believe so. The motive behind the study was once and for all to deflate the notion that HRT was a way of keeping women young and sexually desirable, and a bit of statistical hanky-panky was a means to that end.
It’s standard for clinical trials to present their results in terms of relative risk reduction. For example, if your risk of falling victim to a particular infection is 2% (based on population-wide data), and taking a certain drug reduces that risk to 1%, the relative risk reduction is 50%, which sounds huge compared to the absolute risk reduction. I’ll confess that my example is highly unrealistic, because it’s unlikely that there would be a clinical trial with the objective of identifying a risk reduction of that size – clinical trials need what are called “index events,” and it would take a huge trial of very long duration to develop much data. However, there are many trials in which the change in absolute numbers is unimpressive, but when that change is expressed in relative terms, it really amounts to something. And, when even a very small reduction in absolute risk, even as small as 1%, applies to a large population, that reduction definitely amounts to something real.
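The WHI numbers above make the point concretely. A minimal sketch (the rates are the ones quoted from the study; the helper function is just for illustration):

```python
def risk_change(control_rate, treated_rate):
    """Return (absolute, relative) change between two event rates."""
    absolute = treated_rate - control_rate
    relative = absolute / control_rate
    return absolute, relative

# WHI heart-attack figures: events per 10,000 patient-years
abs_chg, rel_chg = risk_change(30 / 10_000, 37 / 10_000)

print(f"absolute increase: {abs_chg:.4%}")  # 0.0700% -- well under 0.1%
print(f"relative increase: {rel_chg:.1%}")  # 23.3% -- the headline number
```

Same underlying data, two wildly different-sounding figures; a press release can truthfully report either one.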
Principle Number Three: I am fundamentally skeptical about the accuracy of many of the measurements on which medical science is based.
That doesn’t mean that I think they’re mostly, or even often, incorrect – just that they have to be understood for what they are. Measuring is not the same as counting. Counting can be exact. Measurement is an approximation, a comparison of the object being measured with a standard for measurement.
Take, as an example, a study of growth in infants and young children. Growth is an important measure of health, and children who fail to grow at the normal rate (based on age, sex, and mid-parental height) may be affected by nutritional or hormonal factors. But measurements of length or height in infants and children can vary, depending on how the child is measured. Children are longer while lying down than when standing upright. Tape measures sometimes stretch, and the angle of the bar at the top of the scale that measures height can vary, affecting the height reading by as much as an inch.
A conscientious health-worker is not likely to be so deceived by a single inaccurate measurement as to prescribe an inappropriate course of treatment, since he or she will observe other characteristics and, frequently, repeat the measurement. However, inaccurate measurements can certainly affect the results of clinical trials.
I had an experience that contributed to my skepticism. After a routine physical a couple of years ago, I was startled to see that my medical record reported that I was obese. Now, let me state without equivocation, I am NOT obese. What accounted for this obvious error? The nurse who measured my height had me at 5’ 11”, whereas I am about 6’ 2”. This was enough to propel my BMI into the “obese” range. The cause of the error was obvious – the nurse was too short to see to the top of the arm on the scale. Needless to say, I protested. No course of treatment was predicated on that blunder, but it would have stayed in my medical record. And it could have been aggregated into “data,” which is why I do not adopt a reverential posture when confronted with data.
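To see how a three-inch height error can flip a BMI category, here is an illustrative calculation using the standard CDC formula. The weight of 220 lb is hypothetical, not from my medical record; it is chosen only to show the effect of the height error:

```python
def bmi(weight_lb, height_in):
    """Body-mass index from pounds and inches (CDC conversion formula)."""
    return 703 * weight_lb / height_in ** 2

# Hypothetical 220 lb patient; 30.0 is the conventional "obese" threshold.
print(round(bmi(220, 71), 1))  # 30.7 -> "obese"      (recorded as 5'11")
print(round(bmi(220, 74), 1))  # 28.2 -> "overweight" (actual 6'2")
```

One sloppy measurement, and a categorical label changes. Multiply that across thousands of records and you see why "data" deserves scrutiny rather than reverence.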
Similar errors occur with blood pressure readings. Many physicians distrust the digital devices now proliferating, and prefer to rely on the traditional sphygmomanometer, which, of course, requires their judgment and skill. The digital devices provide the illusion of accuracy, because of the seemingly unquestionable authority of the digital readout. But it’s this illusion of accuracy that provokes my skepticism.
And of course, the results of blood tests coming from different labs frequently disagree. One lab will report your fasting blood glucose to be in the prediabetic range, while another lab will say that it’s perfectly fine. But, of course, it might also be that one day your level was a bit high, and another day it was back where it was supposed to be.
Responsible physicians would certainly not base a course of treatment on a single blood pressure reading or a single lab result. But erroneous results can and do make their way into large observational studies, and can also affect algorithms of the sort that are being increasingly used in guidelines-based treatment – about which I’m also skeptical.
What all this brought home to me is that measurement, of any sort, needs to be done meticulously and is prone to error. The cabinet-maker’s mantra is “measure twice, cut once.” I hope surgeons have the same instinct.
Principle Number Four: I tend to be skeptical about “guidelines-based” treatment.
Mostly guidelines are well-intentioned, the idea being that once a particular treatment protocol has been found to work pretty well, it should be broadly implemented, and protocols that work should be preferred over the sometimes idiosyncratic preferences of individual practitioners. For example, I strongly support vaccinating children according to the accepted schedule, and I strongly oppose leaving it up to their parents when or whether they want their child to be vaccinated.
Another type of guidelines-based treatment that I favor is the “treat to target” protocol, which is employed in a great many clinical areas, including diabetes, hypertension, and elevated cholesterol. The targets are not the same for all types of patients; they’re apt to vary with risk factors, and from time to time they get reset. Sometimes the target is numerical, but sometimes it’s more conceptual. In rheumatoid arthritis, the target is to treat to remission, where remission is defined by a number of variables. The objective of treat-to-target protocols is to get the patient as well as possible, not to simplify life for the clinician.
However, there are many “one size fits all” guidelines-based treatment protocols whose objectives, at least in part, seem to be to simplify life for clinicians. One such set of guidelines is the one promulgated by the American College of Cardiology (ACC) and the American Heart Association (AHA) about two years ago. The guidelines, overall, are very carefully crafted and based on the soundest clinical research available, and, in conclusion, recommend statin treatment for four groups: 1) individuals with clinical atherosclerotic cardiovascular disease; 2) individuals 40 to 75 years of age whose LDL-cholesterol levels are 190 mg/dL or higher; 3) individuals 40 to 75 years of age with diabetes and LDL-cholesterol levels of 70 mg/dL or higher, but without atherosclerotic CVD; and 4) individuals 40 to 75 years of age with neither diabetes nor CVD, LDL-cholesterol levels of 70 mg/dL or higher, and an estimated 10-year atherosclerotic CVD risk of 7.5% or higher.
There’s very little question that some kind of cholesterol-lowering therapy is beneficial for the first two groups; numerous carefully conducted clinical trials going back more than 20 years have demonstrated that statin treatment confers clear survival benefits in persons with established CVD and with super-high LDL-C levels. Including diabetics, even those with much lower LDL-C levels, should be, in my opinion, a matter of clinical judgment rather than an automatic decision.
It’s the fourth group that raises my skepticism index. The atherosclerotic CVD risk for this group is determined by an algorithm, which places a great deal of emphasis on age. This, in turn, is justified purely by the statistics that people tend to develop this condition as they age. The Framingham Risk Calculator, which has been used for many years, uses some of the same inputs, but tends to come up with lower estimates of risk. Critics of the ACC/AHA guidelines (including the eminent Dr Steven E. Nissen of the Cleveland Clinic) suggest that these guidelines would result in many individuals who are at low risk being given statins mostly on the basis of their age. Estimates are that the former guidelines, published in 2004 by the National Cholesterol Education Program (NCEP), would have recommended that about 44 million Americans should be on statins, whereas the new ACC/AHA guidelines would up this number to 56 million.
There is no evidence to date that people treated with statins under the new guidelines, but who would not have been treated under the old guidelines, actually have better clinical outcomes. A small retrospective study (n = 2,435), based on the Framingham population, followed untreated patients for nine years and determined the death rate from cardiovascular events during that period. The death rate was 6.9% among patients who would have been eligible for statins under the old guidelines, versus 6.3% among those who would have been eligible under the new guidelines. The absolute numbers were very small – 13 patients died who would have been treated under the new guidelines, but not under the old guidelines. That does not mean that treatment would have prevented those deaths. However, it suggests that the new guidelines probably don’t lead to treatment of many patients who don’t need treatment.
It says nothing, however, about new patients in that group who actually receive treatment based on the new guidelines. It won’t be known for years whether there is a real benefit in terms of survival or in the incidence of cardiovascular events. It’s likely that there will be reports of statin-related adverse events, particularly muscle aches and, in some cases, muscle wasting.
Another issue that many clinicians have with the new ACC/AHA guidelines is that once a patient is prescribed statin treatment, no follow-up cholesterol testing is recommended. But keeping tabs on a patient’s current cholesterol levels is a way of answering the patient’s natural question: “How am I doing?” And assuring patients that they’re doing just fine is an important way of keeping them adherent to treatment. More than 31% of patients fail to renew their drug prescriptions, and the non-renewal figures are higher for drugs taken for preventive purposes, such as cholesterol-lowering drugs, than for drugs taken for immediate symptom control. And when drugs cause side effects, or are widely thought to cause side effects, the non-renewal rate goes up. For those reasons, I think leaving annual cholesterol testing out of the guidelines is a blunder.
Principle Number Five: I am deeply skeptical of pronouncements that certain conditions, or certain classes of patients, don’t need to be treated.
The most recent statement of this sort came about a month ago, from the lead author of a paper in JAMA Oncology, Dr Steven Narod, who bluntly set forth his opinion on the treatment of ductal carcinoma in situ (DCIS). Said he, “I think the best way to treat DCIS is to do nothing.”
DCIS, for the tiny few who might not have heard of it, is sometimes called Stage 0 breast cancer, although there is legitimate controversy as to whether it should be called “cancer” at all. The National Breast Cancer Institute acknowledges the controversy and accepts as legitimate the reluctance of some clinicians to classify DCIS as an invasive cancer. Their concern is that women who get the diagnosis of DCIS immediately assume the worst, and may choose courses of treatment that might be appropriate for invasive breast cancer, such as double mastectomies.
DCIS will be diagnosed in almost 60,000 women annually in the US. In most of these, about 85%, the abnormal cells are contained in the milk ducts. These abnormal cells are not, strictly speaking, cancers, but they have the potential to escape the milk ducts and become cancerous; thus the current standard of care is removal of the abnormal cells and adjoining tissue to which abnormal cells may have spread.
The basis for Dr Narod’s recommendation is a study that found that in 108,196 women who received a diagnosis of DCIS between 1988 and 2011 in the SEER (Surveillance, Epidemiology, and End Results) database, 3.3% died of breast cancer. This is almost exactly the same as the breast cancer death rate in the general population. Therefore, the study authors concluded, treatment of DCIS by whatever means did not diminish a woman’s risk of invasive breast cancer and the risk of death due to breast cancer.
When news of this study hit the media, many women who had been treated for DCIS were outraged, concluding that their treatment was unnecessary and had been foisted on them by unscrupulous money-grubbing doctors, who had put them through needless agony and left them feeling diminished and unattractive.
What the media did not point out (probably because it would have required closer reading and more critical thinking) is that the study is not more than suggestive and provides the weakest kind of evidence that treatment of DCIS does not prevent invasive breast cancer. In order to find strong evidence of that, it would have been necessary to compare two cohorts of women – one group that was diagnosed with DCIS and treated, compared with a second group that was diagnosed with DCIS and not treated. That second cohort was not studied, probably because very few women get a diagnosis of DCIS and forego treatment. But unless a comparison of that kind can be made, there is no way of demonstrating that treating DCIS does not diminish the risk of invasive breast cancer.
There were other aspects of the study that did not get much publicity, and that also contradict Dr Narod’s pronouncement that “the best way to treat DCIS is to do nothing.” In women who received a diagnosis of DCIS before age 35, the risk of breast cancer mortality, at 7.8%, was more than double that in the general population, and in black women, at 7.0%, it was also roughly double. Who knows what the mortality would be in those groups if they did not receive treatment?
It should be pointed out that Dr Narod’s suggestion of non-treatment for DCIS was not part of the published study, but a statement to the media. And it should also be pointed out that eminent clinicians, such as Dr Monica Morrow, chief breast cancer surgeon at Memorial Sloan Kettering, disagreed with Dr Narod and said that DCIS should indeed be seen as a cancer precursor and treated as it is currently, mostly with a lumpectomy, which removes those precancerous cells in the milk duct.
… and one more …
The U. S. Preventive Services Task Force (USPSTF) has not reversed its non-recommendation of PSA testing despite the many voices raised in disagreement.
Doc Gumshoe weighed in on this topic back in May, 2013, and here’s what I said then:
The Evidence Regarding PSA Testing and Prostate Cancer Treatment
First, let’s look at the evidence marshaled by the USPSTF in their full report. The basis for their recommendation was that although “screening based on PSA identifies additional prostate cancers, most trials found no statistically-significant effect on prostate cancer-specific mortality.” The report cites two trials to answer the question whether PSA screening decreases prostate cancer mortality, both of these described as being of “fair quality.” One of these, conducted in the US, followed 76,693 men for 7 years and found no significant difference between men assigned to PSA screening and those assigned to “usual care,” meaning no screening. However, it turns out that of the men in the usual care group, 44% had had a PSA test before entering the trial, and 52% had a PSA test at some point during the trial. That’s what you would call a thoroughly tainted control group. So they were basically comparing men who were supposed to have had PSA tests with men who were not supposed to have PSA tests, but many or most of whom had PSA tests anyway.
The other trial, a multicenter European trial, found that PSA screening every 2 to 7 years was associated with a 20% relative risk reduction in a subgroup of 162,243 men aged 55 to 69 years.
That’s not nothing.
However, there’s more. One of the participating centers, in Sweden, decided to publish their results separately. PSA screening every two years in a group of 20,000 men resulted in a decreased relative risk for prostate cancer mortality of 44% after 14 years of follow-up. That’s quite a lot more than nothing.
Since then, there is new information. A study using the U. S. National Cancer database found that one year after the USPSTF recommendation against routine PSA testing was issued in 2011, there was a large drop, 28%, in the diagnosis of prostate cancer, due without doubt to a decline in the number of men having PSA tests. The largest decline, 38%, was in the diagnosis of early, low-risk prostate cancer. But important declines were also found in the diagnosis of intermediate (28%) and high risk (23%) prostate cancers. These, by the way, are relative declines – the 23% figure is a drop from the number of men diagnosed with high risk prostate cancer prior to the USPSTF recommendation, and I do not have that number available, since the full study will not be published in the Journal of Urology until December.
Not diagnosing – and treating! – prostate cancer in those intermediate- and high-risk patients is tantamount, in my judgment, to condemning many of those men to an unnecessary and painful death.
And here are a few other data points from SEER: the five-year survival rate for men diagnosed with local and regional prostate cancer is close to 100%. But the five-year survival rate for men diagnosed with distant (i.e., high risk) prostate cancer is only 28.2%. Incidentally, the five-year survival rates for all forms of prostate cancer have increased dramatically since PSA testing was introduced in the mid 1980s – from 66% in 1975 to 88.4% in 1990 to 99.7% in 2007.
We’ll see what happens if the USPSTF doesn’t alter its recommendation. The lead author of the study, Dr Daniel Barocas, calls it “throwing the baby out with the bathwater.” I call it creating more work for the undertaker.
A general conclusion
It’s not that I don’t respect data. It’s not that I oppose guidelines. It’s not that I’m invariably in favor of the most aggressive treatment. But all of this highly complex and highly important matter of making healthcare decisions requires careful, considered, wise, experienced, and compassionate judgment on the part of the healthcare providers. The data and the guidelines have to be weighed and considered as they apply to the individual human being, for whose life and well-being the healthcare provider is responsible. I don’t want my life – or any of yours! – to be determined by an algorithm. I hold skepticism to be a cardinal virtue!
* * * * * *
… and in the meantime, I’ve been gathering together new bits of information about whatever progress is being made in treating Alzheimer’s disease, and I’ll share it with Gumshoe nation relatively soon.
Source URL: https://www.stockgumshoe.com/2015/10/skepticism-that-cuts-both-ways/