[Ed Note: Michael Jorrin, who I like to call “Doc Gumshoe”, is a longtime medical writer (not a doctor) who shares his thoughts with us from time to time, generally on non-financial topics in health and medicine (as today, though, he sometimes mentions a couple publicly traded companies). His words and opinions are his own. You can see his past articles here.]
The word “perspectives” can be a bit contradictory. The reader, an astute denizen of Gumshoeland, can already tell that what this post will feature is my own point of view on a large and diverse bundle of complex issues. But perspective also means that the subject matter that’s closest to the viewer looms largest in the picture. I have a small 17th century steel engraving with a religious subject taking up most of the foreground, but way in the background there is also a minuscule depiction – and I mean minuscule, less than an inch across – of a hunting party, including a couple of hunters, dogs, and deer. The hunters are not aware of what’s going on in the foreground; they’re busy with their own pursuits, which are important to them.
The same effect of perspective applies to the time dimension – often the most recent events loom largest, but that doesn’t mean that they are the most significant. When that happens, we have to “put things in perspective” – meaning, trying to understand the likely long-term importance of the issues of the day.
So, first, here are some current issues and trends that loom large. Then, I’ll try to make adjustments, to account for the distortions of the perspective of the moment.
The primacy of technology, and what this implies for diagnosis and treatment
In my days as a film-maker, before I became a medical writer, I made a short documentary about the Mayo Clinic, and one of the things that they proudly demonstrated to me, and which amazed me utterly, was a blood analyzer that could produce upward of 40 test results from a single small blood sample in just a few minutes. I had taken organic chemistry in college, and I had a pretty good idea of what it took to carry out an analysis for even one single substance. What this machine could do, accurately and quickly, was many, many orders of magnitude greater than anything I had imagined.
But that kind of thing is now routine. Consider this: a person with diabetes can now have an implanted device that releases either insulin or glucagon as needed, depending on the levels of these two agents, which are monitored in real time. Insulin is needed for the metabolism of glucose in the bloodstream, and glucagon is the agent that tells the liver to convert stored glycogen into glucose, so this device keeps the diabetic individual’s glucose level in balance, avoiding both hyperglycemic (too much glucose) and hypoglycemic (insufficient glucose) episodes. The whole thing is monitored by a smart phone, which does the calculations and sends instructions back to the device. (Note, this device is not yet in wide use – diabetics are still monitoring their blood glucose with needle sticks.)
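The logic of such a closed-loop device can be sketched very simply: read the glucose level, and deliver insulin if it is too high, glucagon if it is too low. What follows is only an illustration of that idea, not a description of any actual device – the thresholds and dose factors are invented for the example, not clinical values.

```python
def control_step(glucose_mg_dl, low=70, high=180):
    """Decide what a hypothetical closed-loop device would deliver
    for one glucose reading (in mg/dL).

    Thresholds and dose factors here are illustrative only.
    """
    if glucose_mg_dl > high:
        # Too much glucose: deliver insulin, scaled to the excess.
        return ("insulin", round((glucose_mg_dl - high) * 0.01, 2))
    if glucose_mg_dl < low:
        # Too little glucose: deliver glucagon, scaled to the deficit.
        return ("glucagon", round((low - glucose_mg_dl) * 0.005, 3))
    # In range: do nothing.
    return ("none", 0)

# The smartphone side of the system would run this on each reading:
for reading in [95, 210, 62]:
    print(reading, control_step(reading))
```

The real engineering challenge, of course, is not this decision rule but the continuous sensing, the delay between delivery and effect, and the safety margins – which is part of why such devices are not yet in wide use.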
To put that device in perspective, my father was a type 2 diabetic, diagnosed in his 40s, and totally adherent to treatment. He relied on injected insulin, that being before the oral agents had been developed. The only way he could monitor his disease state was by means of urine tests – a little test strip, like litmus paper, dipped into a test-tube of urine. He then compared the color of the test strip with a reference, which told him whether he was “clear” – good news! – or whether there was sugar in his urine, the levels being one through four-plus, four-plus being decidedly bad news. But he also had to be on the alert for hypoglycemic spells, which could lead to dizziness, or even to severe reactions. To ward these off, he carried sugar-cubes with him at all times. He only had actual blood glucose tests when he saw his doctor (who was our excellent, caring, family doctor and good friend) once every few months. That was the norm. Just consider the change!
Therapeutic drug monitoring
The implanted device that I described, which at the same time monitors blood glucose and delivers the quantities of either insulin or glucagon needed to maintain blood glucose at optimum levels, is a form of therapeutic drug monitoring. This is a practice that Doc Gumshoe confidently predicts will grow in coming years, optimizing the effectiveness of drug treatment and, at the same time, looking for ways to attain the best efficacy with the lowest dose, thereby lowering costs.
Right now, once a patient is prescribed a drug, that patient is given doses of the drug based on across-the-board information about the drug’s behavior, based on studies that were done fairly early in the drug development process. Way before the crucial clinical studies are started, the developer knows some basic facts about the drug, such as the minimum lethal dose, what the therapeutic concentration of the drug should be in the patient’s serum, how long it takes to achieve that concentration, what the drug’s half-life is, and other such data. This data is gathered based on studies in animals (happily, it’s not necessary to kill human subjects to learn what the minimum lethal dose of a drug is – mice will do just fine), and also in healthy volunteers. Based on this data, the developer arrives at such vital parameters as the “correct” dose and the “correct” dosing interval and duration. But those parameters are not usually adjusted to the individual patient.
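The half-life alone already determines a great deal about the “correct” dosing interval. As a back-of-the-envelope sketch (an idealization assuming simple first-order elimination, not a clinical model), the concentration remaining after a single dose decays by half each half-life, and with repeated dosing the trough concentration accumulates toward a steady state:

```python
def concentration_after(c0, hours, half_life):
    """Serum concentration remaining after `hours`, assuming simple
    first-order (exponential) elimination."""
    return c0 * 0.5 ** (hours / half_life)

def trough_at_steady_state(dose_conc, interval, half_life):
    """Trough concentration (just before the next dose) once repeated
    dosing reaches steady state.  Each dose raises the concentration by
    `dose_conc`; a fraction f of that survives each interval, so the
    trough is the geometric series dose_conc * f / (1 - f)."""
    f = 0.5 ** (interval / half_life)
    return dose_conc * f / (1 - f)

# A drug with a 6-hour half-life dosed every 6 hours: half of each dose
# survives to the next dose, so the trough settles at 1x the per-dose rise.
print(concentration_after(100, 12, 6))       # 25.0
print(trough_at_steady_state(10, 6, 6))      # 10.0
```

This is the across-the-board calculation; the point of therapeutic drug monitoring is that the real half-life and absorption vary from patient to patient, so the numbers this formula predicts may be well off the mark for any given individual.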
What therapeutic drug monitoring aims to do is exactly that – adjust those parameters to the individual patient. This is of no great importance when the drug is being taken for a short time – ten days for an upper respiratory infection, or something of the sort. But the importance grows with the duration of treatment, and for patients who may need to continue taking a drug for years, or for life, therapeutic drug monitoring may confer genuinely big advantages.
Factors that affect a drug’s activity in the body are absorption, distribution, metabolism, and excretion (ADME). And these can vary considerably from person to person. So, it could be highly valuable to know how much drug is in a patient’s system at a given time, and how that correlates with the patient’s disease state. Based on therapeutic drug monitoring, dosing might be increased, to control symptoms more aggressively, or decreased, to reduce adverse effects or lower costs.
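Under the common simplifying assumption of linear (dose-proportional) pharmacokinetics, the basic monitoring-based adjustment is just a proportion: scale the current dose by the ratio of the target concentration to the measured one. A toy sketch, with invented numbers:

```python
def adjusted_dose(current_dose_mg, measured_conc, target_conc):
    """Proportional dose adjustment, valid only under linear
    pharmacokinetics (a standard first approximation in therapeutic
    drug monitoring).  Concentrations must share the same units."""
    return current_dose_mg * target_conc / measured_conc

# A patient on 200 mg whose measured trough is 4 mg/L against a
# target of 6 mg/L would, on this simple rule, move to 300 mg:
print(adjusted_dose(200, 4, 6))  # 300.0
```

Real adjustments are hedged with more than this – many drugs are not linear across their dosing range, and the clinician also weighs adverse effects and the disease state – but the proportion above is the arithmetic at the core of the practice.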
Therapeutic drug monitoring needs to happen quickly, conveniently, and inexpensively. That has been achieved with the on-board blood glucose / insulin monitoring and delivery system, but not, so far, with other drugs. However, the pharmaceutical industry is working on it. The makers of testing equipment obviously have an interest in coming up with devices of all kinds – a quick Ebola virus test, for example. But imagine for a moment that a pharma outfit had an effective Ebola treatment. That pharma would have a huge incentive in developing a quick Ebola virus test, because it would dovetail with their Ebola treatment. The incentive to the pharma is much greater than to the testing equipment maker.
That’s the situation with therapeutic drug testing in general. The greatest incentive for development of the necessary testing equipment is to the pharma companies that are marketing the expensive drugs that need to be taken for long periods. It will happen, and it will become routine.
The impact of technology and data, in both diagnosis and treatment, has been transformative. Think of imaging. When they were first introduced, X-rays were at once miraculous and primitive. Now we have, besides X-rays, ultrasound, which is relatively cheap and noninvasive. And we have magnetic resonance imaging (MRI), positron emission tomography (PET scans), and computerized axial tomography (CAT scans or CT scans), which can use sequential X-rays to create cross-sections of the body or an image in three dimensions. X-rays formerly used photographic film, which is sensitive to radiation as well as to visible light. The film had to be developed chemically. The exposure had to be calculated – what is it that we want to see on the X-ray? – tissue detail or bone detail? You couldn’t get both on the same film. Now, X-rays are digital, ready for viewing immediately, and the digital image can be manipulated to reveal vastly more than those older X-rays could show.
The importance of imaging in diagnosis is huge. It is now taken for granted that most parts of our bodies can be inspected in detail by one or another of the many imaging devices. Tiny cameras can snake, not only through colons, but through our arteries, looking for occlusions. To put that in perspective, when I had my annual physical examinations as a relatively young man, investigation of the colon was done by means of a rigid steel tube, about a yard long, which was inserted into my nether orifice and pushed until it would go no further. This yielded information about the descending colon only – the transverse and ascending colon were not accessible. Some of you may remember those procedures. The examining physician would say, “You may feel some discomfort.” I was thinking, “Is this really preferable to dying of cancer?”
There seems to be no question as to the benefit of colonoscopies as currently carried out. The entire colon can be scanned; discomfort is minimal (except for the tedious and disagreeable prep); any potentially pre-cancerous polyps are removed; in short, colon cancer is rendered largely preventable.
As for the benefits of coronary angiography – the procedure in which the camera cruises around in our arteries – the benefits are debated. It depends on the individual patient; there are no hard and fast rules, although it’s generally required in preparation for placement of a stent in the coronary arteries. Persons with acute coronary syndromes, which include previous heart attacks, strokes, or angina at rest, may be candidates for coronary angiograms. For most people, even for those who might be considered at some risk due to the usual factors – elevated blood pressure or cholesterol and such – the value of the coronary angiogram is questionable.
Another high-tech procedure whose benefits have been widely challenged is the full-body CAT scan. This procedure is not directed at finding the cause of a specific symptom, or looking at potential issues in a specific region of the body. Mammograms look for breast cancer, CAT scans (or MRI) look for evidence of strokes, etc. But the full-body scan casts the net exceedingly wide. Does the scan show any kind of potential problem anywhere in the entire body?
Chances are it does, since hardly any of us are perfect. Something will show up that needs further investigation. So what the full body scan does quite often is open the door to more tests. And then the question arises, what to do? Speaking for myself, if I had a full-body scan (which I almost certainly will not!), and there were suspicious signs in my kidneys or liver or pancreas, I would feel compelled to follow up with further tests – imaging, biopsies, surgery, whatever the next step might be. I would feel I had no choice. Having let that genie out of the bottle, I would have to do whatever the genie commanded, whether I wanted to or not, and whether I really honestly thought there would be a benefit from going down that path.
"reveal" emails? If not,
just click here...
The essential difference between the full-body scan and a specific diagnostic test, such as a mammogram, is that mammograms look for lesions that may be breast cancers. The radiographer is highly experienced in evaluating these images, having looked at many thousands. In a fraction of the cases, a biopsy may be recommended, and most of the time, interpretation of the biopsy is straightforward, resulting in a clear answer regarding the need for further treatment. (Although a recent report suggests that in some lesions, such as cancerous cells growing in the milk ducts (ductal carcinoma in situ), there may be considerable disagreement between the interpretations of many front-line pathologists in comparison with those of recognized experts.) And, from the most important perspective, breast cancer screening by means of mammograms has succeeded in greatly reducing breast cancer mortality, such that about three out of four women actually diagnosed with breast cancer are treated to remission.
There is no comparable benefit with the full-body scan. It is a hammer in search of a nail.
Diagnostic devices and apps
These number in the thousands, and perhaps the tens of thousands. We read about the instant-read thermometers that are being used to check whether people crossing borders have fever, and thus might possibly be infected with Ebola. No longer do we stick a thermometer in a person’s mouth and wait for the mercury column to creep up to the mark separating “normal” from “fever.” (Oh – wait – I forgot! – Mercury is now frowned upon in thermometers because the thermometer might break and the mercury is considered to be toxic.) There are immense numbers of apps that you can load on your smartphone that will monitor vital signs and tell you whether you have exercised enough, eaten too much, or in some way deviated from your healthy living regimen. My eyebrows are raised.
But there are diagnostic devices that are potentially highly valuable. An example is a microneedle skin cancer diagnostic device, which injects microscopic crystal particles (nanoparticles) subcutaneously. These can then be inspected microscopically, and the light they reflect is a clue to the presence of skin cancer.
A diagnostic area that is moving rapidly is the search for biomarkers for all manner of diseases and conditions. Biomarkers are especially valuable when the only alternative means of diagnosing a disease is a complicated, risky, or expensive procedure. Biomarkers can run the gamut, from those that are no more than associated with the disease – e.g., C-reactive protein (CRP) is a sign of inflammation, and elevated CRP is associated with a number of conditions – to those that are intrinsically linked to the disease process itself, as is the case with some liver function tests.
And biomarkers could be genuinely transformative in diseases/conditions that are already well-established before any symptoms appear. Alzheimer’s disease (AD) is the perfect example. Most research points to deposits of beta amyloid in the brain as the culprit in AD, and it seems evident that the process of beta amyloid deposition can start many years before there are any signs of dementia. So far, treatment in patients in whom dementia has already become evident is only marginally beneficial – at best, it only slightly slows the progression of dementia. A biomarker for AD would permit treatment to be started much earlier. Researchers are investigating the possibility of identifying traces of the beta amyloid, and of another substance designated as tau, in the spinal fluid, as an AD biomarker. Similarly, biomarkers for Parkinson’s and other neurodegenerative diseases are being sought as a means of starting treatment sooner.
We wouldn’t be hearing so much about the potential benefit of sequencing the entire genome of an individual in order to arrive at a course of treatment if it hadn’t become so cheap (relatively!) to do it. As of 2008, it cost over a million dollars to sequence a single person’s genome; now it costs just a few thousand, and in the context of life-saving treatment, that’s not a whole lot of money. Many current cancer treatment options rely on genomic sequencing, although not necessarily the whole genome. Some of those options are discussed in Doc Gumshoe’s recent blog about current cancer treatment, which you can check out here. And, as most of us know, a number of diseases are associated with specific genes or gene mutations; this includes some breast cancers, some neurologic conditions, amyotrophic lateral sclerosis (ALS, or Lou Gehrig’s disease), and many others.
The question then arises, “shouldn’t we all have our genomes sequenced, so that we know what to be on the lookout for?” Or, even, shouldn’t all babies have their genomes sequenced at birth? For sure, the outfits that do the genome sequencing would only be too happy to have that be the general view.
In some cases, obtaining one’s genetic profile may lead to a decision that could be defended as rational. We remember that Angelina Jolie decided to have a double mastectomy based on her having the BRCA1 gene. But some oncologists disagreed with that decision, pointing out that while the BRCA1 gene increased her risk, it wasn’t determinative, and, in any case, given her heightened awareness of risk, she would assiduously monitor herself, and if a cancer did occur, she could start treatment really early, which would maximize the chances for success. (Note, in the statistic I quoted earlier, it’s likely that many of the women diagnosed with breast cancer who did not attain remission may have been screened too late.)
However, what do we feel about confirming that one is certain to develop ALS? People with that gene probably already have a good idea that they are at risk, simply because of the familial associations. But since currently there’s no cure, why undergo a test whose result is to quash your hope of escaping that particular doom? I would like to think that I have many happy and productive years ahead of me, sharing my interests and pleasures with my beloved wife. But I emphatically do not want a detailed map of my future.
The shift in new drug development away from Big Pharma
Back in December of 2011, the McKinsey Quarterly featured a piece entitled “A Wake-up Call for Big Pharma.” They reported that Big Pharma was shrinking. At the time of their report, revenues for Big Pharma had shrunk by 2% from the previous year, and midsize companies had also shrunk. But biotech had grown hugely. The recommendation of the McKinsey geniuses was that Big Pharma should hunker down, stick to their core competency, i.e., marketing, and let the biotechs do the drug development.
This has happened, but only up to a point. Several big and midsized companies cut back on their research and development staff. Just recently, pharmas that announced they were cutting back on R & D include Amgen, Biogen Idec, GlaxoSmithKline, and Pfizer.
But what has also been happening is that the larger pharmas have been voraciously snapping up smaller biotechs because of their pipeline products. Recently, AbbVie ponied up $21 billion, give or take a few pennies, for the biotech Pharmacyclics, which has an agent called Imbruvica. Imbruvica (ibrutinib) demonstrated very good results in treating patients with chronic lymphocytic leukemia (CLL), and the expectation is that it could move into first-line treatment for CLL as well as for other cancers. Ergo, a potential b-word drug.
However, Pharmacyclics is not the sole owner of Imbruvica. That ownership is shared with Janssen, which in turn is a part of the Johnson & Johnson empire, so presumably AbbVie and Johnson & Johnson will work out who markets the drug where. Still, $21 billion is a pretty penny, and one wonders whether the deal is worth it for AbbVie. For purposes of comparison, Gilead’s revenue went from $10.8 billion in 2013 to $24.5 billion in 2014, a jump of nearly $14 billion, most of which was due to their hepatitis C agents, Sovaldi and Harvoni. How long will it take AbbVie to recoup that $21 billion, sharing the market with Janssen/Johnson & Johnson? And what if another CLL agent comes along?
Please forgive my brief digression into discussion of financial issues, which I freely admit is not my strong suit. However, I cited that particular deal as an illustration of where the R & D is happening and how Big Pharma is responding. Yes, most R & D is being done by other than Big Pharma – biotechs and academic medical centers. Not infrequently, a small group of scientists comes up with an idea for a new treatment, does a bit of low-cost work, gets some funding either from a university or from private investors (e.g., a venture fund), starts a company, does some preliminary research based on which they get more funding … and at a certain point, when there are sufficiently robust results (we’re not talking about FDA-recognized studies), moves into the big time. Any way we figure it, in most cases, the conclusion of this process is a deal of some kind with Big Pharma, because it’s exceedingly difficult for a biotech to fund clinical studies of the scope needed for regulatory approval, and then to move on to market the new drug. Big Pharma keeps a watchful eye on developments in biotech, and when there’s opportunity to employ their bankroll, they pounce.
Another way Big Pharma is poised to reap major financial rewards is through the booming field of biosimilars. For a while it seemed as though biosimilars were anathema to Big Pharma, but they seem to have realized that they have immense advantages in that area, and they’re moving in aggressively. Given that biosimilars are sure to enter the market, their calculation is that if the costs of competing biosimilars are roughly comparable, what insurer, physician, or patient would opt for a biosimilar made by an unknown company in a poorly-regulated environment rather than a biosimilar made by a pharmaceutical company with an excellent global reputation?
Guidelines for everything
Every medical society promulgates guidelines, and most physicians (I would guess) at least pay attention to the guidelines even if they do not follow them blindly. In Doc Gumshoe’s carefully-considered opinion, that’s just about as it should be. The medical societies consider all the evidence, and grade the evidence as to its reliability – large, well-controlled clinical trials demonstrating statistically significant differences between agents get the top grades, while smaller trials and observational studies are given somewhat lower marks. All of the data goes into a large pot, stirred around, tasted, discussed, and the committee of chefs makes a pronouncement as to what the sum total of all this evidence purports to show. The objectives of guidelines are to come to some sort of consensus regarding the best practices in specific treatment areas, and probably to bring the laggard segments of the health-care establishment up to scratch.
Sometimes guidelines target concerns that the public health experts think are not getting enough attention. The Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC) issued sweeping guidelines in 2003 based on data reporting that more than 60% of Americans were either hypertensive or in a new category that they had created, dubbed “prehypertensive,” which took in just about everybody with a systolic blood pressure reading over 120 mmHg. They were concerned that a large fraction of Americans with hypertension were unaware of their condition and the serious implications it had for heart disease, and also that most Americans who knew they were hypertensive were not getting adequate treatment. Their treatment recommendations, known as JNC7, were largely based on a clinical trial called ALLHAT, which found that in terms of outcomes, a thiazide diuretic was just as effective as two other agents, those being a calcium-channel blocker and an ACE inhibitor. (The fourth agent in the trial was an alpha blocker, a class of drug not generally used to treat hypertension). Therefore, JNC7 recommended that all new hypertension patients be started on a diuretic, followed by a beta blocker if the goals were not met.
JNC7 did not come out and say so, but a widespread suspicion at the time was that diuretics and beta blockers were favored mostly because they are cheap, and hypertension is pervasive, with significant impact on the entire healthcare system. Many cardiologists were dubious about JNC7 and prescribed newer and, in their opinion, better blood pressure drugs, such as angiotensin-receptor blockers (ARBs). One view of the JNC7 guidelines was that it was economical medicine for poor people. Another view, perhaps more charitable, is that JNC7 was a sincere attempt to address a large unmet need.
In any case, the JNC has now updated those guidelines, putting out JNC8 in October of 2014. The cut-points for starting treatment are now higher, and the list of recommended agents has been expanded. Perhaps JNC7 did its job in making many millions of Americans aware of hypertension as a significant health risk and getting treatment to millions who were getting no treatment at all.
An even wider-angle view
The health-care system, in my view, should have at least three objectives: one, to maximize the health of the community; two, to prevent illness and disease; and three, to manage illnesses and diseases effectively when these occur. As I look at it, about 90% of the effort is lavished on the third objective, maybe 9% on the second, and a bare minimum on the first. When you go to the doctor for your annual check-up, if you’re doing okay, the doctor says something like, “Keep doing what you’re doing.” The doctor might suggest losing a few pounds (not to you, dear reader) or quitting smoking or cutting back on booze, but not much else. The doctor doesn’t know a whole lot about your lifestyle, doesn’t have time to ask, and is probably reluctant to get into the deep and roiled waters of nutrition, exercise, and other things that we can do to improve our health.
That leaves the whole realm of “healthy living” to other Interested Parties, and that’s where my index of suspicion starts to rise steeply. The number of things – foods, activities, supplements – that are promoted to us under the rubric of healthy living is vast, and getting vaster. Before I try any of those things, and certainly before I adopt them, I want evidence, and the evidence mostly does not satisfy me. I don’t want only to know that it works; I also want to know how it works. And, before I try it, I want to know that it works for more than a few isolated individuals.
This is not to say that I dismiss all “anecdotal evidence.” Anecdotal evidence can be the acorn from which mighty oaks grow. Here’s just one example: the belief that “milkmaids don’t get smallpox.” This was purely anecdotal – nobody knew why, nobody had statistical records comparing the prevalence of smallpox in milkmaids with that in the general population. In the eighteenth century, a means of preventing smallpox did actually exist. It was called “variolation,” and it consisted of inoculating people with very small quantities of the substance emanating from a lesion in a person with smallpox. The inoculated patient usually got sick, but did not develop full-fledged smallpox and was thereafter immune. Around the turn of the century, a physician named Edward Jenner, investigating the anecdotal evidence that “milkmaids don’t get smallpox,” tried inoculating people with substance from cowpox lesions. And those individuals did indeed develop immunity to smallpox. That was the origin of vaccination, and the word itself comes from the Latin word for “cow” – vacca. From that anecdotal seed grew the tree of vaccination, which has saved untold millions from a horrible disease. Similar seeds have given rise to many proven interventions (I could make a long list.) But we should not ignore the forests of weeds that grow from many other anecdotal seeds.
How I got here
A recent comment (more than a bit snarky) on the Doc Gumshoe celiac disease post requires a response. The reader said: “Judging from many of your comments on this site you are quite a ‘self-proclaimed expert’ yourself. Not sure what ‘made up’ qualifications you imagine you possess but it is quite obvious most of your knowledge and expertise is rooted in conjecture, hyperbole and pseudo-intellectualism.”
Indeed, I am not a doctor, and I do not have a medical degree. I have been a medical writer for about 30 years, and a good deal of what I do is in the area of continuing medical education (CME), which is aimed at providing healthcare professionals at all levels with information that may have emerged after they completed their training, as well as the views of those eminences known in the field as “key opinion leaders,” or KOLs. There is hardly a disease state or treatment area that I have not at some point worked in, and I try hard to keep abreast of the huge stream of new information that flows daily from many sources.
I try to understand this information, and I try to evaluate it with a questioning and somewhat skeptical attitude. I can best describe my perspective on medical learning by invoking my 11th grade Advanced Algebra teacher, Miss Charlotte Truesdell and describing some of her deep-rooted principles.
Miss Truesdell profoundly believed that mathematics was the underlying foundation of everything, including all science, and that it was not difficult to understand mathematics if you just put your mind to it – “use the brains you were born with,” in her words. She did not bother much with the explanations of supposedly complex matters as set forth in our text-book. Her explanations were much, much simpler, and if we were using our wits, they seemed entirely reasonable. I think Miss Truesdell thought that the explanations in the text were unnecessarily abstruse. Probably because the text-book writers themselves had had to struggle with those concepts, they didn’t want it to sound easy for the students. But Miss Truesdell herself found mathematics to be logical – “just common sense” – and she helped us to find math, if not downright easy, at least logical and reasonable, and within the grasp of our 11th grade minds.
But at the same time, Miss Truesdell did not want us to take anything simply on her say-so. She wanted us to understand why it was so. In going through those many-step proofs, she would always go back to the underlying reasons for each step – all of the stuff that we supposedly knew and had internalized. When a student put a homework problem on the blackboard – and we did this every day – her way of correcting a mistake was not simply to say, “that is incorrect,” but to ask, “how can that be?”
Those simple, reasonable responses have stayed with me. A great deal of my perspective – not just on the medical and healthcare matters that I work with, but on a great deal more – is informed by that voice, speaking the words, “just use the brains you were born with.” And, occasionally, “how can that be?”
* * * * * * *
There will be a brief hiatus in the Doc Gumshoe posts, because I am going to have knee replacement surgery in early April, and after that will be imprisoned in the rehab torture chamber for a couple of weeks. I’ll let you know how it goes. There might be interesting details.