Let’s lead off with a news item that Doc Gumshoe, despite his widely-acknowledged acumen, could not figure out at all. It took an Op-Ed in the New York Times to sweep the cobwebs out of his noggin.
The news item was this: children born in the month of August in a number of states in the US have the highest rates of attention deficit hyperactivity disorder, as compared with children born at other times of the year. Specifically, children born in August were found to have a nearly 40% higher chance of being diagnosed and treated for ADHD than children born just one month later. This was stated in the context of a general report on the diagnosis and prevalence of ADHD, which included such puzzling bits of data as that three times as many children in Kentucky receive that diagnosis as do children in Nevada. One in five children in Kentucky is diagnosed with ADHD, and about 5% of all children in the US currently are taking some form of medication for ADHD.
My initial response to this was that it had to be a diagnosis issue, not a matter of the prevalence of the actual condition. I could not imagine that the sign of the Zodiac under which kids were born could have any effect on their ADHD, although perhaps there could be a difference based on the time of year at which the mother brought the infant to term. Was it possible that kids brought to term in the hot summer months might have different developmental characteristics than those who rested in their mothers’ wombs until the cozy winter months?
The puzzle was more or less cleared up by the aforementioned Op-Ed, which in turn referred to a paper in the New England Journal of Medicine. (Layton TJ. N Engl J Med. 2018 Nov 29;379(22):2122-2130) The underlying reason for the peculiar prevalence of the ADHD diagnosis in August-born children is almost certainly that in 18 states, in order for a child to be enrolled in kindergarten for the school year, the child has to reach his or her fifth birthday before September 1st. So if the birthday of the kid in question is August 30th, he/she gets into kindergarten right away, but if it’s September 1st or after, he/she has to “wait ‘til next year.” The result of this rule is that the kids born in August are the youngest in the class. And, as a result of being the youngest, they also act the youngest – a bit more inattentive and fidgety. A few months make a lot of difference in the development of a young child. Parents are well aware of this; however, teachers – who were largely responsible for labeling these kids as being affected by ADHD – are more likely to have equal expectations for all the younglings in their classroom and tag the younger ones with ADHD when they’re really only acting their age.
All of this needs to be considered in the context of what many experts consider to be the significant overdiagnosis of ADHD, which has serious potential consequences when there are attempts to treat the condition through drugs. ADHD drugs need to be considered with great care before prescribing them for children, especially very young children, who might well continue using these drugs for long periods. A misdiagnosis, based on an age difference between a kid and his peers, could have enormous consequences.
The aspirin paradox: which way to jump?
I preface this discussion by stating that I take a daily full-dose (325 mg) aspirin. I’ve been doing it for about 30 years, and I’m not going to quit. But the contradictory news items are nonetheless of concern.
Most recent was the report of the ASPREE trial, conducted in Australia and the US. (McNeil JJ. N Engl J Med. 2018 Oct 18;379(16):1519-1528) This was a placebo-controlled trial, comparing 100 mg enteric-coated aspirin with placebo in 19,114 healthy older individuals, mean age 74 years, for a period of five years. The short take is that daily low-dose aspirin did not result in any benefit in terms of the primary combined endpoint, which was death, dementia, and persistent physical disability. The event rate in the placebo cohort was a tiny bit lower than in the group taking the daily aspirin – 21.2 events per 1,000 person-years in the placebo group versus 21.5 per 1,000 person-years in the aspirin group.
Particularly attention-getting in this trial was the reported failure of aspirin to bring about a significant reduction in the risk of cardiovascular disease. The aspirin-treated cohort experienced 10.7 cardiovascular events per 1,000 person-years, while the placebo group experienced 11.3 events per 1,000 person-years, a non-significant difference.
As expected, major hemorrhages and gastrointestinal bleeding occurred somewhat more frequently among the aspirin-treated group than among the subjects taking the placebo.
In contrast, a large cohort study in Sweden found that stopping low-dose aspirin led to a rapid and significant increase in the risk of heart attacks and strokes. (Sundström J. Circulation. 2017 Sep 26;136(13):1183-1192) The study followed 601,527 Swedish adults, age 40 years or more, who had been prescribed low-dose aspirin, for a period of three years. In Sweden, we note, aspirin is available by prescription only, not over-the-counter as in the US. At the start of the study, all study participants were taking aspirin, but during the three-year course of the study about 15% of the participants stopped taking aspirin.
Overall, 62,690 heart attacks, strokes, or deaths from cardiovascular causes took place in the persons followed by the study. The rate of these events was 37% higher in the subjects who discontinued aspirin than in those who stayed with the treatment plan. This corresponded to one additional cardiovascular event observed per year in one of every 74 patients who quit taking aspirin. The study authors note that the risk increased shortly after discontinuation of aspirin and did not appear to diminish over time. In the group of patients who had already sustained a cardiovascular event and had been prescribed aspirin in the hopes of avoiding a repeated incident, there was one additional cardiovascular event for every 36 patients who dropped their aspirin regimen.
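For readers who like to see how a "one extra event per 74 patients" figure relates to a "37% higher rate," here is a rough back-of-the-envelope check. The baseline event rate below is derived from the study's aggregate numbers (62,690 events among 601,527 adults over three years); it is an approximation, since the cohort average lumps together those who continued and those who stopped aspirin, so the result only lands near the study's reported figure.

```python
# Rough number-needed-to-harm (NNH) check for the Swedish
# aspirin-discontinuation study. All inputs come from the
# aggregate figures quoted in the text; the baseline rate is
# an approximation, not the study's own adjusted estimate.

events = 62_690            # CV events/deaths observed (from the study)
participants = 601_527     # adults followed (from the study)
years = 3                  # follow-up duration (from the study)

# Approximate annual event rate across the whole cohort
baseline_rate = events / (participants * years)   # ~0.035 per person-year

relative_increase = 0.37   # 37% higher rate after stopping aspirin

# Extra events per person-year attributable to discontinuation,
# and its reciprocal: patients stopping per one extra annual event
risk_difference = baseline_rate * relative_increase
nnh = 1 / risk_difference

print(f"NNH per year: roughly {nnh:.0f}")
```

The result comes out in the high 70s, reasonably close to the study's reported one-in-74; the gap reflects the crude cohort-average rate used here.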
A spokesperson for the American Heart Association, Dr Nieca Goldberg, agreed with that assessment, commenting that “once you stop aspirin, the blood’s clotting tendency goes up.” And, as we know, blood clots are a principal cause of strokes and are also involved in myocardial infarctions.
What might account for the difference in the results of these two studies? In the Australia/US study, daily aspirin delivered no benefit, while in the Swedish study, individuals who stopped taking aspirin quickly experienced a significant uptick in the rate of serious cardiovascular events, strongly supporting the conclusion that aspirin did result in cardiovascular benefit. How can that be?
If we look at the patient populations in those two studies, clear differences emerge, and these differences probably account for the differences in the study outcomes. The Australia/US study was a prospective placebo-controlled study in older patients who were in good health at the start of the study. It was specifically focused on the question whether daily aspirin was effective in primary prevention – that is, in prevention of the primary occurrence of disease. The Swedish study was observational, in a much larger group of much younger patients. We don’t know anything about the health status of the patients in the Swedish study, but we do know that they were prescribed aspirin. It’s highly likely that in many of those patients aspirin was intended for secondary prevention, meaning that many patients had either already experienced a stroke or myocardial infarction or were at elevated risk for some kind of cardiovascular event.
And while we’re on the subject of aspirin, two studies were recently published in JAMA Oncology. (Barnard ME. JAMA Oncol. 2018 Oct doi: 10.1001/jamaoncol.2018.4149; Simon TG. JAMA Oncol. 2018 Oct doi: 10.1001/jamaoncol.2018.4154) Both were based on analysis of follow-up data from the Nurses’ Health Studies, which ran from 1980 to 2015. One study found that women who regularly took low-dose aspirin had a 23% reduction in ovarian cancer as compared with aspirin non-users. The other study reported a 49% reduction in the incidence of cancer of the liver in women who took regular (not low-dose) aspirin twice-weekly. This follows a 2017 study, published in Cancer Epidemiology, Biomarkers and Prevention, which reported an overall 46% reduction in pancreatic cancer in men and women who took low-dose aspirin regularly. The reductions increased by 8% per year as the duration of aspirin use increased. (Risch HA. Cancer Epidemiol Biomarkers Prev. 2017 Jan;26(1):68-74)
Why do I take full-dose (325 mg) rather than low-dose (81.25 mg) aspirin? Well, about 30 years ago I happened on a study from the UK, published I think in the British Medical Journal, that reported that in patients who could tolerate the full aspirin dose, the heart-protective effects were superior to those associated with the low-dose version. (I think that at that time at least, full-dose aspirin over there was 400 mg and low-dose was 100 mg, but I’m not 100% sure.) So I asked my excellent primary care physician (who is also a cardiologist), and he agreed that I could go with the full-dose version so long as we kept a close watch on any kind of GI bleeding. Which we did. Fingers crossed, all well so far.
Should CRISPR be used to create “designer babies,” now or ever?
I did not state the question that way in order to elicit a loud and definitive “No!” The queasy feeling that the term “designer babies” brings out is natural. We do not want Planet Earth to be taken over by a race of replicants, immune to the failings of mankind. On the other hand, if gene editing could safely prevent the transmission of damaging genetic material from parents to children, what would be wrong with that?
What I’m referring to here, as certainly you all know, is the recent and widely publicized case of the Chinese scientist who reported that he had used CRISPR to disable a gene, identified as CCR5, in embryos created from the sperm of an HIV-infected father. CCR5 is the gene that assists HIV in entering cells. The edited embryos were then implanted in the mother. Subsequently, the mother gave birth to twins who would not only not contract their father’s HIV, but would supposedly be immune to HIV for the rest of their lives. Is this not a good thing?
The prevailing view seems to be that it is not a good thing. The scientist, He Jiankui, is a physicist, not a physician. My personal guess is that he pulled off this stunt mostly to show he could do it, rather than out of a motivation to protect the children who would otherwise be at risk for HIV. To begin with, as has been pointed out, there were other ways to make certain that HIV would not be passed on to the offspring – CRISPR was far from the only option. But much more important is the possibility that the editing altered more than the CCR5 gene, or that the CCR5 gene has unknown functions other than permitting the entry of HIV. The human genome has about six billion units, and we do not begin to know what happens if we tinker with any part of the genome. It is conceivable that every single cell in the body could be altered by editing the genome. We have about 37 trillion cells, and at this point there is no conceivable way of tracking what’s happening to every single cell, or what unforeseen consequences that might bring.
Many eminent scientists have expressed concern not only about the deed itself – unnecessarily messing with the genome of human beings – but also about the outpouring of strongly negative public response. The worry is that research in editing the human genome will be totally shut down. They point out that if it were possible, by removing snippets of genetic material, to make people immune to some diseases which at present have no cure, that would indeed be a worthwhile goal. One of the diseases mentioned is amyotrophic lateral sclerosis (ALS), commonly known as Lou Gehrig’s disease. In a certain percentage of individuals with ALS, the disease is inherited; if a child has one parent with ALS, the chances that the child will eventually develop the disease are about 50%. Unlike HIV, ALS is not an infection, but a fatal in-dwelling defect that eventually affects most of a person’s physiologic functions. There are numerous such genetic diseases, any of which might be able to be prevented through manipulation of genetic material.
It has also been pointed out that the revulsion that many people express at the very idea of tinkering with genetic material is somewhat misplaced. Genetic material is constantly being changed, not deliberately by using tools such as CRISPR, but as the result of mutations, which occur constantly. For example, mutations in mitochondrial DNA happen frequently, and mitochondrial DNA is passed on to children by their mothers. In some cases, these children are affected by diseases related to defective mitochondrial DNA, with consequences such as blindness and early death. But in some cases, the mutations in the mother’s mitochondrial DNA may actually lead to genetic advantages in the children.
It should be more generally understood that, in fact, evolution takes place largely because of random mutations in genetic material. Most of them are of little consequence. Some of the mutations result in offspring that are less successful in the battle for survival. Those mutations eventually die out. But some mutations result in winners which survive and pass those characteristics on to their offspring. Thus did we humans evolve from sea slugs. We are all results of genetic modification.
A short PS
In response to a related controversy over the use of human fetal tissue in medical research, the Trump administration has ordered scientists at the NIH to stop procuring new human fetal tissue for research. The response from the Very Top was to dangle an offer of $20 million in research funding to find alternatives to human fetal tissue for studying disease. Caitlin Oakley, a spokesperson for HHS, had this to say: “We are a pro-life, pro-science Administration. This means that we understand and appreciate that medical research and the testing of new medical treatments using fetal tissue raises inherent moral and ethical issues.”
The opioid abuse dilemma: pain control versus more and more overdose deaths
It was only in the early 1990s that the number of opioid overdose deaths in the US topped 10,000 in a single year, but by 2017 the death toll had reached 70,236 people. In ten states, mostly in the eastern half of the country, there were more than 15 overdose fatalities per 100,000 residents. The leader in this sorry competition is West Virginia, with 58 overdose deaths per 100,000 residents, but my own state of Connecticut is not so far behind with 31 deaths per 100,000 residents.
A major fraction of those deaths involve fentanyl, which by this time we all know is an especially potent drug. For example, in Connecticut, of those 31 overdose deaths, 20 involved fentanyl; in Maine, 22 of 34; in New Hampshire 30 of 37.
In this context, I will report on a controversy within the FDA that has bearing on this question. In mid-October of this year, the FDA’s Anesthetic and Analgesic Drug Products Advisory Committee (AADPAC) voted 10 to 3 in favor of recommending approval of a fentanyl analogue, sufentanil. This drug is estimated to be 500 to 600 times more potent than morphine and about 5 to 10 times more potent than fentanyl itself. It is taken sublingually, in tablet form.
The chair of the AADPAC, Raeford Brown, Jr, MD, happened not to be present when the committee voted in favor of recommending approval, since he was attending a meeting of the American Society of Anesthesiologists in San Francisco at the time. But he wrote a letter to the FDA warning of the likelihood of diversion, abuse, and death if sufentanil were to be approved. He recommended that the FDA not approve the drug, despite the vote of AADPAC favoring approval. Dr Brown pointed out that the Drug Safety and Risk Management Advisory Committee (DSRMAC) had not been invited to participate in the AADPAC meeting at which sufentanil was evaluated.
The drug’s manufacturer, AcelRx Pharmaceuticals, has claimed that it has in place a Risk Evaluation and Mitigation Strategy (REMS) program which will be effective in reducing the risk of abuse. Arguing against the effectiveness of the REMS program, it was noted by Dr Brown and others that the very small tablets in which the drug is formulated make it especially subject to diversion, including by medical personnel.
Over the objections of Dr Brown and others, the FDA approved sufentanil on November 2, 2018, citing in particular the need for a potent oral analgesic, particularly for use in the armed forces. The trade name for sufentanil is Dsuvia. The FDA acknowledged the controversy around sufentanil, accompanying the notice of approval with an unusually long statement over the signature of FDA Commissioner Scott Gottlieb, a small excerpt of which I append here:
“To address concerns about the potential risks associated with Dsuvia, this product will have strong limitations on its use. It can’t be dispensed to patients for home use and should not be used for more than 72 hours. And it should only be administered by a health care provider using a single-dose applicator. That means it won’t be available at retail pharmacies for patients to take home. These measures to restrict the use of this product only within a supervised health care setting, and not for home use, are important steps to help prevent misuse and abuse of Dsuvia, as well reduce the potential for diversion. Because of the risks of addiction, abuse and misuse with opioids, Dsuvia is also to be reserved for use in patients for whom alternative pain treatment options have not been tolerated, or are not expected to be tolerated, where existing treatment options have not provided adequate analgesia, or where these alternatives are not expected to provide adequate analgesia.”
It is certainly the case that fentanyl and its analogs have legitimate medical uses, and that effective drugs for pain management should not be suppressed because some individuals misuse them, with dire consequences. Restricting the use of sufentanil to health professionals should, in all probability, keep the tablets themselves out of the hands of most would-be abusers. One of the ways that opioids get diverted is when a person presents at a “pain clinic” with complaints of severe untreated pain, and walks out with a prescription for an opioid, which he or she fills at the conveniently-situated “pharmacy” which is right next door. We can assume, or hope, that this will not happen with sufentanil.
However, the great majority of opioid abusers do not obtain them through regular medical or pharmaceutical channels. They come from the black market. They may be cooked up domestically or smuggled into the US. Diversion of FDA-approved pain medications into the hands of abusers or their facilitators is a small part of the overall problem.
What is not often discussed in this context is that, in fact, the rate of opioid prescribing is at an 18-year low. It has fallen every year since 2011 and is continuing to fall. Most of the individuals who abuse opioids experimented with other recreational drugs such as cocaine or amphetamines before taking up opioids as their recreational drug of choice. The great majority of opioid abusers did not initially use opioids for pain relief.
Currently, about 18 million Americans are taking opioids for long-term pain management. Many of the