Trying to Make Sense of Puzzling Health-Care News

Musings from Doc Gumshoe: Aspirin, ADHD, CRISPR and more...

By mjorrin, December 19, 2018

Let’s lead off with a news item that Doc Gumshoe, despite his widely-acknowledged acumen, could not manage to figure out in the least.   It took an Op-Ed in the New York Times to sweep the cobwebs out of his noggin.

The news item was this: children born in the month of August in a number of states in the US have the highest rates of attention deficit hyperactivity disorder, as compared with children born at other times of the year.   Specifically, children born in August were found to have a nearly 40% higher chance of being diagnosed and treated for ADHD than children born just one month later.   This was stated in the context of a general report on the diagnosis and prevalence of ADHD, which included such puzzling bits of data as that three times as many children in Kentucky receive that diagnosis as do children in Nevada.   One in five children in Kentucky is diagnosed with ADHD, and about 5% of all children in the US currently are taking some form of medication for ADHD.  

My initial response to this was that it had to be a diagnosis issue, and not a matter of the prevalence of the actual condition.   I could not imagine that what sign of the Zodiac kids were born under could have any effect on their ADHD, although perhaps there could be a difference based on the time of year in which the mother brought the infant to term.   Was it possible that kids brought to term in the hot summer months might have different developmental characteristics than those who rested in their mothers’ wombs until the cozy winter months?

The puzzle was more or less cleared up by the aforementioned Op-Ed, which in turn referred to a paper in the New England Journal of Medicine.   (Layton TJ. N Engl J Med. 2018 Nov 29;379(22):2122-2130.)   The underlying reason for the peculiar prevalence of the ADHD diagnosis in August-born children is almost certainly that in 18 states, in order for a child to be enrolled in kindergarten for the school year, a child has to reach his or her fifth birthday before September 1st.   So if the birthday of the kid in question is August 30th, he/she gets into kindergarten right away, but if it’s September 1st or after, he/she has to “wait ‘til next year.”   The result of this rule is that the kids born in August are the youngest in the class.   And, as a result of being the youngest, they also act the youngest – a bit more inattentive and fidgety.   A few months make a lot of difference in the development of a young child.   Parents are well aware of this; however, teachers – who were largely responsible for labeling these kids as being affected by ADHD – are more likely to have equal expectations for all the younglings in their classroom and tag the younger ones with ADHD when they’re really only acting their age.

All of this needs to be considered in the context of what many experts consider to be the significant overdiagnosis of ADHD, which has serious potential consequences when there are attempts to treat the condition through drugs.   ADHD drugs need to be considered with great care before prescribing them for children, especially very young children, who might well continue using these drugs for long periods.   A misdiagnosis, based on an age difference between a kid and his peers, could have enormous consequences.           

The aspirin paradox: which way to jump?

I preface this discussion by stating that I take a daily full-dose (325 mg) aspirin.   I’ve been doing it for about 30 years, and I’m not going to quit.   But the contradictory news items are nonetheless of concern.

Most recent was the report of the ASPREE trial, conducted in Australia and the US.   (McNeil JJ. N Engl J Med. 2018 Oct 18;379(16):1519-1528.)   This was a placebo-controlled trial, comparing 100 mg enteric-coated aspirin with placebo in 19,114 healthy older individuals, mean age 74 years, for a period of five years.   The short take is that daily low-dose aspirin did not result in any benefit in terms of the primary combined endpoint, which was death, dementia, or persistent physical disability.   The event rate in the placebo cohort was a tiny bit lower than in the group taking the daily aspirin – 21.2 events per 1,000 person-years in the placebo group versus 21.5 per 1,000 person-years in the aspirin group.

Particularly attention-getting in this trial was the reported failure of aspirin to bring about a significant reduction in the risk of cardiovascular disease.   The aspirin treated cohort experienced 10.7 cardiovascular events per 1,000 person-years, while the placebo group experienced 11.3 events per 1,000 person years, a non-significant difference.
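For readers who like to check the arithmetic, here is a minimal sketch, using only the per-1,000-person-year rates quoted above.   The variable names and layout are mine, not the trial’s; the rates are as reported.

```python
# Event rates reported by ASPREE, per 1,000 person-years.
ASPREE_RATES = {
    "primary endpoint":      {"aspirin": 21.5, "placebo": 21.2},
    "cardiovascular events": {"aspirin": 10.7, "placebo": 11.3},
}

def relative_difference(aspirin_rate, placebo_rate):
    """Relative change in the aspirin group versus placebo (negative = fewer events)."""
    return (aspirin_rate - placebo_rate) / placebo_rate

for endpoint, rates in ASPREE_RATES.items():
    rel = relative_difference(rates["aspirin"], rates["placebo"])
    print(f"{endpoint}: {rel:+.1%} with aspirin versus placebo")
```

On these numbers, aspirin was associated with about 1.4% more primary-endpoint events and about 5.3% fewer cardiovascular events, and neither difference reached statistical significance in the trial.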

As expected, major hemorrhages and gastrointestinal bleeding occurred somewhat more frequently in the aspirin-treated group than among those subjects taking the placebo.

In contrast, a large cohort study in Sweden found that stopping low-dose aspirin led to a rapid and significant increase in the risk of heart attacks and strokes.   (Sundström J.   Circulation. 2017 Sep 26;136(13):1183-1192).     The study followed 601,527 Swedish adults, age 40 years or more, who were prescribed low-dose aspirin, for a period of three years.   In Sweden, we note, aspirin is given by prescription and not over-the-counter as in the US.   At the start of the study, all study participants were taking aspirin, but during the three-year course of the study about 15% of the participants stopped taking aspirin.   

Overall, 62,690 heart attacks, strokes, or deaths from cardiovascular causes took place in the persons followed by the study.   The rate of these events was 37% higher in the subjects who discontinued aspirin than in those who stayed with the treatment plan.   This corresponded to one additional cardiovascular event observed per year in one of every 74 patients who quit taking aspirin.   The study authors note that the risk increased shortly after discontinuation of aspirin and did not appear to diminish over time.   In the group of patients who had already sustained a cardiovascular event and had been prescribed aspirin in the hopes of avoiding a repeated incident, there was one additional cardiovascular event for every 36 patients who dropped their aspirin regimen.
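Those two figures (a 37% higher rate, and one extra event per year for every 74 quitters) are consistent with each other, and we can back out the baseline event rate they imply.   To be clear, the baseline rates computed below are my own back-of-the-envelope inference, not numbers from the paper:

```python
# "One additional event per year per 74 patients who quit" implies an absolute
# risk increase of 1/74 per patient-year; "per 36 patients" applies to the
# secondary-prevention subgroup.  Both excesses reflect a 37% relative increase.
relative_increase = 0.37  # 37% higher event rate after stopping aspirin

def implied_baseline_rate(n_needed_to_harm):
    """Baseline events per patient-year implied by the excess risk and the 37% figure."""
    excess_risk_per_year = 1 / n_needed_to_harm
    return excess_risk_per_year / relative_increase

print(f"whole cohort: {implied_baseline_rate(74) * 1000:.0f} events per 1,000 patient-years")
print(f"secondary prevention: {implied_baseline_rate(36) * 1000:.0f} events per 1,000 patient-years")
```

The implied baseline, roughly 37 events per 1,000 patient-years in the whole cohort and roughly double that among patients on aspirin for secondary prevention, fits the picture of a sicker, higher-risk population than the healthy elderly subjects of ASPREE.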

A spokesperson for the American Heart Association, Dr Nieca Goldberg, agreed, commenting that “once you stop aspirin, the blood’s clotting tendency goes up.”   And, as we know, blood clots are a principal cause of strokes and are also involved in myocardial infarctions.   

What might account for the difference in the results of these two studies?   In the Australia/US study, daily aspirin delivered no benefit, while in the Swedish study, individuals who stopped taking aspirin quickly experienced a significant uptick in the rate of serious cardiovascular events, strongly supporting the conclusion that aspirin did result in cardiovascular benefit.   How can that be?

If we look at the patient populations in those two studies, clear differences emerge, and these differences probably account for the differences in the study outcomes.   The Australia/US study was a prospective placebo-controlled study in older patients who were in good health at the start of the study.   It was specifically focused on the question whether daily aspirin was effective in primary prevention – that is, in prevention of the primary occurrence of disease.   The Swedish study was observational, in a much larger group of much younger patients.   We don’t know anything about the health status of the patients in the Swedish study, but we do know that they were prescribed aspirin.   It’s highly likely that in many of those patients aspirin was intended for secondary prevention, meaning that many patients had either already experienced a stroke or myocardial infarction or were at elevated risk for some kind of cardiovascular event.   

And while we’re on the subject of aspirin, two studies were recently published in JAMA Oncology.   (Barnard ME. JAMA Oncol. 2018 Oct. doi:10.1001/jamaoncol.2018.4149; Simon TG. JAMA Oncol. 2018 Oct. doi:10.1001/jamaoncol.2018.4154.)   Both were based on analysis of follow-up data from the Nurses’ Health Studies, which ran from 1980 to 2015.   One study found that women who regularly took low-dose aspirin had a 23% reduction in ovarian cancer as compared with aspirin non-users.   The other study reported a 49% reduction in the incidence of cancer of the liver in women who took regular (not low-dose) aspirin twice-weekly.   This follows a 2017 study, published in Cancer Epidemiology, Biomarkers and Prevention, which reported an overall 46% reduction in pancreatic cancer in men and women who took low-dose aspirin regularly.   The reductions increased by 8% per year as the duration of aspirin use increased.   (Risch HA. Cancer Epidemiol Biomarkers Prev. 2017 Jan;26(1):68-74.)

Why do I take full-dose (325 mg) rather than low-dose (81.25 mg) aspirin?   Well, about 30 years ago I happened on a study from the UK, published I think in the British Medical Journal, that reported that in patients who could tolerate the full aspirin dose, the heart-protective effects were superior to those associated with the low-dose version.   (I think that at that time at least, full-dose aspirin over there was 400 mg and low-dose was 100 mg, but I’m not 100% sure.)   So I asked my excellent primary care physician (who is also a cardiologist), and he agreed that I could go with the full-dose version so long as we kept a close watch on any kind of GI bleeding.   Which we did.   Fingers crossed, all well so far.

Should CRISPR be used to create “designer babies,” now or ever?

I did not state the question that way in order to elicit a loud and definitive “No!”   The queasy feeling that the term “designer babies” brings out is natural.   We do not want Planet Earth to be taken over by a race of replicants, immune to the failings of mankind.   On the other hand, if gene editing could safely prevent the transmission of damaging genetic material from parents to children, what would be wrong with that?

What I’m referring to here, as certainly you all know, is the recent and widely publicized case of the Chinese scientist who reported that he had used CRISPR to disable a gene, identified as CCR5, in embryos created by fertilizing the mother’s eggs with sperm from the HIV-infected father.   CCR5 is the gene that assists HIV in entering cells.   The edited embryos were then implanted in the mother, who subsequently gave birth to twins who, supposedly, will not only not acquire their father’s HIV, but be resistant to HIV for the rest of their lives.   Is this not a good thing?

The prevailing view seems to be that it is not a good thing.   The scientist, He Jiankui, is a biophysicist, not a physician.   My personal guess is that he pulled off this stunt mostly to show he could do it, rather than based on the motivation of protecting the children who would otherwise be at risk for HIV.   To begin with, as has been pointed out, there were other ways to make certain that HIV would not be passed on to the offspring – CRISPR was far from the only option.   But much more important is the possibility of unintentionally altering more than the CCR5 gene, or the possibility that the CCR5 gene has unknown functions other than permitting the entrance of HIV.   The human genome has about six billion units, and we do not begin to know what happens if we tinker with any part of the genome.   It is conceivable that every single cell in the body could be altered by editing the genome.   We have about 37 trillion cells, and at this point there is no conceivable way of tracking what’s happening to every single cell, or what unforeseen consequences that might bring.

Many eminent scientists have expressed concern not only about the deed itself – unnecessarily messing with the genome of human beings – but about the outpouring of strongly negative public response.   The worry is that research in editing the human genome will be totally shut down.   They point out that if it were possible, by removing snippets of genetic material, to make people immune to some diseases which at present have no cure, that would indeed be a worthwhile goal.   One of the diseases mentioned is amyotrophic lateral sclerosis (ALS), commonly known as Lou Gehrig’s disease.   In a certain percentage of individuals with ALS, the disease is inherited; if a child has one parent with the inherited form of ALS, the chances that the child will eventually develop the disease are about 50%.   Unlike HIV, ALS is not an infection, but a fatal in-dwelling defect that eventually affects most of a person’s physiologic functions.   There are numerous such genetic diseases, any of which might be able to be prevented through manipulation of genetic material.

It has also been pointed out that the revulsion that many people express at the very idea of tinkering with genetic material is somewhat misplaced.   Genetic material is constantly being changed, not deliberately by using tools such as CRISPR, but as the result of mutations, which occur constantly.   For example, mutations in mitochondrial DNA happen frequently, and mitochondrial DNA is passed on to children by their mothers.   In some cases, these children are affected by diseases related to defective mitochondrial DNA, with consequences such as blindness and early death.   But in some cases, the mutations in the mother’s mitochondrial DNA may actually lead to genetic advantages in the children.    

It should be more generally understood that, in fact, evolution takes place largely because of random mutations in genetic material.   Most of them are of little consequence.   Some of the mutations result in offspring that are less successful in the battle for survival.   Those mutations eventually die out.   But some mutations result in winners which survive and pass those characteristics on to their offspring.   Thus did we humans evolve from sea slugs.   We are all results of genetic modification.

A short PS

In response to the continuing controversy over the use of human fetal tissue in research, the Trump administration has ordered scientists at the NIH to stop procuring new human fetal tissue for research.   The response from the Very Top was to dangle an offer of $20 million in research funding to find alternatives to human fetal tissue for studying disease.   Caitlin Oakley, a spokesperson for HHS, had this to say: “We are a pro-life, pro-science Administration. This means that we understand and appreciate that medical research and the testing of new medical treatments using fetal tissue raises inherent moral and ethical issues.”

The opioid abuse dilemma: pain control versus more and more overdose deaths 

It was only in the early 1990s that the number of drug overdose deaths in the US topped 10,000 in a single year, but by 2017 the annual death toll had reached 70,236 people, the majority of those deaths involving opioids.   In ten states, mostly in the eastern half of the country, there were more than 15 overdose fatalities per 100,000 residents.   The leader in this sorry competition is West Virginia, with 58 overdose deaths per 100,000 residents, but my own state of Connecticut is not so far behind with 31 deaths per 100,000 residents.

A major fraction of those deaths involve fentanyl, which by this time we all know is an especially potent drug.   For example, in Connecticut, of those 31 overdose deaths per 100,000 residents, 20 involved fentanyl; in Maine, 22 of 34; in New Hampshire, 30 of 37.

In this context, I will report on a controversy within the FDA that has bearing on this question.   In mid-October of this year, the FDA’s Anesthetic and Analgesic Drug Products Advisory Committee (AADPAC) voted 10 to 3 in favor of recommending approval of a fentanyl analogue, sufentanil.   This drug is estimated to be 500 to 600 times more potent than morphine and about 5 to 10 times more potent than fentanyl itself.   It is taken sublingually, in tablet form.
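To put those potency multiples in perspective, here is a rough morphine-equivalence sketch.   The 30-microgram tablet size is an assumption on my part for illustration, and single-multiplier conversions like this are a crude simplification of real equianalgesic dosing:

```python
# Approximate potency relative to morphine, per the multiples quoted above.
# Fentanyl's multiple (commonly cited as roughly 100x) is consistent with
# sufentanil being 500-600x morphine and 5-10x fentanyl.
POTENCY_VS_MORPHINE = {
    "morphine": 1,
    "fentanyl": 100,
    "sufentanil": 500,  # low end of the 500-600x range
}

def morphine_equivalent_mg(drug, dose_mg):
    """Express a dose as milligrams of morphine, using the table above."""
    return dose_mg * POTENCY_VS_MORPHINE[drug]

tablet_mg = 0.030  # hypothetical 30-microgram sublingual tablet
equiv = morphine_equivalent_mg("sufentanil", tablet_mg)
print(f"{tablet_mg * 1000:.0f} micrograms of sufentanil is roughly {equiv:.0f} mg of morphine")
```

In other words, a tablet small enough to lose in a coat pocket can carry the punch of a substantial morphine dose, which is exactly why diversion worried the critics of approval.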

The chair of the AADPAC, Raeford Brown, Jr, MD, happened not to have been present at the meeting at which the committee voted in favor of recommending approval, since he was attending a meeting of the American Society of Anesthesiologists in San Francisco at the time.   But he wrote a letter to the FDA warning of the likelihood of diversion, abuse, and death if sufentanil were to be approved.   He recommended that the FDA not approve the drug, despite the vote of AADPAC favoring approval.   Dr Brown pointed out that the Drug Safety and Risk Management Advisory Committee (DSRMAC) had not been invited to participate in the AADPAC meeting to evaluate sufentanil.

The drug’s manufacturer, AcelRx Pharmaceuticals, has claimed that it has in place a Risk Evaluation and Mitigation Strategy (REMS) program which will be effective in reducing the risk of abuse.   Arguing against the effectiveness of the REMS program, it was noted by Dr Brown and others that the very small tablets in which the drug is formulated make it especially subject to diversion, including by medical personnel.

Over the objections of Dr Brown and others, the FDA approved sufentanil on November 2, 2018, citing in particular the need for a potent oral analgesic for use in the armed forces.   The trade name for sufentanil is Dsuvia.   The FDA acknowledged the controversy around sufentanil, accompanying the notice of approval with an unusually long statement over the signature of Scott Gottlieb, a small excerpt of which I append here:

“To address concerns about the potential risks associated with Dsuvia, this product will have strong limitations on its use. It can’t be dispensed to patients for home use and should not be used for more than 72 hours. And it should only be administered by a health care provider using a single-dose applicator. That means it won’t be available at retail pharmacies for patients to take home. These measures to restrict the use of this product only within a supervised health care setting, and not for home use, are important steps to help prevent misuse and abuse of Dsuvia, as well reduce the potential for diversion. Because of the risks of addiction, abuse and misuse with opioids, Dsuvia is also to be reserved for use in patients for whom alternative pain treatment options have not been tolerated, or are not expected to be tolerated, where existing treatment options have not provided adequate analgesia, or where these alternatives are not expected to provide adequate analgesia.”

It is certainly the case that fentanyl and its analogs have legitimate medical uses, and that effective drugs for pain management should not be suppressed because some individuals misuse them, with dire consequences.   Restricting the use of sufentanil to health professionals should, in all probability, keep the tablets themselves out of the hands of most would-be abusers.   One of the ways that opioids get diverted is when a person presents at a “pain clinic” with complaints of severe untreated pain, and walks out with a prescription for an opioid, which he or she fills at the conveniently-situated “pharmacy” which is right next door.   We can assume, or hope, that this will not happen with sufentanil.   

However, the great majority of opioid abusers do not obtain them through regular medical or pharmaceutical channels.   They come from the black market.   They may be cooked up domestically or smuggled into the US.   Diversion of FDA-approved pain medications into the hands of abusers or their facilitators is a small part of the overall problem.

What is not often discussed in this context is that, in fact, the rate of opioid prescribing is at an 18-year low.   It has fallen every year since 2011 and is continuing to fall.   Most of the individuals who abuse opioids experimented with other recreational drugs such as cocaine or amphetamines before taking up opioids as their recreational drug of choice.   The great majority of opioid abusers did not initially use opioids for pain relief.

Currently, about 18 million Americans are taking opioids for long-term pain management.   Many of these individuals are experiencing difficulties in obtaining their opioid prescriptions.   Physicians may be reluctant to prescribe opioids because of the chance of regulatory scrutiny – they emphatically do not want to be identified as “over-prescribers.”   In some cases, long-term pain patients find that they are being “tapered to zero,” whether or not their pain levels have receded, mostly so that their health-care providers can claim to be doing their part in the battle to combat addiction and overdoses.   Insurers and pharmacy chains also institute measures ostensibly to combat diversion and misuse of opioids.   The result of these initiatives is to leave a considerable number of patients under-treated for their pain.

… and another “something” that might be relevant

I happened to come across a rather muddled piece in the Journal of Pain (Meints SM. J Pain. 2016;17(6):642-653) entitled “Differences in Pain Coping between Black and White Americans: A Meta-Analysis.”   Without going into needless detail, the thesis was that black Americans engage much more in pain-coping strategies than white Americans.   The most frequently employed strategy was called “hoping and praying,” which I’m willing to accept as a strategy, regardless of its efficacy.   The complete list of coping strategies was: hoping/praying; catastrophizing; diverting attention; coping self-statements; reinterpreting pain; ignoring; increasing behavioral activity; and exercising/stretching.   The paper did not go into detail explaining what these coping strategies consisted of.   Some are fairly obvious, such as ignoring.   As for the others, I’m in the dark.   As for the underlying reasons for the difference, here are a couple of sentences from the conclusion:

“Furthermore, higher levels of pain in black individuals may be confounded with the race differences observed in pain coping.   Race differences in coping may be primarily driven by differences in culture, for which race serves as a frequently measured but imprecise proxy. Indeed, Robbins and colleagues suggest that genetically identified ancestral differences account for a small fraction of the variation in pain between white and black individuals.”

What’s this about race differences in coping primarily being driven by differences in culture?   Elsewhere in the paper there was a reference to the importance of religion in the lives of black Americans; thus greater reliance on “praying/hoping.”   But if I’m reading it right, the implication is that black Americans actually experience higher levels of pain than white Americans.   But how in the world would they measure that?   The doctor – or more likely, the nurse – asks you to rate your pain on a scale of 1 to 10, where 10 is the worst pain imaginable.   Feed all that into the computer and I don’t want to use the word that describes what comes out.

But I do have a thought.   Is it possible that black Americans just happen to get less than adequate pain medication?   Is it possible that they are more frequently treated by health-care providers who are reluctant to put sufficiently powerful opioids in their hands?   It is certainly the case that individuals with similar pain levels receive widely differing pain medications.   

Here’s a story: a good friend of ours, who happens not to be African-American, had knee replacement surgery a few months after I did.   The surgery was performed at a highly respected New York City hospital, and by all accounts the procedure went well.   But our friend was in extreme pain after the surgery, and the pain continued for several days.   She lay in her hospital bed, weeping and whimpering with pain.   Is it possible that she is more sensitive to pain than I am?   Certainly.   But the chief difference between her experience and mine was that I received multiple doses of oxycodone, several times a day – on waking up in the morning, prior to going to each of my two daily physical therapy sessions, and last thing at night before going to sleep.   I definitely experienced severe pain, such as when the physical therapist was using all of his considerable strength to try to increase the flexion and extension of the leg with my new knee, but aside from that my pain level was mostly tolerable.   

Our friend, on the other hand, was being treated with hydromorphone – which, milligram for milligram, is actually more potent than oxycodone, though her doses were infrequent – and she had to plead with the nursing staff to get each next dose.   It is possible that she was judged by her physician to be more prone to becoming addicted to her pain medication.   It is also possible – and in my view, much more likely – that her physician was much more reluctant than my physician to prescribe an adequate opioid regimen lest he be characterized as an “enabler” of drug abuse.

In any case, the net result is that our friend suffered, while I did not.   I can certainly see that she would try to employ “pain coping” mechanisms to relieve her suffering, but this was not due to “cultural” differences.   Like many other individuals who need pain medication, she took the rap for the misdeeds of others.

A bit more on that thorny question

I see that I immediately lapsed into “blame the victim” mode –  as in “it’s all the fault of those evil drug fiends.”   So let me step back and give this some sober consideration.

The fact is that there does exist a “culture of addiction,” and in some social situations it could be really difficult not to try a little hit of something that might be fun.   And addiction is real.   I can say with some assurance that drug addicts do not start out with the intention of becoming addicted.   But is it fair or reasonable to view every individual who is receiving a powerful drug for the treatment of pain as a potential addict?   That’s like putting people who enjoy a glass or two of wine with their dinner in the same class as alcoholics who knock back a fifth of booze at a sitting.   It’s that kind of thinking that brought us to Prohibition.   And Prohibition did absolutely nothing to reduce the level of pathological alcoholism.

In fact, the effects of Prohibition were curiously similar to those of our current tactics to reduce drug abuse.   During Prohibition, it was easy to get hold of alcoholic beverages.   There were immense numbers of bootleggers, and these chaps made lots and lots of money.   Prohibition was a huge boon to the bootleggers.   Some of the bootleg hooch was on the up-and-up – name-brand whiskeys smuggled in from Canada, rum from Cuba, and for the wealthy, Bordeaux and Cognac.   But quite a bit of the stuff was phony.   Not infrequently, it was adulterated with wood alcohol, which is highly toxic.   (The alcohol in alcoholic beverages is ethyl alcohol; wood alcohol is methyl alcohol – mixed with acetone, it’s good for cleaning paintbrushes, but definitely not for drinking.)   So drinkers of bootleg hooch did not fare well.   Blindness and death were not infrequent consequences.

Does this remind you of anything?   A good deal of the “drugs of abuse” are also not on the up-and-up.   Addicts do not knowingly choose drugs that have been laced with fentanyl – they think they’re getting something analogous to one of the more powerful prescription opioids.   But the drug dealers are peddling something that they’re saying will deliver a powerful fix.   Adding a bit of fentanyl in the mix makes their dope that much more potent.   So what if their hands slip from time to time and the fentanyl dose crosses the red line between a thrill and a death sentence?   They lose a few customers, and that’s about it.   

Just as Prohibition did nothing to reduce alcoholism in the US, it doesn’t seem as though the measures taken by our government to combat drug addiction have done much.   As you saw above, the wave of overdose fatalities is mounting.   I’m sure that there are those who would say that without those restrictions and measures, there would be even more overdoses.   But it seems to me that there has to be a better way of dealing with this mess.   Should prescription-grade opioids be legalized and dispensed under some type of supervision?      

Doc Gumshoe, for a change, is unsure of where he stands on this.   It could be something like the opium dens that were common throughout Asia and in major European and American cities in the 19th century.   Probably if the most powerful narcotics were given in settings like those opium dens and administered by well-trained attendants, there would be a marked reduction in overdoses and deaths from that cause.   But there would likely be an increase in dependency.   There would come to be a cohort of individuals for whom life consisted of little besides the hours spent in these opioid dispensaries.   They might be spared from overdoses, but is that really living?

This might be one of those problems that aren’t susceptible to a solution imposed from the top.   Each of us needs to be wary of the problem and resolutely steer away from the dangers.

* * * * * * * *

Being a kindly sort, Doc Gumshoe would like to get something cheerful to your in-box around this time of year, and a likely candidate has just popped up.   It’s the kind of thing that might have attracted the attention of the miracle-cure-mongers, but as far as I can tell up to this point it’s reasonably legitimate.   

As for my homily above, don’t be stingy with comments.   I relish them all.   Best to all, Michael Jorrin (aka Doc Gumshoe)

[ed note: Michael Jorrin, who we like to call “Doc Gumshoe,” is a longtime medical writer (not a doctor) who shares his mostly non-investing-related thoughts with the Gumshoe community a couple times a month. You can see all of his columns here.]


13 Comments on "Trying to Make Sense of Puzzling Health-Care News"


Tom M

Good information. I do not accept any study unless I know who is behind it. Ever wonder why over 50% of these studies are never published? Will they ever do a study that shows that IP-6 can cure liver cancer? Vitamins and minerals are thrown to the wayside because they do not generate $billions in profits. Which do we need to live healthy? Vitamins and minerals or Big pharma drugs? Ask your body…it knows the truth.


Hydromorphone (Dilaudid) is more potent than Oxycodone (Percocet et al). The lowest dose of Dilaudid is roughly equal to a Percocet 5 or 10mg.

Jonathan Dean

The point that relatively few addicts are now people who began with prescription drugs is a key one.
I think your proposed solution is worth pretty significant thought, though of course inconsistent with our puritanical traditions.

Post surgery I was only given a few hydromorphone which did NOTHING. I writhed in pain like an animal, whimpering, crying, and ultimately screaming in agony. I came close, very close, to ending my own life. Was this necessary? What did the doctor achieve by denying me? I’m in my 60’s, never had a drug problem, and yet the doctor felt the need to prescribe a dose unfit for an animal. My death would have been on the doctor, the CDC, and the fringe group PROP, who want to take all pain medication away. I developed CRPS, the most painful… Read more »
Thanks MJ, this is the real reason I read stockgumshoe. Having worked in the “health care” profession for decades and now thankfully retired, I have a couple comments. The aspirin debate, just like the estrogen debate for women, has been going on forever it seems. Every few years the theories change as to the efficacy of said meds. As a doctor, people EXPECT you to do something when they come to see you…so you prescribe something, even if it’s innocuous, in essence a placebo. With that caveat aside, regarding the studies that show if you suddenly stop taking ASA that… Read more »
Jon DoeFour

Remember when you could buy Sudafed right from the decongestant shelf at your local store? Then, to stop the meth heads from buying hundreds of bottles at a time, they moved pseudoephedrine behind the pharmacy counter. Yah, that worked, didn’t it. Just made it difficult for those that need the medicine.

MJ (aka DG,) I always enjoy your analyses. One minor note: You say ” evolution takes place largely because of random mutations in genetic material.” This isn’t quite true. Because all species are highly evolved and adapted to their environment, random mutations are maladaptive more than 99.9% of the time. When I studied these things looong ago (degreed in neurobiology in ’72,) the common analogy was that of shooting a bullet into the engine of a car. Theoretically, the bullet could graze the carburetor adjusting screw and make the car run better. But clearly, it was far more likely that… Read more »
John Lincks

The born in August ADHD example could have been used by Malcolm Gladwell in his book “Outliers”.

Roger Bryenton

Here is an interesting thought on ADHD – children born in August were conceived in November. As it takes at least one full month to identify pregnancy, and possibly two months, that puts Christmas and New Years revelry into the equation. As most (many?) adults imbibe (and/or smoke) during festivities, the fetuses would be subject to all kinds of chemical changes, including their first drunkenness and hangover! What does the nicotine do? (Is the ADHD demographically related, ie lower income, lower nutrition, etc as well?) Does this lead to ADHD? Possibly?


Unravelling mystery of how, when DNA replicates
Have an awesome 2019 and beyond…


Bacteria found in ancient Irish soil halts growth of superbugs—new hope for tackling antibiotic resistance
