
Why doctors’ advice is so often wrong, and how it affects us

Tiffany McLeod followed the advice to the letter. She has food allergies and worried that her children might too. Her doctor recommended she avoid eating nuts while pregnant or breastfeeding, and keep the kids away from them until the age of three.

“You want to do what’s best for your child,” she says. “And you figure that your doctor knows what that is.”

Her doctor was following American Academy of Pediatrics (AAP) guidelines issued in 2000. But by 2008, the AAP had backed off this recommendation. Then last year, it reversed course entirely: a large study had found that regular exposure to peanuts from four months of age reduces the risk of allergy by about 80 per cent. McLeod, who lives in Texas, in the United States, had both of her babies in the years between the changing advice. She learned the hard way that her youngest has a life-threatening allergy.

“We had to rush her to the emergency room. It was extremely scary,” she says.

It would be comforting to think the drastic change in advice on peanut allergy is unique. But this type of medical about-face isn’t rare. A recent analysis of research published in one medical journal over 10 years identified a whopping 146 such reversals. To be clear, this is not just the process of updating advice as better evidence comes in. These are practices that became routine before we learned they didn’t actually work – and, worse, before we knew if they could cause harm.


In the era of evidence-based medicine you might assume most of your doctor’s advice is based on information obtained through rigorous testing. But it is becoming clear that is often far from the case. Fortunately, people are now shining a light on the problem, and devising ways to fix it.

They have their work cut out. In recent years official advice has reversed course on everything from broad issues such as diet and nutrition, to specific techniques like using stents to keep open the narrowed arteries of people with heart disease. There have also been dramatic U-turns on cancer screening and other major public health concerns.

For many of us, this isn’t an abstract worry.

My mother was given hormone replacement therapy (HRT) in the 1990s to help ease her way through menopause. That was before it was found to increase the risk of heart attack, stroke and breast cancer. Nine years later, when she was diagnosed with bilateral breast cancer, and endured a gruelling year of surgery and treatment, we were left wondering if, or how much, HRT had contributed to her disease.

“The medical journals are filled with interesting ideas that get tested and fail. That’s science,” says Adam Cifu, at the University of Chicago, who co-wrote a book called Ending Medical Reversal with Vinay Prasad. “The problem is when that new technology or treatment or surgery has actually gotten out and is being given to millions of people before it’s found to not work.”


How does this happen? In the late ’90s, peanut allergies were on the rise in the US and Britain. At the time, the best theory about the cause was that feeding allergenic proteins to infants before the intestinal lining was mature allowed them to seep into the bloodstream. The baby then built up antibodies to these substances and later became allergic to foods containing them.

This hunch was consistent with assumptions about how allergens affect the immune system, and studies showing few peanut allergies in countries where people don’t eat many peanuts. So steering clear of the potential allergens until children were older made sense – in theory.

“Much of what we do in medicine is theory-based. It’s only relatively recently that good quality evidence has been available for many things,” says Virginia Moyer, who develops guidelines for the American Board of Pediatrics and is a long-time campaigner for evidence-based practice. “Peanut allergy is life-threatening – we were doing the best we could with the knowledge we had at the time.”

But no data backed up that theory. And now it seems just the opposite is true: exposure to peanut protein while the immune system is immature actually decreases the likelihood of developing an allergy.

“We spend so much time training people first and foremost in how the body works and how it breaks. So we get why things should work, and then we tend to adopt things because they should work before we know if they actually do,” says Cifu.

It’s no small concern. An analysis by BMJ Clinical Evidence of 3,000 common medical practices categorised half as having “unknown effectiveness”, and 3 per cent as likely to be ineffective or harmful. Just a third were found to be “beneficial” or “likely to be beneficial”.

A major problem is that we often do the studies only after practices are widely adopted. There is a well-trodden path to developing new treatments, says David Jones, a medical historian at Harvard University. Someone gets some promising early results, and a lot of people get enthusiastic about the innovation and climb on board. “Then it is successfully marketed to a willing audience of patients who are generally dissatisfied with existing treatments.”

Eventually, concerns surface and clinical trials are done. By then, though, the horse is out of the barn. People want innovation and ready access to new and better treatments. But, as Jones says, “It leaves open the door that you’ll get a lot of enthusiasm coming from small, poorly designed studies that drive unwarranted use of a new procedure before it has been fully validated.”

It’s tough to get the horse back in the barn. For instance, a treatment known as vertebroplasty, which involves injecting medical cement into broken vertebrae, is widely used for people with spinal fractures related to osteoporosis. In spite of randomised trials showing that it is no better than a placebo intervention, the practice is still used, even at Massachusetts General Hospital, one of the top-ranked hospitals in the US.

“Once a treatment has been dubbed ‘standard of care’ it tends to persist,” says Ted Kaptchuk, at Harvard Medical School, who studies the placebo effect. It isn’t that doctors are wilfully ignoring the evidence; it is likely they believe in what they’re doing. “The practitioners who perform vertebroplasty want to help people and probably continue to believe they are doing so,” he says.

Preventing untested practices from becoming standard care seems simple – just test them first. But even when studies are conducted early on, what they actually measure can be part of the problem. Because it’s simpler and faster, researchers often look at “surrogate outcomes”, not actual end points. So for instance, blood cholesterol levels are taken as a stand-in for the risk of heart attack or death.

For years, dieting and taking medication to keep a certain measure of blood sugar – glycated haemoglobin – below 7 per cent was recommended for people with type 2 diabetes. This was after a large study showed that diabetics with levels closest to those of non-diabetics had the best outcomes. But, in 2008, a team led by Hertzel Gerstein at McMaster University, in Ontario, Canada, discovered that those fighting to get their glycated haemoglobin below this threshold actually faced a higher risk of death.

“There comes along some piece of evidence that is definitive, so much better than the existing evidence, that you have to do a total 180 on something you once thought was the best way to go,” says Prasad, Cifu’s co-author and an oncologist at Oregon Health & Science University.

The calibre of evidence can depend on who is paying for it. There are the perennial issues that plague medical research: the pressure on researchers to publish new and impressive findings, and medical journals’ tendency to favour positive results over negative ones. But increasingly, the pharmaceutical and medical device industries – rather than public bodies – are funding clinical trials. Not only do they have vested interests in the initial outcomes needed to get drugs or devices approved, but once approval is granted, they have little incentive to do the expensive, large-scale studies that could upend their initial findings and hurt their bottom line.

The anti-inflammatory drug Vioxx is one example. The US Food and Drug Administration (FDA) approved it in 1999, but the manufacturer, Merck, was later accused of concealing risks discovered in early studies. By the time independent research showed that Vioxx increased the risk of heart attacks, 20 million people had prescriptions. It was withdrawn in 2004, but not before causing up to 140,000 preventable heart attacks. Merck pleaded guilty to criminal charges in 2011 and paid US$950 million in fines.

“There are cases where we just haven’t done the re­search we need to do before a practice gets adopted,” says Moyer. “But there are also a whole host of reasons why the evidence for certain medical practices might change – and continue to change – over time.”

Among them is the fact that you’re often aiming at a moving target.

“Diseases aren’t static – they can and do change over time,” says Gerstein. “Diabetes today is not the same as it was 50 years ago.” The number of people diagnosed with type 2 diabetes has more than doubled in the past two decades, and the age of onset has dropped. He says today’s type 2 diabetes is more likely to come with additional health issues such as cancer, kidney problems and heart disease, compared with 50 years ago. “We call it by the same name but it behaves differently,” says Gerstein.

So where do we go from here? It is understandable that patients and their advocates want new options as soon as possible, especially those who are seriously ill. At the same time, it makes sense to demand the highest quality of evidence for any treatment. Balancing these demands need not be impossible: it means providing early access to treatments while ensuring that we gather the data at the same time.

“The most important message is that we, as a society, need to keep pushing for large, well-designed research studies and keep on reassessing,” says Gerstein. “If we’re treating people the same way we did 30 years ago, then we’re probably not treating them right.”

Efforts are under way to ensure this happens. Medical speciality organisations, such as the American College of Physicians, put an expiration date on clinical guidelines and reassess them as new studies are published. Jones says many medical conferences now insist on some form of peer review before new practices can be presented. And as part of its safety surveillance programme, the FDA has a comprehensive adverse-event reporting system for drugs and devices.

In addition, researchers are trying to change the culture surrounding clinical trials to promote more transparency and utility. John Ioannidis at Stanford University is leading the push for medical journals to set more rigorous standards for the publication of results, and to have external groups monitor those standards. The aim is to highlight potential funding biases and stop flawed results from sneaking into everyday medicine.

There are also attempts to address the problem further upstream. It used to be that medical education was predominantly an exercise in memorisation. But now teaching critical thinking has become central to the curriculum for doctors and other health professionals, and there is a much greater focus on the need for evidence-based practice.

“I give a lecture to the medical students every year where I tell them the most important thing they need to know is that, one day, they’ll learn everything they learned in medical school is wrong. Some of it may even be considered malpractice,” says Gerstein. “What we currently believe today based on the evidence will change – so every doctor, as much as possible, has to keep up to date with new research because things can, and do, change all the time.”

It may be too much to ask for our doctors to follow every incremental change. But they should be willing to examine the benefits and drawbacks of the therapies they are offering, says Cifu.

“It should be OK for doctors to discuss options with each patient and say, ‘Look, I’m not completely convinced about this therapy because the data isn’t so good but it’s low risk and I think there’s a chance it could work for you and here’s why.’ That way patients understand they are taking a little bit of a chance but there are potential benefits.”

Patients can spur their doctors on as well. When a therapy or surgery is suggested, instead of immediately asking about side effects or cost, Cifu wants them to start at a more fundamental level. “The real questions are, how is this actually going to help me? Will this actually decrease my risk of having a heart attack?” He wants ordinary people to be empowered to ask about the evidence and possible alternatives.

It won’t be easy, and doctors can feel intimidated when their patients push back. But, as Cifu says, “That’s the job of a good physician – to answer those questions.”

10 cases where medical guidelines were reversed

Hormone replacement therapy
Advice: HRT for menopausal women.
Rationale: observational studies and animal trials suggested a protective effect on heart and bones.
Adoption: millions of prescriptions in the 1990s.
Reversal: in 2002, found to increase risk of breast cancer, heart disease and stroke. Largely discontinued, though later studies showed certain women may benefit.

Peanut allergy
Advice: withhold nuts from young children.
Rationale: for an immature immune system, exposure increases allergy risk.
Adoption: widespread in Western countries.
Reversal: a major trial found early exposure actually decreases allergy risk. New guidelines issued in 2015.

Cancer screening
Advice: routine early screening.
Rationale: early detection provides a chance to intercept disease.
Adoption: mammograms and the PSA test for prostate cancer became routine in the 1980s.
Reversal: early-stage cancers shown to not always develop further. The PSA test is now not recommended in the US, and the mammogram screening age was raised from 40 to 50.

Heart stents
Advice: stent placement for people with heart disease.
Rationale: used in cases of heart attack, so those with stable heart disease should benefit, too.
Adoption: commonplace by 2004.
Reversal: now shown to have no preventive effect in stable heart disease and may even cause harm. The practice remains common.

Surgery for osteoarthritis of the knee
Advice: surgical removal and smoothing of cartilage fragments.
Rationale: thought to reduce inflammation, improve motion and decrease pain.
Adoption: by 2002, 650,000 operations a year in the US.
Reversal: several trials found no benefit over physical therapy alone. Surgery still common, however.

Vertebroplasty
Advice: inject medical cement to fix fractured vertebrae.
Rationale: thought to improve spine stability and reduce pain.
Adoption: by 2009, 750,000 operations a year in the US.
Reversal: the procedure has repeatedly been shown to be no more effective than a placebo intervention, but is still widely carried out.

Lowering body temperature for aneurysm surgery
Advice: cool down the body during surgery.
Rationale: animal studies suggested improved outcomes.
Adoption: common by the 1980s.
Reversal: a large 2005 study found no improvement and an increased risk of infection.

Pre-implantation genetic testing
Advice: screen embryos for older women undergoing IVF.
Rationale: genetic screening should reduce pregnancy failure due to chromosome abnormalities in embryos.
Adoption: common for older women.
Reversal: a 2007 trial found screening decreased pregnancy and live birth rates for older women.

Ear tube surgery
Advice: implant tubes in the ears of children with persistent infections.
Rationale: fluid drainage would improve hearing and cognitive development; better to do surgery sooner than later.
Adoption: most commonly performed surgery in children.
Reversal: a review in 2014 found no adverse effect on long-term development in children who wait to have surgery or never get it. The surgery, which comes with a risk of bleeding and ear drum damage, is still common.

Intensive blood sugar lowering for type 2 diabetics
Advice: diet and medication to reduce glycated haemoglobin (a long-term blood sugar measure) below 7 per cent.
Rationale: a study in 1997 suggested that reaching 7 per cent decreased the risk of heart attack; the aim became the lower the better.
Adoption: became routine to aim for under 7 per cent by the early 2000s.
Reversal: the Action to Control Cardiovascular Risk in Diabetes (Accord) study in 2008 found that trying to keep glycated haemoglobin so low could increase risk of death. Aiming for less than 7 per cent is now seldom advised.

New Scientist