Hospitals struggle to get doctors and nurses to wash their hands. That’s a serious problem, since hand washing is one of the keys to reducing healthcare-acquired infections, which afflict more than a million patients a year and kill over 100,000. And it’s one of the reasons you should try your best to stay out of the hospital.
For the past few years I’ve heard suggestions that patients should take a more active role, and in fact have the responsibility to speak up. Today’s Wall Street Journal (Why Hospitals Want Patients to Ask Doctors, ‘Have You Washed Your Hands?’) covers the topic again, with a pretty strong message that patients need to take charge.
I strongly disagree.
Here’s one excerpt from the article:
The CDC has provided 16,000 copies of a video, titled “Hand Hygiene Saves Lives,” to be shown to patients at admission. In one scenario, a doctor comes into a room and the patient’s wife says, “Doctor, I’m embarrassed to even ask you this, but would you mind cleansing your hands before you begin?” The doctor replies, “Oh, I washed them right before I came in the room.” The wife says, “If you wouldn’t mind, I’d like you to do it again, in front of me.”
And here’s another:
“We’ve been focusing on intensive interventions to improve hand hygiene among health-care workers for decades, yet we’ve really shown very little progress,” says Carol McLay, a Lexington, Ky., infection prevention consultant and chair of the committee that designed the campaign [to get patients to speak up]. “We are trying to empower patients and families to speak up and understand their role.”
Am I the only one who thinks the situations described above are absurd?
Here’s how I see it:
- If infection control specialists have been failing to make progress with health care workers for decades, they need to figure out what’s wrong and fix it, not throw the problem onto patients. Here are some ideas: education to get more buy-in from clinicians on frequent hand washing, technology to track whether hand washing is occurring, and harsh penalties for noncompliance, like closing down a hospital floor, firing or suspending staff, or making lack of hand washing grounds for malpractice claims. If you believe the conventional wisdom (which I don’t, but that’s another story), physicians will be so focused on avoiding lawsuits through defensive medicine that they’ll instantly reach 100% compliance on hand washing.
- The scenario in the video, of first asking a doc whether he washed his hands, then not accepting his answer that he just did but instead demanding to see him “cleanse” his hands again, is ridiculous. That’s not my vision of patient engagement.
- Lack of hand washing is reasonably visible to the patient, but what about all the other things that occur? Is it practical to verify that my doctor performed all the correct diagnostic tests, interpreted the results correctly, made the right differential diagnosis, prescribed the most appropriate antibiotic and dosing level, that the hospital stored the medications properly and disinfected its equipment, that the nurses didn’t fake their credentials and that their immunizations are up to date, that I was referred to the right specialists, etc.? All of these things, and many, many others, are important, but I count on the hospital to deal with them and the regulators to oversee that it’s done. I want quality ratings that take these issues into account, and I don’t mind payment incentives that reward certain behaviors and penalize others.
Don’t get me wrong. I hate the idea of doctors and nurses not washing their hands. If I’m in the hospital and I see something I’m unsure of I do speak up. I bring an advocate when I’m a patient and act as one for others. I would even bring up hand washing in certain circumstances.
But I really resent the idea that I’m supposed to be the handwashing police. Hire someone else to do the job.
For better or worse, war has provided the impetus for new medical technology. The latest wars in Iraq and Afghanistan have sent home many men and some women who suffered the loss of legs and arms. Although prosthetic limbs have been improving over the years, they are really no substitute for the real thing.
That’s starting to change now, as we learn from an LA Times article about a study published in the New England Journal of Medicine.
A report published Wednesday in the New England Journal of Medicine describes how the team fit [a patient] with a prosthetic leg that has learned — with the help of a computer and some electrodes — to read his intentions from a bundle of nerves that end above his missing knee.
For the roughly 1 million Americans who have lost a leg or part of one due to injury or disease, [the patient] and his robotic leg offer the hope that future prosthetics might return the feel of a natural gait, kicking a soccer ball or climbing into a car without hoisting an inert artificial limb into the vehicle.
[The patient's] prosthetic is a marvel of 21st century engineering. But it is [the patient's] ability to control the prosthetic with his thoughts that makes the latest case remarkable. If he wants his artificial toes to curl toward him, or his artificial ankle to shift so he can walk down a ramp, all he has to do is imagine such movements.
This is pretty remarkable stuff, and great news for the many people who have lost limbs and may benefit. But it also hints at ethical issues that society will have to deal with in the future as the technology gets better and better.
We’ve already witnessed the first signs of what’s to come with Oscar Pistorius, the so-called Blade Runner (and probably murderer), whose artificial legs propelled him in the Olympics at speeds likely higher than he could have achieved with “real” legs.
Call me crazy (go ahead), but how long will it be until we have athletes who decide to get bionic replacements for legs, knees, arms, eyes, you name it? I think it will be just 20 years or so. After that, we may find a whole cadre of people taking on replacement parts, including internal organs, in order to improve their health and have a shot at something approaching immortality. If you think there’s a wide divide between rich and poor today, just wait until the rich find a way to use replacement parts to increase their strength and extend their lifespans.
I hope I won’t be around to see that happen.
In honor of the Jewish New Year I’m re-running a post on Judaism’s insights into pharmaceutical sales and marketing practices. L’shana Tova!
Recently I heard a Rabbi discuss the prohibitions against bribes in Jewish law. He shared the Talmudic insight that “a gift blinds the eyes of the wise” and taught that this refers not just to obvious bribes but even to small, innocent-seeming gestures that appear too insignificant to influence another person but that actually do cause a conflict of interest. I told him this sounded very similar to contemporary relationships between pharmaceutical companies and prescribing physicians, where small gifts like pens and take-out lunches are tools of the trade, viewed as innocuous by their recipients but seen as a good investment by the givers.
I revisited a blog post I wrote on the topic back in 2006 along with a JAMA article (Health industry practices that create conflicts of interest: a policy proposal for academic medical centers) by Brennan et al. from the same era. I looked at the list of articles citing the Brennan piece to see if I could find something more current. Lo and behold I discovered Unconscious conflict of interest: a Jewish perspective by Gold and Applebaum in the Journal of Medical Ethics, which probes this issue in more depth. They write:
The Talmud [Tractate Kethuboth folio 105b] suggests that, due to the unconscious mechanism of influence between the giver and the receiver, the prohibition of receiving a gift is not limited to physical gifts, but extends to any other personal benefits, including ‘a bribe of words’:
Our Rabbis taught: ‘And thou shalt take no gift’; there was no need to speak of [the prohibition of] a gift of money, but [this was meant:] Even a bribe of words is also forbidden, for Scripture does not write, ‘And thou shalt take no gain.’ What is to be understood by ‘a bribe of words’? –As the bribe offered to Samuel (a Talmudic scholar who served as a judge). He was once crossing [a river] on a ferry when a man came up and offered him his hand. ‘What,’ [Samuel] asked him, ‘is your business here?’ –‘I have a lawsuit,’ the other replied. ‘I,’ came the reply, ‘am disqualified from acting for you [ie, as a judge] in the suit.’
In a meticulous reading of the story mentioned above, it is not clear whether Samuel, the Talmudic scholar, actually accepted assistance from that ‘courteous’ man. In fact, his reaction of disqualifying himself from serving as a judge seems to be related solely to the man’s offer. That man’s gesture –offering his arm– was a sufficient cause for the disqualification. The gesture alone was perceived by Samuel as a sort of speech-act that emanated –perhaps unconsciously– from the man’s desire to influence his judgement. In other words, the offering of the arm was ‘a bribe of words’.
To me this is fascinating stuff, and it suggests that to truly avoid unconscious conflict of interest in the pharma/physician relationship it would be necessary to cut off all contact between pharma rep and doctor. Even when a drug rep is prohibited from distributing tchotchkes or tapping his restaurant budget, the physician still knows the rep would give him things if he could. Under this logic, the “no see” policies of some physician organizations toward pharma reps make good sense.
There is another solution, which is to educate physicians about unconscious biases and the objectives and tactics of pharma companies, device companies, health plans, and other would-be influencers. Even better would be to couple this education with conscious efforts to counteract any biases that are introduced.
Physicians are notoriously skeptical of the notion that they are influenced by gifts large or small. Therefore the article wisely concludes:
For those disinclined to accept either the insights of sociologists and anthropologists or the findings of modern neuroscience on the tendency towards reciprocity in response to the receipt of gifts and favours, perhaps the wisdom of the ancients provides a reason to rethink the unconscious influence of even small benefits on physician behavior.
As studies and accompanying news coverage about the long-term dangers of head trauma have emerged over the past few years I’ve been thinking about what American sports will look like a generation from now. The NFL’s recent $765M settlement with retired players has me thinking about it again.
In the US, football players are our modern gladiators, and we love to watch. A century ago, the survival of football was threatened due to the uproar over 19 fatalities in 1905. Rules were changed and the game became somewhat safer, but as we now are starting to understand, signs of serious damage may not emerge for years after retirement.
Will football and other violent sports survive? And if so, how will they look compared with today?
At the one extreme, I think it’s possible that football and boxing will be banned and that the rules of hockey and soccer will change substantially as a result of awareness of head injuries. If that seems extreme, consider how dramatically social norms on an issue like smoking can change over time. The Surgeon General first made noises about the danger of cigarettes in the 1960s, but when I was growing up in the 1970s and complained about secondhand smoke my mother told me to “get used to it.” It was inconceivable to me then that in 2013 smoking would be banned in so many places and so heavily stigmatized.
On the other hand, it’s been obvious forever that boxing is a dangerous endeavor. While there have been changes over time to improve the safety of fighters, the sport is still around and not all that different from what it was a generation back. So if boxing is the model, then the modest changes we’ve seen so far in the other violent sports may be about as far as it goes.
One of the keys to the equation is what happens with youth sports. Parents of high school athletes are aware of the newer research on head trauma, and leagues and coaches have made reasonably strong moves to protect players with concussions from aggravating those injuries. But what of the parents who are having kids now? Will they be as eager as current and past generations to let their kids get involved in the more dangerous pursuits? I’m not sure.
An area to keep an eye on is technological change. Football helmets to protect players from death and serious injury inadvertently made things worse in some ways by encouraging spearing. With a better scientific understanding of head trauma and a desire to prevent it, equipment makers may be able to devise helmets and other gear to make the games safer without making them slower or less physical. That’s my hope.
A Wall Street Journal piece (The Office Nurse Now Treats Diabetes, Not Headaches) notes the benefits of workplace clinics but also emphasizes the potential downside of loss of privacy or employer intrusion into the personal lives of employees. The ever-skeptical Deborah Peel is trotted out to lay out an Orwellian scenario.
Workplace clinics address the big, big issues of access and convenience, and do so in ways that align the interests of employers and employees. It’s a hassle to get an appointment at a doctor’s office. Even when you have an appointment it takes time to get there and waits can be long. It almost always means time off from work. Onsite clinics are set up to be convenient and to respect the value of employees’ time. The employers are the customers. They care about time away from work and access to care. They also generally are interested in evidence based medicine, consistency, and patient safety. All of those things benefit the employee.
It can be difficult for even well-educated, well-insured people to navigate the health care system, and partially as a result there are many people walking around with conditions that they are neglecting. This example from the article struck me as a good one:
John Martin, an accounts-payable specialist at Hanesbrands, visited the company clinic in January after leaving his Type 2 diabetes untreated for seven years. Tests confirmed that the 53-year-old’s blood sugar was high and that he also suffered from hypertension. Clinic nurse practitioners put Mr. Martin on medication for both conditions and arranged for free or discounted pills. A CHS health coach helped him lose 25 pounds in two months through dietary changes and an exercise program.
“This has made me change the way I live my life,” Mr. Martin said of the clinic.
This kind of intervention is a positive thing for all involved. I know I’d be more likely to want to work for a company offering this sort of support.
This is a blog about the business of health and health care policy so I don’t often delve into the realm of personal health tips. But since it’s the first day of summer and I’m a sun-sensitive bald redhead, I’ll make an exception.
It’s the time of year when newspapers write about sunscreen. A Washington Post article talks about the safety of sunscreens, contrasting “physical” sunscreens, which block sunlight by reflecting it back, with chemical sunscreens, which absorb the sun’s rays and keep them from damaging skin. The fear is that chemical sunscreens may be absorbed into the skin and cause trouble, e.g., by producing free radicals that damage cells.
I share this concern about sunscreen safety, but my worries are also more practical and immediate. In particular, I’ve found that when I use sunscreen at the beach I often miss a spot, like the tops of my feet or someplace on my back, and end up with a big, bad burn.
Twenty years ago I was working on a consulting project at an academic medical center in New York City. I was interviewing a dermatologist who took a look at me and couldn’t resist offering the advice that I should wear sun protective clothing, especially at the beach. Since then I’ve worn sun protective gear from Solumbra in the summer. I’m partial to their zip-front swim shirts, which I wear religiously. They’re comfortable in the water or on shore and I don’t worry about getting burned or needing to reapply sunscreen once I’ve gotten wet.
This year my wife asked me to try something a little more stylish than the plain, blue Solumbra swim shirt I favor. So I bought one from Coolibar, which looks better but is honestly not nearly as good.
If you’re sun sensitive like me, or just sun sensible, I hope you’ll take care of your skin by wearing sunscreen or protective clothing.
See you at the beach!
Today’s news feeds feature a big new study on pediatric CT radiation doses. Anyone who’s been paying attention will not be surprised by the results:
- The use of CT scans in pediatrics rose dramatically from 1996 to 2005 before leveling off
- Some kids get huge doses of radiation, with the amount of exposure per scan varying dramatically
- Thousands of people will eventually get cancer due to the CT scans they had as kids
Scans are popular because they provide lots of information for diagnosis, produce pretty pictures that patients can relate to, and because they are well-reimbursed. But the harms are real, and the medical community as a whole has not done enough to get things under control. There are exceptions, though. The Image Gently campaign has called attention to this issue for years, and I have personally seen remarkable attention given to weighing the benefits and harms of CT at Boston Children’s Hospital. I’m sure they are not alone.
One of the fastest, most effective ways to address the problem of over-imaging in the broader pediatric community is through payment reform. In particular, when health systems have incentives to hold down costs and improve quality, as they do with Accountable Care Organizations (ACOs), I suspect doctors will be much more careful about what they order. And when patients have to pay more out of pocket they may also initiate a discussion with their provider about whether the scan is needed.
Meanwhile, if your kid has had more CT scans than they need or if you yourself have had too many, you have a right to be worked up about it.
My perception is that doctors in previous generations were more likely to devote their entire lives (professional and “personal” time) to the practice of medicine. Today’s doctors are more likely to consider lifestyle and not automatically put everything into doctoring. This is partly cultural –as younger professionals in general have put more emphasis on balance– but a large part is structural, because residents are working fewer hours by law and because more doctors are working for others, which encourages an employee mentality.
I don’t really have a problem with doctors who want to have a life outside medicine, but overall I prefer to be treated by someone who’s really dedicated and wants to devote most of their waking hours to it. By the way I feel the same about other professionals I work with.
So I’d like to see some of the structural issues addressed to encourage those who want to go all out to do so. Kaiser Health News has an article on the topic today (Doctors Transform How They Practice Medicine), which gets at my point at least indirectly. The article discusses how physicians are opening “medical homes” to provide more coordinated care or opening concierge-style practices that limit the number of patients and charge extra fees.
Those are both kind of interesting but also a bit ho hum. I’d rather see a broader array of offerings, including those that provide more remote services and incorporate specialty care. I hope and expect they’ll come, because although many docs are rushing into hospital employment, I believe plenty would rather work for themselves if there were a viable way to make it happen.
By David E. Williams of the Health Business Group.
USA Today has a full page article on the rise of heroin addiction in the suburbs, but adds absolutely nothing to what’s already widely known. (See, for example, my post on the topic from early 2012.) Teens and adults start by abusing the painkiller oxycontin, which is available by prescription, then turn to shooting heroin once they figure out how pricey it is to acquire oxycontin on the street.
The article presents no real ideas on what to do about the problem. If anything the article implies that it would be better to make oxycontin more widely available in order to stem the use of heroin. That’s a nonsensical approach as far as I’m concerned.
There are alternative approaches that might be more promising.
One idea is to establish better guidelines on the prescribing of painkillers after surgery. Many patients (maybe you’ve been one of them) receive an overly generous supply of oxycontin or vicodin after a minor surgical or dental procedure. Sometimes the patient gets addicted from that initial supply; other times the extras end up in the family medicine cabinet, where teens might find them and try them out. It’s not always obvious how to dispose of these medications, which contributes to them hanging around.
One hurdle to overcome is that follow-up visits are inconvenient and also not very profitable for doctors. Perhaps if there were quality measures associated with good practices that would change the equation and tighten the initial supply.
Another issue relates to so-called “drug seekers.” We’ve all heard about drug-seeking patients who come to the emergency room to get drugs. There are IT systems coming online that can at least identify such drug seekers and alert doctors, but this only works if the systems are consulted, which may not happen when middle-class patients are involved. It’s easy to label patients as “drug seekers,” which makes them sound like bad people. Some are. But many others are patients who are somewhere down the path toward dependency. They’re not trying to become oxycontin addicts and certainly aren’t looking to move to heroin. Rather than turning people away, it would be better to have a path to refer these patients into treatment and then to track their progress.
There are great opportunities for physicians, payers, employers, consumers and pain management experts to work together to develop a more comprehensive view of the problem, devise a strategy to address it, create new quality and safety measures tied to that strategy, and align incentives so that physicians are rewarded for doing the right thing.
We won’t solve the problem of painkiller abuse in one shot. But it’s reasonable to start by tightening up on the relatively easy places, such as cutting down on the distribution of unneeded post-surgical pain meds and figuring out how to better direct “drug seekers.”
I agree with the main recommendations of the Drugfree.org/MetLife 2012 attitude tracking study of teens and parents regarding drug use:
- Do more to communicate risks of medicine misuse and abuse
- Safeguard medicines at home
- Properly dispose of unused medicines
- Avoid modeling bad behavior by misusing or abusing drugs
The report raises quite a few interesting points, but some of the survey results raise more questions than they answer, and there are other issues not addressed.
Prescription drug abuse is a serious problem. One area the report focuses on is the abuse of stimulants such as Adderall. Here’s their take:
“In fact, almost one-third of parents (29 percent) say they believe ADHD medication can improve a teen’s academic or testing performance, even if the teen does not have ADHD, and one in four teens (26 percent) believes prescription drugs can be used as a study aid.”
And regarding prescription drugs in general:
“Parents and teens share the same misconceptions regarding prescription drug misuse and abuse. One in six parents (16 percent) believes that using prescription drugs to get high is safer than using street drugs, and more than one in four teens (27 percent) shares the same belief.”
“One-third of teens (33 percent) say they believe ‘it’s okay to use prescription drugs that were not prescribed to them to deal with an injury, illness or physical pain.'”
“One in four teens (25 percent) says there is little or no risk in using prescription pain relievers without a prescription, and more than one in five teens (22 percent) says the same for Ritalin or Adderall. Additionally, one in five teens (20 percent) says pain relievers are not addictive.”
While the survey’s authors are surprised at how high these numbers are, I’m surprised they are so low. And some of what the surveyors characterize as misconceptions I regard as accurate, or at the very least open to debate. For example:
- All else being equal, why wouldn’t it be safer to get high from prescription drugs than street drugs? The ingredients and dosing are known, the purity is bound to be higher, there’s less physical risk in obtaining the product (especially from the parents’ medicine cabinet), almost no risk of arrest, and if something goes wrong the emergency department will have an easier time figuring out what you took. Can it really be that only 1 in 6 parents and 1 in 4 teens agree with me on this?
- It’s interesting that only about 1 in 4 parents and teens think ADHD drugs can improve academic or testing performance. I’ll bet there’s more support among college students, who are big users of these substances. And do we really know that these meds aren’t effective in “normal” people, especially when cramming for a test? Part of the issue here could be that plenty of kids with ADHD, or who are just a bit restless, are put on drugs and get used to having them.
- Direct-to-consumer ads tell us to “ask your doctor if [Drug X] is right for you.” And when we do ask, many physicians say yes. This includes pain drugs. In fact, I saw a DTC ad for the pain drug Lyrica today. Given that, is it such a stretch that some people could think it’s OK to take pain meds without a prescription? And instead of emphasizing the 20 to 25 percent of teens who are unworried about pain drugs, perhaps the report should have emphasized the 75 to 80 percent who do think there’s an issue.
I really do think prescription drug abuse and misuse is a serious problem. But the problem is not just naiveté on the part of parents and teens. It gets to the fact that unlike a generation ago, we are starting to use Rx drugs as performance enhancers, and the use of consumer advertising to promote prescription medications has predictably created a much stronger consumer mindset about the use of these substances.