The Onus is on Scientists – Shame on the AAAS

The American Association for the Advancement of Science (AAAS) has just come out against California’s Proposition 37, which would mandate the labeling of genetically-modified foods. In my opinion, the AAAS has failed its duty as a promoter of Good Science.

The question is not whether genetically-modified foods are safe. I see the benefits, and I see the downsides (especially as a security guy, since food safety testing is, in my opinion, very poorly done), and the debate will rage on for a long time. But whether genetically-modified foods are safe is not the issue. The issue is whether consumers have a right to know what food they eat. There should be no debate here. Of course people have a right to know. And what better way to hear the people’s voice than to vote on this issue? The AAAS should be pro-labeling. If the AAAS believes that genetically-modified foods are, in fact, safer, as it claims in its statement, then it can make that point and rally the troops to explain to consumers that they should specifically seek out the GM-labeled foods. But withholding knowledge? Are you kidding me?

The world would be better off if people behaved according to scientific consensus. I wouldn’t have to worry about sending my kids to a school where up to 10% of kids might not be vaccinated, for example. But does that mean we should force parents to vaccinate their children? Of course not.

The onus is on scientists to make their case. Paternalism has no place in science. People have a right to know. The AAAS Board should be ashamed.

In praise of hands-on expertise

(I don’t usually share personal stories in public fora, but in this case, and with my wife’s permission, I’m making an exception.)

“Shoulder Dystocia,” said the Obstetrician, as we neared the end of my wife’s otherwise-routine delivery of our son last week. This meant nothing to me. My wife, on the other hand, freaked out. She’s a physician and had understood something I’d missed. My child’s head, which had only just emerged, began to visibly turn blue. I froze and, not for the first time in these medical situations, felt utterly useless.

What followed is best described as a highly coordinated dance. The Doctor started a set of rough and involved maneuvers, with stern orders to the nurse to apply pressure here, apply pressure there. The nurse pushed with one hand and grabbed the phone to call for help with the other. Within 30 seconds, before the additional help even arrived, a shoulder was out, one twist, and then the other. Our son cried and his color quickly turned pink. Cord clamped, scissors handed to me, I cut, doing my best not to shake from the adrenaline. The Pediatric team evaluated our son, and, 5 minutes later, he was in my wife’s arms. His left arm was visibly sore for a few hours. By end of day, though, the pediatrician was confident he had sustained no permanent damage.

So now, a few days later, I am beginning to understand. My son’s shoulder got stuck right after his head emerged. This happens in approximately 1% of births, though oftentimes the situation resolves itself. When it doesn’t, permanent nerve damage is a not-unlikely outcome, with reduced or even no use of the affected arm. And, because the umbilical cord is compressed, the child cannot breathe. If my son hadn’t been delivered in the 5-10 minutes that followed, he could have suffered permanent brain damage or even died.

Instead, he is a perfectly happy 1-week-old baby.

We’re so accustomed to things going well, we forget how quickly things can go wrong. We don’t often enough praise the folks in our society who have deep hands-on experience, with the training to react in a highly coordinated, rehearsed, scientifically proven manner in a matter of seconds. They’re the ones ensuring things go well. Most white-collar professionals, like me, never need this kind of precise, automatic response. We see it in athletes, but we forget that doctors, pilots, soldiers, and a few others need it too. It’s a response so well learned it’s hard to imagine it could be anything but instinct. So we thank chance, fate, or some other mystical agent. We forget the role of these hands-on experts. We figure we can do without them.

Not so. In that one moment last week, decades of accumulated medical knowledge, analyzed by dozens of researchers poring over thousands of data points, condensed and taught to a team of doctors and nurses, rehearsed through years of training and ingrained through careful checklists, came together so that my son will never need to care that this ever happened. It’s awe-some, in the true sense of the word.

What about the less obvious errors?

The New Scientist points out a case of genotyping error by one of the consumer genomics companies, where a software bug caused a genotype to appear non-human. The article attempts to be reassuring:

Before other deCODEme customers get too irate about errors in data for which they have paid almost $1000, the bug affects only a tiny portion of the results presented. Most importantly, the disease-risk summaries provided by deCODEme seem to be based on the correct genetic information.

“Seem to be” is the operative terminology, indeed. As is typical in security and quality-control settings, the question here is: if the software can make such a large mistake, what about all the smaller mistakes it’s making that aren’t so obviously detectable?

Seems to me that before we start trusting these genomic tests for clinical purposes, we’ll want to make sure our genomes are read multiple times, ideally using different technologies. 99.99% accuracy sounds great until you realize you’re dealing with millions of data points, each one of which could be significant.
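To put numbers on that, here’s a back-of-the-envelope sketch (the figures are round illustrative numbers of my own, not from the article):

    # How many miscalls does "99.99% accuracy" imply, and how much do
    # multiple independent reads help? Illustrative numbers only.

    error_rate = 1e-4            # 99.99% per-call accuracy
    snp_chip = 1_000_000         # a typical genotyping chip: ~1M markers
    full_genome = 3_000_000_000  # ~3 billion bases in a human genome

    print(f"Expected miscalls, 1M-SNP chip:  {error_rate * snp_chip:,.0f}")
    print(f"Expected miscalls, full genome:  {error_rate * full_genome:,.0f}")

    # With three independent reads and majority voting, a site is miscalled
    # only if at least two of the three reads err. Independence is the big
    # assumption -- hence "different technologies", whose errors are less
    # likely to be correlated than repeated runs of the same machine.
    p_two_of_three = 3 * error_rate**2 * (1 - error_rate) + error_rate**3
    print(f"Expected miscalls, 2-of-3 consensus: {p_two_of_three * full_genome:,.1f}")

That takes a whole-genome read from roughly 300,000 expected miscalls down to about 90, which is why consensus across independent reads matters far more than any single machine’s headline accuracy.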

(And I’m not even touching on whether genomic data is sufficiently predictive, given current knowledge, to be clinically relevant, which, as Zak Kohane points out in the article, isn’t a given.)

HealthEngage leaking email addresses?

For more than 10 years now, I’ve used custom email addresses when I sign up at a web site I don’t fully trust, e.g. ben-SITENAME at adida.net. Until recently, the only time I’d actually been able to trace emails to their source was when I saw how Democrats reused some of their mailing lists during the 2004 and 2008 campaigns.
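The trick itself is trivial; here’s a minimal sketch (the helper names and the domain are placeholders, not my actual setup):

    # Generate a per-site address, and recover the tag when spam shows up.

    def tagged_address(site, user="ben", domain="example.net"):
        """Build a per-site address like ben-healthengage@example.net."""
        return f"{user}-{site.lower()}@{domain}"

    def leaked_from(to_header):
        """Given the To: address of a suspicious email, recover the site tag."""
        local, _, _ = to_header.partition("@")
        _, _, tag = local.partition("-")
        return tag or None

    print(tagged_address("HealthEngage"))               # ben-healthengage@example.net
    print(leaked_from("ben-healthengage@example.net"))  # healthengage

Any mail setup with a catch-all alias (or sub-addressing support) can receive these without per-site configuration.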

This weekend, though, I received an unpleasant surprise: a spam email sent to ben-healthengage. HealthEngage is a health web site I tried out a few months ago to explore how some companies are working on device connectivity. I’m 99% certain I haven’t used that email address anywhere else (why would I?). So, is HealthEngage leaking email addresses in some way, either because they’re selling them or because they’re not protecting them very well and spam crawlers are picking them up somewhere?

Either way, it’s a little bit disconcerting: this is a health-data web site, and its members surely worry about their privacy.

Empowering the Patient vs. Enabling an Artificial Monopoly

Health Information Technology is moving along fairly quickly, with the stimulus money and the rise of Personally Controlled Health Records (Indivo/Dossia, Google Health, Microsoft HealthVault). I’m quite optimistic about the future of health data: there is a growing effort to free the data in order to empower patients. And then there are some really boneheaded efforts that appear to be for patient safety, but end up creating all the wrong incentives and further blocking patients from taking an active role in their care. This week provided fantastic examples of both.

Harvard’s own Donald Berwick explains to the New York Times that it’s time to empower patients (see the original Health Affairs article):

Some examples of this new model of care? Shared decision-making would be mandatory in all areas of care, with patient preference occasionally putting evidence-based care “in the back seat.” Patients and families would participate in the design of health care processes and services and would be a part of daily rounds. Medical records would belong not to clinicians but to patients, who would no longer have to get permission to look at them or call the doctor for lab results.

Read the full interview; it’s brief and highly worthwhile. I completely agree with Dr. Berwick.

Meanwhile, in New Jersey, a proposed state law would fine anyone who sells software that has anything to do with health data if it hasn’t been certified by CCHIT, a single entity that would get to certify all health software. CCHIT is also pushing to be the lone certification authority for all stimulus-funded work. So, as if health IT weren’t already painful enough to deal with, now we’re going to move towards a certification monopoly? Say goodbye to:

  • iPhone apps that let you track your kids’ vaccines for $4, and really most small iPhone medical apps in general, as they clearly won’t be able to afford the certification fee,
  • storing your health data online at Google Health, Microsoft HealthVault, or Indivo/Dossia,
  • open-source medical software. As hard as Fred Trotter is working to get CCHIT to see the free/open-source point of view, there’s simply no incentive for a certification authority to spend time on a distributed community where it’s unclear who will pay the certification fee.

No matter how well-intentioned and knowledgeable the folks at CCHIT are, creating a certification monopoly shows a lack of understanding of how these things really work. Once the monopoly is in place, where is the motivation for CCHIT to be efficient, responsive to new healthcare models, adaptable to new software methodologies? In addition, what is the certification really worth when the vendors are paying for it anyway? We’ve seen this conflict before in the election world: the “Independent Testing Authorities” are paid by vendors to certify voting machines. At least there, there’s mild competition. How much do you think that certification really means in terms of voting security/privacy/safety? Here’s a hint: all the voting machines that were found to be laughably insecure by the Berkeley and Princeton teams had been certified by Independent Testing Authorities.

Now, the question on everyone’s mind should be “OK, but how do we ensure that there’s some kind of oversight for health software?” A good and very important question, which I’ll try to answer in a future blog post. But for now, let’s be clear: we need more patient involvement, not less. We need new software that will enable this patient involvement, not old software with half-baked web interfaces tacked on as an afterthought. The last thing we need is a government-mandated certification monopoly. Even if they asked Dr. Berwick to run it, it would be a bad idea, because the incentives are all wrong. Innovation and disruption, which we so desperately need, come from the new, small players, the ones that simply won’t be viable if they have to pay an upfront certification tax, both in dollars and process.

Personal health record: it’s about the feedback loop

In my basic electronics college course, the classic lab that always got the teaching assistants laughing was the robotic arm. The task seems simple: build a circuit that measures the amount of weight carried by a small robotic arm and activates its motor to balance out the weight. Inevitably, within minutes, robotic arms throughout the lab are oscillating back and forth at accelerating speeds, catapulting their small weights across the lab. A few minutes later, the robotic arms are adjusted, and they no longer respond to input, drooping at the slightest weight, dropping it on the floor.

What we easily forget – because our bodies and brains do this for us automatically – is the need to establish a sufficiently strong, but dampened, feedback loop. If you overcompensate for the weight, then reverse the overcompensation when you realize you overshot, and so on and so forth, the pendulum keeps swinging further and further away from the stable equilibrium point. If you underreact, then you drop the ball. The same goes for information system design: proper feedback loops are crucial for quality control.
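Here’s a toy version of that lab in code, a sketch of my own with made-up gains: a controller that corrects in proportion to the current error. Too much gain and each correction overshoots further than the last; too little and the arm barely responds; in between, the response is damped and settles.

    def track(gain, steps=8):
        """Apply a proportional correction each step; return positions over time."""
        position, target = 0.0, 1.0
        history = []
        for _ in range(steps):
            error = target - position
            position += gain * error  # the feedback correction
            history.append(round(position, 3))
        return history

    print(track(gain=2.5))   # overcompensation: the oscillation grows and grows
    print(track(gain=0.05))  # underreaction: barely approaches the target
    print(track(gain=0.6))   # damped: settles smoothly at the target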

A recent Boston Globe article describes how the BIDMC shipped claims data (the codes doctors send to your insurance company for billing purposes) to Google Health, and how some patients discovered that this data was really, really odd. The reaction by the BIDMC has been nothing short of exemplary in working to rectify the situation, except maybe for the fact that billing codes were shipped in the first place, something a number of folks in the industry already knew to be problematic.

But I think this whole discussion misses the important point that e-patient Dave makes:

Then imagine that one day you were allowed to see the records, and you found out there were a whole lot of errors, and the people carefully guarding your data were not as on top of things as everyone thought.

The point here is that putting the patient into the loop – the data feedback loop – is not just a convenience for the patient, it’s a very real way to debug a health record. So instead of knocking Google Health or Microsoft HealthVault or Dossia or Indivo, the question to ask here is: what’s the quality of hospital medical records if patients are left out of the feedback loop? Sure, the Google Health errors would likely have been far less egregious if the BIDMC hadn’t shipped billing codes, but that’s a distraction. The core issue remains: what is the feedback loop that ensures a patient’s health record is reliable?

The Boston Globe, for all of its good reporting, misses this point and hints at the overreaction of removing the feedback altogether:

In the meantime, said Tang, who was recently appointed to a new committee advising the Obama administration on health technology, the risks to patients need to be studied further. “Probably for some patients it’s a net benefit, and for others it’s a risk,” he said.

The solution is not a continued paternalistic attitude that claims some patients are ill-equipped to deal with this data. As the healthcare system gets more complicated, more specialized, and more fragmented, the most promising and feasible quality-control feedback loop is the patient. Sure, for a little while, the data that filters down to PCHRs will be problematic. But at least it will be checkable by the patient, and that’s quality control you simply won’t get from a hospital medical record divorced from patient oversight. Over time, personally controlled health records are likely to become a far more reliable source of data than the hospital medical records themselves.

Finally, a feedback loop needs closure. One important technical challenge for personally controlled health records will be ensuring that patients can make corrections to their data in a way that filters back to the hospital’s record. And not just by being featured in the Boston Globe.
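What might closing that loop look like in software? Here’s a purely hypothetical sketch (none of this reflects Indivo’s or any vendor’s actual API): a patient-submitted correction that carries provenance and is queued back to the institution that owns the source record.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Correction:
        record_id: str     # which entry in the patient's record
        field_name: str    # what the patient is disputing
        old_value: str
        new_value: str
        submitted_by: str  # provenance: the patient, not the hospital
        submitted_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))
        status: str = "pending"  # the hospital reviews before it becomes canonical

    def submit_correction(hospital_queue, correction):
        """Route a correction back to the source institution for review."""
        hospital_queue.append(correction)  # in practice, an authenticated API call

    queue = []
    submit_correction(queue, Correction(
        record_id="dx-4417", field_name="diagnosis_code",
        old_value="E11.9", new_value="(billed in error)",
        submitted_by="patient"))

The design point is that a correction carries provenance and a review status: the hospital’s record improves, but patients can’t silently rewrite it either.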

Pinker on Personal Genomics

As some folks know, I’ve spent the majority of my time over the last year and a half as a member of the Faculty at Harvard Medical School in the Informatics group, thinking about the security and privacy of web platforms for managing personal health data, including genomic data. I’ve had trouble blogging about it, because I’m still learning quite a bit and it’s difficult to know where to start.

But now I don’t have to do an introductory post, because Steven Pinker did it already in the NY Times, much more beautifully and informatively than I could ever have done. If you’re at all interested in the topic, his article is a must-read.

My favorite is his prediction near the end, with which I completely agree:

People who have grown up with the democratization of information will not tolerate paternalistic regulations that keep them from their own genomes, and early adopters will explore how this new information can best be used to manage our health. There are risks of misunderstandings, but there are also risks in much of the flimflam we tolerate in alternative medicine, and in the hunches and folklore that many doctors prefer to evidence-based medicine. And besides, personal genomics is just too much fun.

Privacy Advocacy Stunts

Deborah Peel, a well-known patient privacy advocate, and EPIC have joined forces to ask Google some questions about Google Flu Trends. Google is analyzing its search logs to detect flu outbreaks by region, which is super nifty.

Peel and EPIC ask:

There are, however, privacy concerns surrounding this new tool.

[…]

In the aggregate, the data reveals useful trends and should be available for appropriate uses. But if disclosed and linked to a particular user, there could be adverse consequences for education, employment, insurance, and even travel. The disclosure of such information could also have a chilling effect on Internet users who may be reluctant to seek out important medical information online if they are concerned that their search histories will be revealed to others… If Google has found a way to ensure that aggregate data cannot be reidentified, it should publish its results.

So this is clearly a stunt meant to scare people who somehow haven’t yet realized that Google has search logs.

If there’s a privacy problem “surrounding this new tool,” then it should be evident from the tool itself. Since the data is aggregated at the state level, and since the output is simply an estimate of flu activity for the whole state, there is no privacy risk to speak of. And Google tells you in detail how Flu Trends works.
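To see why, consider the shape of the computation (my simplified guess; Google’s actual pipeline is surely more sophisticated): only per-state counts survive the aggregation.

    from collections import Counter

    # (state, query) pairs from a search log; individual users never appear.
    search_log = [
        ("MA", "flu symptoms"), ("MA", "fever remedy"),
        ("CA", "flu symptoms"), ("MA", "flu shot near me"),
    ]
    FLU_TERMS = {"flu symptoms", "flu shot near me"}

    flu_queries_by_state = Counter(
        state for state, query in search_log if query in FLU_TERMS)
    print(flu_queries_by_state)  # Counter({'MA': 2, 'CA': 1})

The published output contains no users and no query strings, just one estimate per state, so there’s nothing in it to re-identify.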

Of course, Google does have access to your individual search records. So does Yahoo. If they don’t handle that data securely, or if they report individual data to outside entities, then yes, there is a privacy problem, potentially a very large one. But that is completely independent of Flu Trends.

And it’s not like this aggregate data analysis is a new thing for Google: they’ve been analyzing and publishing trends for a while.

I’m all for privacy advocacy, and I do believe that Google needs to improve its commitment to privacy in general, with respect to anonymization of data, disclosure of data resale, and more. But I’m not so sure these privacy advocacy stunts are a good idea, especially on issues where privacy is actually well handled.

UPDATE: I see that Fred Trotter is commending Peel and EPIC on this action, saying it reassures him. Interesting. But why is this reassuring? Surely, Google could have been mining data before Flu Trends? What is it about releasing this tool, with its detailed disclosures and explanations, that somehow tickles the privacy bone? Worth a second blog post soonish, I think.

Genomic Records & Voting

So part of my research is on voting. And another part is on the privacy of genomic medical records (which, admittedly, I haven’t spoken about much on this blog yet). It’s not often that I find an article that combines both. But I guess it was inevitable:

In the coming era of personal genomics — when we all can decode our genes cheaply and easily — political candidates may be pressed to disclose their own DNA, like tax returns or lists of campaign contributors, as voters seek new ways to weigh a leader’s medical and mental fitness for public office.

Totally agreed. It’s not “may,” in my opinion, it’s inevitable. I think by 2016, it will be part of the Presidential Election discussion. If genetic testing is widespread, and there’s any history of mental illness or heart disease in a candidate’s family, then the press will come looking, and refusal to disclose will be seen as an admission of a problem.

This is the issue that privacy advocates bring up regularly but that fails to resonate with people, even though it really should: the mere availability of data, even if locked with a password, threatens one’s privacy. It creates the expectation that, under certain conditions, the data will be released. Employers might consider it normal due diligence to ask if you’re genetically predisposed to anger and aggression when they hire you, just as it was perfectly acceptable, 30 years ago, to ask a woman if she planned on having kids anytime soon.

The only protection we have is legal and societal. It needs to become legally forbidden to ask for this kind of information, and it needs to become socially unacceptable, too. The Genetic Information Nondiscrimination Act (GINA) takes us in the right direction on this front, though it’s likely not the last word.