The New Scientist reports a case of genotyping error at one of the consumer genomics companies, where a software bug caused a customer's genotype to appear non-human. The article attempts to be reassuring:
Before other deCODEme customers get too irate about errors in data for which they have paid almost $1000, the bug affects only a tiny portion of the results presented. Most importantly, the disease-risk summaries provided by deCODEme seem to be based on the correct genetic information.
“Seem to be” is indeed the operative phrase. As is typical in security and quality-control settings, the question is: if the software can make a mistake this large, what about all the smaller mistakes it's making that aren't so obviously detectable?
Seems to me that before we start trusting these genomic tests for clinical purposes, we'll want our genomes read multiple times, ideally using different technologies. 99.99% accuracy sounds great until you realize you're dealing with millions of data points, each of which could be significant: at that accuracy, a million-SNP chip would still be expected to miscall on the order of a hundred genotypes per customer.
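To make the scale concrete, here's a back-of-the-envelope sketch. The numbers are illustrative assumptions on my part (a flat 99.99% per-call accuracy, a one-million-SNP chip, a three-billion-base genome), not anything published by deCODEme; the point is just how quickly a small error rate multiplies out, and how much requiring concordant independent reads would help.

```python
# Back-of-the-envelope error arithmetic. The accuracy figure and call counts
# are illustrative assumptions, not deCODEme's published specifications.

def expected_errors(accuracy: float, n_calls: int) -> float:
    """Expected number of wrong calls if each call is independently
    correct with probability `accuracy`."""
    return (1.0 - accuracy) * n_calls

def expected_errors_two_reads(accuracy: float, n_calls: int) -> float:
    """Rough estimate of undetected errors if every call must agree across
    two independent reads (ignores correlated platform errors, which is
    exactly why using different technologies matters)."""
    return (1.0 - accuracy) ** 2 * n_calls

ACCURACY = 0.9999  # assumed per-call accuracy

for label, n in [("1M-SNP genotyping chip", 1_000_000),
                 ("3B-base whole genome", 3_000_000_000)]:
    single = expected_errors(ACCURACY, n)
    double = expected_errors_two_reads(ACCURACY, n)
    print(f"{label}: ~{single:,.0f} expected errors from one read, "
          f"~{double:,.2f} if two independent reads must agree")
```

Under those assumptions, a single read of a million-SNP chip yields roughly a hundred errors, and a whole genome roughly three hundred thousand; requiring two independent concordant reads drives those numbers down by orders of magnitude, provided the errors of the two platforms aren't correlated.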
(And I’m not even touching on whether genomic data is sufficiently predictive, given current knowledge, to be clinically relevant, which, as Zak Kohane points out in the article, isn’t a given.)