In a previous posting, I raised some questions about an op-ed ("Justice Flunks Math") on the judge's refusal to depart from the court-appointed expert's written report in the prosecution of Amanda Knox and Raffaele Sollecito. This week, a flurry of opinionated comments appeared, and I let through the gate those that seemed to have at least some analysis or substance.
In my previous posting, I took issue with the op-ed's assertion that the trial judge "demonstrated a clear mathematical fallacy: assuming that repeating the test could tell us nothing about the reliability of the original results" and its apparent suggestion that retesting the same DNA sample would be comparable to testing a coin for bias by repeatedly tossing it. I argued that "[w]ithout some specification of precisely what made the initial testing problematic and whether those problems could be reduced sufficiently with retesting, it seems precipitous to convict the judge who overturned the guilty verdict of 'bad math.'"
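To make the coin analogy concrete, here is a toy simulation (my own illustration with made-up numbers, not anything from the op-ed). Repeated tosses of a coin are independent, so more tosses pin down the coin's bias; but if every "repetition" inherits the same systematic flaw, as re-running the very same problematic sample might, no amount of repetition will expose that flaw.

```python
import random

random.seed(0)

# Toy contrast (my illustration, not the op-ed's): independent repetitions
# average away random error; repeating a measurement that inherits the
# same systematic flaw does not.

TRUE_BIAS = 0.5           # the coin is actually fair
SYSTEMATIC_ERROR = 0.15   # hypothetical fixed artifact shared by every re-read

def independent_tosses(n):
    """Estimate the bias from n fresh, independent tosses."""
    return sum(random.random() < TRUE_BIAS for _ in range(n)) / n

def rereads_of_flawed_sample(n):
    """Estimate from n re-reads that all share the same systematic offset."""
    return sum(random.random() < TRUE_BIAS + SYSTEMATIC_ERROR for _ in range(n)) / n

for n in (10, 100, 10_000):
    print(f"n={n:>6}: independent {independent_tosses(n):.3f}, "
          f"re-reads {rereads_of_flawed_sample(n):.3f}")

# As n grows, the independent estimate converges to the true 0.5, while
# the re-read estimate settles near 0.65: repetition cannot remove an
# error that every repetition shares.
```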
Whatever the merits of the indictment of the judge, my thanks to those who offered information on whether retesting might be significantly more revealing than the initial testing. That is an interesting question in its own right.
In this regard, an author of the op-ed, Professor Leila Schneps, kindly explained that the "confirming retest" (the phrase in her op-ed) did not mean a retest of the same sample (like flipping a coin again) but rather an analysis of a "new knife blade sample," a "rich sample ... from the place where the blade joins the handle of the knife." This new sample, she suggested, might be "positive for Meredith Kercher," in which case "it would have correctly settled two of the questions left outstanding in the courtroom: was the first electropherogram showing the DNA on the knife correctly interpreted as Meredith's, and was Meredith's DNA actually on the knife?"
If we posit that the new sample is large enough to produce unambiguous results, then it could reveal whether "Meredith's DNA [was] actually on the knife." But Professor Schneps also states that the quantity of DNA in this "rich sample" was "significantly lower than the quantity 'advised' by the kit, although the kit's website shows many examples of tests on smaller samples, some even smaller than the knife blade DNA, that gave positive and accurate results."
If the sample is this impoverished, are we not back in the realm of low-template (LT) DNA testing, where the worry is that stochastic effects can dominate? The mathematical argument here seems to be that even though it might not be surprising to spot, by chance alone, some peaks in a new test that also are present in Meredith's genotype, the probability of seeing those peaks together with the ones observed in the original testing of a different sample from the knife would be negligible unless Meredith's DNA was on the knife. In this way, the additional testing overcomes the low signal-to-noise ratio in each sample. That is a fair argument (as far as it goes), and the same logic underlies some protocols for testing contact DNA.
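A back-of-the-envelope calculation shows the force of that argument. Suppose (these are my hypothetical numbers, not figures from the case) that at each locus a low-template run produces, by chance alone, a peak consistent with Meredith's genotype with probability p. If two separate samples are tested and the runs are truly independent, the chance that both show matching peaks across all loci is far smaller than the chance for either run alone:

```python
# Hypothetical numbers for illustration only; not figures from the case.
p = 0.2        # assumed per-locus chance of a spurious "matching" peak
n_loci = 10    # assumed number of loci compared

one_run = p ** n_loci          # full spurious match in a single run
two_runs = one_run ** 2        # full spurious match in both independent runs

print(f"chance of a full match by luck in one run:   {one_run:.1e}")   # ~1.0e-07
print(f"chance of a full match by luck in both runs: {two_runs:.1e}")  # ~1.0e-14

# The squaring step assumes the two samples and runs are statistically
# independent; cross-contamination would break that assumption, which is
# why the experts' contamination concerns bear directly on this logic.
```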
Still, given the difficulties and the level of discord over the best approaches to conducting and interpreting LT-DNA testing (see, e.g., A. Carracedo, P.M. Schneider, J. Butler & M. Prinz, Focus issue—Analysis and Biostatistical Interpretation of Complex and Low Template DNA Samples, Forensic Science International: Genetics 6 (2012) 677–678), and the court's experts' concerns about contamination, I wonder whether even the most mathematically erudite judge would have been so quick to order additional DNA testing in this case. Consequently, I am not yet prepared to give the judge a flunking grade for "a clear mathematical fallacy."