As an article in the ABA Journal recounts, the St. Paul, Minn., police department’s crime lab suspended its drug analysis and fingerprint examination operations after two assistant public defenders raised serious concerns about the reliability of its testing practices. A subsequent review by two independent consultants identified major flaws in nearly every aspect of the lab’s operation, including dirty equipment, a lack of standard operating procedures, faulty testing techniques, illegible reports, and a woeful ignorance of basic scientific principles. 1/ The article proceeds to describe deplorable conditions in the drug testing lab, but all it says about latent print work is that “[t]he city has since hired a certified fingerprint examiner to run the lab, who has announced plans to resume its fingerprint examination and crime scene processing operations, and begin the procedure for seeking accreditation.”
Curious as to what the latent-print examiners had been doing, I turned to a local newspaper article entitled “St. Paul Crime Lab Errors Rampant.” It reported that “[t]he police department hired two consultants to work on improving the lab after a ... Court hearing last year disclosed flawed drug-testing practices” and that “the lab recently resumed fingerprint work by certified analysts.” 2/
One consultant, “Schwarz Forensic Enterprises of Ankeny, Iowa, ... studied the crime lab's latent fingerprint comparison, processing and crime scene units” and found that “[p]ersonnel appeared to have attended seminars and training, but there wasn't formal competency testing or a program to assess ongoing proficiency.”
These untested personnel offered an opportunity to see how poorly monitored analysts performed. Would they succumb to the widely advertised cognitive biases that might cause latent print examiners to declare matches that do not exist? Would they declare matches more frequently than certified examiners? Apparently not:
“A review of 246 fingerprint cases found the unit successfully identified prints only ‘in cases where the print detail is of extraordinarily high quality.’” “Despite these deficiencies, no evidence of erroneous identifications by latent print examiners was found; but we did find numerous examples of cases wherein examiners had failed to claim latent prints as suitable for identification and/or to identify prints to suspects,” the Schwarz report said. In other words, the incidence of false negatives and missed opportunities to make identifications or exclusions was high, but no false-positive errors were found.
This outcome is consistent with more rigorous studies showing that when latent print examiners make mistaken comparisons, the errors are usually false exclusions—not false matches. 3/ This tendency reflects a different sort of bias—an unwillingness to declare a match unless the match seems quite clear.
Of course, 246 instances without false positives from worrisome fingerprint analysts do not prove that these analysts never make false matches. If this group were making false identifications 1% of the time, for instance, the probability of seeing no false positives in a run of 246 independent cases (each with the same 1% false-match probability) would be (1 − 0.01)^246 ≈ 8%.
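To make the arithmetic concrete, here is a minimal Python sketch of that calculation. It assumes, as the text does, that the 246 cases are independent and share a single per-case false-match probability; the rates other than 1% are purely illustrative.

```python
# A minimal sketch of the calculation in the text: the chance of seeing zero
# false positives in n independent comparisons, assuming a constant per-case
# false-match probability p (an idealization; the 246 St. Paul cases were not
# a controlled experiment).

def prob_no_false_positives(p: float, n: int) -> float:
    """Binomial probability of zero false positives in n independent trials."""
    return (1.0 - p) ** n

if __name__ == "__main__":
    n = 246
    for p in (0.01, 0.005, 0.001):  # the text uses 1%; the other rates are illustrative
        print(f"per-case rate {p:.1%}: P(no false positives in {n} cases) "
              f"= {prob_no_false_positives(p, n):.1%}")
    # With p = 1%, the probability is roughly 8%, as stated above.
```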
The absence of false positives also is consistent with an intriguing 2006 report by Itiel Dror and David Charlton. 4/ These investigators had six experienced, certified, and proficiency-tested analysts examine sets of prints from four cases in which, years earlier, the examiners had found exclusions and another four cases in which they had made identifications. The subjects did not realize that they had seen these prints before. In some instances of previous exclusions, the examiners were told that a suspect had confessed. In none of these cases did the examiners depart from their earlier judgments of exclusion.
On the other hand, in cases of previous identifications, when examiners were told that the suspect was in police custody at the time of the crime, two examiners switched from an identification to an exclusion, and one switched to “cannot decide.” Although these sample sizes are too small to justify strong and widely generalizable conclusions, it appears that information extraneous to the analysis can more readily prompt an exclusion than an individualization.
Dror and Charlton interpret their results as supporting (among other things) the claim “that the threshold to make a decision of exclusion is lower than that to make a decision of individualization.” The higher threshold for individualization would make it more difficult to bias an examiner into a false identification than into a false exclusion.
Did any of the 246 St. Paul cases involve contextual bias of one kind or another? If so, it would be interesting to know whether these examiners resisted contextual suggestions favoring identifications or exclusions in those cases. Audits like these could be useful not only for getting troubled laboratories back on track, as in St. Paul, but also as a source of information on the risks of different types of errors in various settings and circumstances.
Notes
1. Mark Hansen, Crime Labs Under the Microscope After a String of Shoddy, Suspect and Fraudulent Results, ABA J., Sept. 2013.
2. Mara H. Gottfried & Emily Gurnon, St. Paul Crime Lab Errors Rampant, Reviews Find, Pioneer Press, Feb. 14, 2013.
3. See, e.g., Fingerprinting Under the Microscope: Error Rates and Predictive Value, Forensic Science, Statistics, and the Law, Apr. 30, 2012; Fingerprinting Error Rates Down Under, Forensic Science, Statistics, and the Law, June 24, 2012.
4. Itiel E. Dror & David Charlton, Why Experts Make Errors, 56 J. Forensic Identification 600-16 (2006).