"Remarkably Accurate": The Miami-Dade Police Study of Latent Fingerprint Identification (Pt. 3)

The Department of Justice continues to communicate the sweeping view that fingerprint examiners make extremely few errors in their work. A few days ago, it issued this bulletin:

Miami-Dade Examiner Discusses a Highly Accurate Print Identification Process
In a new video, Brian Cerchiai talks about an NIJ-supported study conducted by the Miami-Dade Police Department on the accuracy of fingerprint examiners. The study found that fingerprint examiners make extremely few errors. Even when examiners did not get an independent second opinion about their decisions, they were remarkably accurate. But when decisions were verified by an independent reviewer, examiners had a 0-percent false positive, or incorrect identification, rate and a 3-percent false negative, or missed identification, rate.

A transcript of the NIJ video can be found below. The naive reader of the bulletin might think that Miami-Dade's latent print examiners do not make false identifications -- that they are "remarkably accurate" in their initial judgments -- and that they have a "0-percent" rate of incorrectly declaring a match in their cases. In previous postings, I suggested that this first characterization is a remarkably rosy view of the results reported in the study, but I did not address the verification phase that brought the false positive rate (of 3 or 4%) for the judgments of individual examiners down to zero.
Today, Professor Jay Koehler shared his reactions to both aspects of the Miami-Dade study on a discussion list of evidence law professors. I have not reread the study myself to verify the details of his analysis, but here is his take on the study:
[Photograph of a latent print on a bottle.]

Regarding the Miami-Dade fingerprint proficiency test (funded by the Department of Justice) - and DOJ’s claim that it showed a 0% false positive error rate - I urge you to be skeptical.

First, the study was not blind (examiners knew they were being tested) and the participants were volunteers. If we are serious about estimating casework error rates, these features are not acceptable.

Second, the Department of Justice’s press release indicates that the study showed examiners to be “remarkably accurate” and “found that examiners make extremely few errors.” But the press release doesn’t actually state what those remarkable error rates were.

Here they are: the false positive error rate was 4.2% (42 erroneous identifications out of 995 chances, excluding inconclusives), and the false negative error rate was 8.7% (235 erroneous exclusions out of 2,692 chances, excluding inconclusives). In case you are wondering whether the false positive errors were confined to a few incompetents, 28 of the 109 examiners who participated in the study made an erroneous identification. Also, the identification errors occurred on 21 of the 80 different latent prints used in the study.
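For readers who want to check the arithmetic, the rates follow directly from the counts quoted above. A minimal sketch (the variable names are mine, not the study's):

```python
# Error rates computed from the counts Koehler quotes from the study,
# excluding inconclusive decisions.
false_positives, id_chances = 42, 995        # erroneous identifications / chances
false_negatives, excl_chances = 235, 2692    # erroneous exclusions / chances

fpr = false_positives / id_chances           # ~0.042
fnr = false_negatives / excl_chances         # ~0.087

print(f"False positive rate: {fpr:.1%}")     # 4.2%
print(f"False negative rate: {fnr:.1%}")     # 8.7%
```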

The error rates identified in this study produce a likelihood ratio of about 22:1 for a reported fingerprint match. This means that one should believe it is about 22 times more likely that the suspect is the source of the latent print in question than it was prior to learning of the match. Not 22 million or billion times more likely to be the source of the latent print in question. Just 22 times more likely.
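The 22:1 figure is what one gets by treating the likelihood ratio for a reported identification as the true positive rate divided by the false positive rate, and the "times more likely" language is Bayes' rule in odds form. A hedged sketch of that calculation (the formula and the illustrative 1:100 prior are mine, not spelled out in the post):

```python
# Likelihood ratio for a reported identification:
# LR = P(ID reported | same source) / P(ID reported | different source),
# approximating the true positive rate as 1 - false negative rate.
fpr = 42 / 995                # ~4.2%
fnr = 235 / 2692              # ~8.7%

lr = (1 - fnr) / fpr          # ~21.6, i.e., roughly 22:1
print(f"Likelihood ratio: about {lr:.0f}:1")

# Bayes' rule in odds form: posterior odds = prior odds * LR.
# With a purely hypothetical prior of 1:100 that the suspect is the source,
# a reported match raises the odds only to about 22:100.
prior_odds = 1 / 100
print(f"Posterior odds: about {prior_odds * lr:.2f}:1")
```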

But not all false positive errors are equal, and most of those reported in this study really shouldn’t count as false positive errors if we are concerned with who is the source of the fingerprint as opposed to which finger is the source of the fingerprint. The authors report that 35 of the 42 false positive errors seemed to be nothing more than “clerical errors” in which the correct matching person was selected but the wrong finger was identified. If we move those 35 minor false positives into the correct calls category, we are left with 7 major false positive errors (i.e., a person who was not the source is falsely identified as the source). This translates to a 0.7% false positive error rate (i.e., about one false positive error per 142 trials), and a likelihood ratio of 130:1. Better, but still not even close to millions or billions to one.
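Repeating the same calculation after reclassifying the 35 wrong-finger "clerical errors" as correct calls reproduces the 0.7% rate and the roughly 130:1 likelihood ratio (again a sketch of the arithmetic only, assuming the denominator of 995 chances is unchanged):

```python
major_false_positives = 42 - 35          # 7 wrong-person identifications
fpr_major = major_false_positives / 995  # ~0.007
fnr = 235 / 2692

print(f"Major false positive rate: {fpr_major:.1%}")             # ~0.7%
print(f"Trials per major false positive: ~{1 / fpr_major:.0f}")  # ~142
print(f"Likelihood ratio: about {(1 - fnr) / fpr_major:.0f}:1")  # ~130
```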

Third, the study provides some evidence about the value of verification for catching false positive errors, but caution is needed here as well. The 42 false positives were divided up and assigned to one of three verification conditions: a group of different examiners, a group of examiners who were led to believe that they were the 2nd verifiers, and the original examiners themselves (months later). The 0% post-verification error rate that the Department of Justice touts is an apparent reference to the performance of the first verification group only. None of the 15 false positive errors that were sent to this group of verifiers was repeated. But some of the original false positive errors were repeated by the second and third group of verifiers. The authors are silent on whether any of the 7 major false positive errors were falsely verified or not.
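One way to quantify the need for caution: only 15 of the erroneous identifications went to the truly independent verifiers, and zero repeated errors in 15 trials still leaves a wide margin of uncertainty about how often verification would catch such errors. A rough illustration using the standard "rule of three" approximation for a 95% upper confidence bound (my illustration, not a calculation from the study):

```python
# With zero failures observed in n independent trials, an approximate 95%
# upper confidence bound on the failure probability is 3 / n ("rule of three").
n_independent_verifications = 15
upper_bound = 3 / n_independent_verifications   # 0.20

print(f"95% upper bound on the verification miss rate: about {upper_bound:.0%}")
# The observed 0% is thus consistent with independent verifiers missing
# as many as roughly 1 in 5 erroneous identifications.
```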

Appendix: NIJ Video Transcript: How Reliable Are Latent Fingerprint Examiners?
Forensic Science, Statistics, and the Law gratefully acknowledges the U.S. Department of Justice, Office of Justice Programs, National Institute of Justice, for allowing it to reproduce the transcript of the video How Reliable Are Latent Fingerprint Examiners? The opinions, findings, and conclusions or recommendations expressed in this video are those of the speaker and do not necessarily represent the official position or policies of the U.S. Department of Justice.

Research Conducted by the Miami-Dade Police Department.
Speaking in this video: Brian Cerchiai, CLPE, Latent Fingerprint Examiner, Miami-Dade Police Department

The goal of the research was to determine whether latent fingerprint examiners can properly make identifications and exclusions from prints not visible to the naked eye. In this case, we had 13 volunteers leave over 2,000 prints on different objects that were round, flat, or smooth, and we developed them with black powder and tape lifts.

We did the ACE, which is analyze, compare, evaluate, where we gave latent examiners - 109 latent examiners - unknown fingerprints or palm prints, latents, to look at and compare to three known sources. So essentially: compare this latent to one of these 30 fingers or one of these six palms.
[Slide text] 109 examiners compared the unknown latent prints to known sources. Can they match the prints correctly?
So as participants were looking at the latents and comparing them to the subjects, we asked them if they could identify any of those three subjects as being the source of that latent print. In that case, they would call that an identification. When we asked them to exclude, we were basically asking them to tell us that none of those three standards made that latent or was the source of that latent print.

That is how the ACE verification (ACE-V) process works: a second examiner looks at that comparison, does their own analysis and comparison, and gives their evaluation of that decision.

We found that under normal conditions, where one examiner made an identification and a second examiner verified it, no erroneous identification got past that second latent examiner. So it had a false positive rate of zero.
[Slide text] With verification, 0% false positive.
So when we are looking at ACE comparisons, where one latent examiner looked at a print - one latent examiner analyzed, compared, and evaluated and came up with a decision - there was a false positive rate, which is basically an erroneous identification, where they identified the wrong source.
[Slide text] Without verification, 3% false positive.
Without verification, there was a three percent error rate for that type of identification. And we also tracked a false negative rate, where, given those three standards, people erroneously excluded the source: you're given the source, asked to check it against one of these three people, and you conclude that the latent print does not come from any of those three people, even though it did. So that would be a false negative. And that false negative rate was 7.5 percent.
[Slide text] Without verification, 7.5% false negative.
And what we did during the third phase of this test was to test for repeatability and reproducibility. After six months, we sent participants back their own answers, and we also gave them answers from other participants. But all those answers came back as if they were verifying somebody else's answers.
[Slide text] To test the error rate further, an independent examiner verified comparisons conducted by other examiners.
Under normal conditions, we'd give them the source and the latent number, and they would basically agree, disagree, or call it inconclusive. Under the biased condition, we'd give them the identification answer that someone had made, along with the answer of a verifier. So now it's already been verified, and we want them to give a second verification. When we sent those erroneous identifications out to other examiners under just the regular verification process, not one latent examiner identified them; they caught all those errors. That brought the error rate - the reported error rate - down to zero.
[Slide text] The independent examiner caught all of the errors, dropping false positive error rate to 0%.
We maintained our regular caseload; this was done in the gaps in between, after hours. The hardest part of doing this was not being dedicated researchers. That's why it took us quite a long time to get this done. Now that it's finally out here, we are doing things like this -- giving presentations this year. We really hope to expand on this research. The results from this study are fairly consistent with those of other studies.
[Slide text] This research project was funded by the National Institute of Justice, award no. 2010-DN-BX-K268. The goal of this research was to evaluate the reliability of the Analysis, Comparison, and Evaluation (ACE) and Analysis, Comparison, Evaluation, and Verification (ACE-V) methodologies in latent fingerprint examinations.
Produced by the Office of Justice Programs, Office of Communications. For more information contact the Office of Public Affairs at: 202-307-0703.
