An area the shade of grey

In an upcoming sexual abuse trial in San Diego, the defense team hopes to present data from a brain-scan lie detector (called No Lie fMRI) as evidence.

This is kind of messed up. Check out this sequence of quotes, taken from the Wired article.

Hank Greely, the head of the Center for Law and the Biosciences at Stanford (in an e-mail to Wired.com):
"The studies so far have been very interesting. I think they deserve further research. But the technology is very new, with very little research support, and no studies done in realistic situations. Having studied all the published papers on fMRI-based lie detection, I personally wouldn't put any weight on it in any individual case. We just don't know enough about its accuracy in realistic situations."

Emily Murphy, a behavioral neuroscientist at the Stanford Center for Law and the Biosciences:
"The defense plans to claim fMRI-based lie detection (or “truth verification”) is accurate and generally accepted within the relevant scientific community in part by narrowly defining the relevant community as only those who research and develop fMRI-based lie detection."

Brooklyn Law School's Edward Cheng, who studies scientific evidence in legal proceedings:
"Technology doesn't necessarily have to be bulletproof before it can come in, in court. It's not clear whether or not a somewhat reliable but not foolproof fMRI machine is any worse than having a jury look at a witness. It's always important to think about what the baseline is. If you want the status quo, fine, but in this case, the status quo might not be all that good."

Bulletproof? How about validated by replicated scientific data? How about peer reviewed by someone other than those trying to push the commercialization of the technology? The validity of fMRI technology in general is already a source of controversy within the neuroscientific community. To permit this as evidence in court would be reckless.

Jonah Lehrer offers his informed opinion here.
