Security Review: Brain Electrical Oscillation Signature Profiling in Criminal Trials

By eland at 1:49 pm on November 19, 2008

In September of 2008, a 24-year-old woman in Maharashtra, India, was found guilty of murdering her fiancé. Her trial set a (troubling) legal precedent because brain electrical oscillation signature (BEOS) testing was cited as major evidence in her conviction. This controversial method involves placing electrodes on the head of the accused and analyzing visual recognition signals to determine whether the person has prior recollection of a crime.

A large area of the brain is devoted to receiving, analyzing, and processing visual signals. It is well known that the brain recognizes different objects in different ways; for example, a separate visual channel exists for recognizing faces. This is what allows people to differentiate between thousands of human beings by recognizing and utilizing small differences in the structure of the face. More importantly for crime and punishment, however, it turns out that the brain reacts in different ways to images that do or do not match the images stored in a person’s memory. Therefore, in theory, if someone had committed a crime and images from the crime scene were shown to that person at a later date, that person’s brain would produce signals that show high levels of recognition. A person who was unconnected with the crime, on the other hand, would not recognize anything, and would not produce these same signals. This concept is particularly interesting to forensic scientists because these responses are extremely difficult, if not impossible, to fake; a person can lie, but their visual recognition system cannot. Furthermore, recent advances in neuroscience and brain-computer interfaces have made it possible to build devices that pick up these signals.
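To make the recognition idea concrete, here is a toy sketch of how averaged evoked responses might be thresholded into “recognition” versus “no recognition.” The data, threshold, and function are entirely invented for illustration; the actual BEOS analysis is not publicly specified and is certainly more sophisticated than this.

```python
# Toy sketch: classify "recognition" by thresholding an averaged evoked response.
# All numbers and names here are invented for illustration; real BEOS analysis
# is proprietary and far more complex.

def classifies_as_recognized(trials, threshold=2.0):
    """Average several evoked-response traces (one list of voltage samples
    per presentation of the image) and report recognition if the averaged
    peak exceeds a fixed threshold."""
    n = len(trials)
    averaged = [sum(samples) / n for samples in zip(*trials)]
    return max(averaged) > threshold

# A "familiar" image evokes a consistent bump across presentations...
familiar = [[0, 1, 3, 1, 0], [0, 2, 4, 2, 0], [0, 1, 5, 1, 0]]
# ...while an "unfamiliar" image produces only noise that averages away.
unfamiliar = [[0, 1, 0, -1, 0], [1, 0, 1, 0, -1], [0, -1, 1, 1, 0]]

print(classifies_as_recognized(familiar))    # True
print(classifies_as_recognized(unfamiliar))  # False
```

The averaging step is the important part: a genuine recognition response shows up consistently on every presentation, while random activity tends to cancel out.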

This method is still new and has not yet withstood rigorous scientific scrutiny. Currently, proponents of the system argue that it is 95% accurate, but this claim has not been tested on a large scale. Furthermore, that still leaves 5% of decisions that are incorrect. As a result, even though neuroimaging scans have been used as evidence in U.S. courts from time to time, these scans have served some purpose other than lie detection. This case in India appears to be unique because a brain scan was used to prove that someone was lying and to incriminate her directly. Such lie-detecting scans are not currently admissible in U.S. courts, but they could become admissible at some point in the near future, since the false positive rate could conceivably decrease as technology improves. I should also point out that even though these devices are similar in purpose to polygraphs, polygraph machines are known to be unreliable and can be consciously deceived. Since visual recognition is a subconscious process, its reliability could, in theory, be much better. For the purposes of this security review, I am discussing the use of neuroimaging in criminal trials as it relates to security and privacy.
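The quoted 95% figure is worth unpacking with a little base-rate arithmetic. Treating “95% accurate” as both the true-positive and true-negative rate (an assumption on my part; the proponents’ claim does not specify), Bayes’ rule shows that how much a “recognition” result should convince us depends heavily on the other evidence in the case:

```python
def p_guilty_given_match(prior, sensitivity=0.95, specificity=0.95):
    """Bayes' rule: probability of guilt given a 'recognition' result,
    assuming (hypothetically) that 95% accuracy means both a 95%
    true-positive rate and a 95% true-negative rate."""
    true_pos = sensitivity * prior           # guilty and flagged
    false_pos = (1 - specificity) * (1 - prior)  # innocent but flagged
    return true_pos / (true_pos + false_pos)

# If other evidence already makes guilt a coin flip, a match is compelling:
print(round(p_guilty_given_match(0.5), 3))   # 0.95
# But if the defendant is one weak suspect among many (prior 1 in 100),
# the very same test result is far from conclusive:
print(round(p_guilty_given_match(0.01), 3))  # 0.161
```

This is exactly why the test result should never stand alone: the same 5% error rate that sounds small in isolation can dominate the outcome when the prior evidence is weak.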

Everyone is an indirect stakeholder associated with this technology, as it almost inherently forces us to refine the concept and boundaries of “mental privacy.” Up until recently, devices could not read people’s minds directly. They still cannot, really, except in specific cases where some kinds of brain signals can be uniquely identified. In most cases where this happens, however, the user voluntarily allows his or her own brain to be analyzed. Therefore, in some sense, the user gives away the right to privacy in relation to whatever thoughts or processes are being studied. Actually, even the woman in India voluntarily subjected herself to this test, and only later claimed that there was any wrongdoing. Perhaps she believed that refusing the test would be more incriminating than taking it and showing evidence of recognition. But what if people are ever forced to subject themselves to such tests? To what extent are thoughts private? It seems that defining all thoughts as private information is consistent with our modern definitions of privacy.

Nevertheless, if this technology turns out to work as described, it could be a “holy grail.” Our entire criminal system is built around the idea that it is impossible to know the internal state of the mind of the accused with a high degree of certainty. The defendant, however, knows whether or not he or she committed the crime. Brain imaging is not perfect and probably could never be perfect, but, on a surface level, could be an extremely powerful form of evidence. Being able to reliably enforce sanctions against members of society who commit crimes could improve the efficiency of the whole criminal system. In this sense, everyone is an indirect stakeholder in a second way; everyone could benefit from the additional security that this system could help provide.

Certainly, the defendants are direct stakeholders in relation to this technology. We consider people to be innocent until proven guilty, and this technology could be a strong way of suggesting guilt. However, the Fifth Amendment to the U.S. Constitution protects those on trial from being forced to testify against themselves; this amendment exists to protect people, although polygraph tests are admissible in some states. Being forced to submit to a lie-detecting brain scan could be considered self-incriminating. However, as the case in India showed, even when the defendant is merely offered such a neuroimaging test voluntarily, a refusal can itself suggest guilt. The criminal system is already walking a fine line in relation to the rights of the defendant, and the very existence of this technology puts a new spin on those rights. The judges, lawyers, and jury members are also direct stakeholders because they must be able to evaluate the results of neuroimaging tests correctly and in adherence to legal procedures.

One asset is the integrity of the information provided by the system. Regardless of the outcome of the test – recognition or no recognition – if the information is being used as evidence in a criminal trial, it is extremely important that the information is correct and has not been tampered with. A second asset is the privacy of the defendant, which I am extending slightly to include the asset of “personal determinism” or whatever exactly is protected by the right to avoid self-incrimination.

Therefore, one adversary is a malicious or uninformed test administrator who either conducts the test incorrectly or fakes a result somehow. A second adversary is a malicious judge or lawyer who interferes with the pictures presented to the defendant; if the pictures are chosen poorly or maliciously, it could cause the person taking the test to have the wrong reaction (for example, if the defendant’s favorite hat were Photoshopped onto a picture of a dead body). A third threat is malicious code written by the designers of the system that causes the system to fail or do something wrong.

One major weakness is the potential fallibility of the lie-detecting system, which threatens the integrity of the results. In addition to inaccuracies inherent in the signal detection, components of the system could fail; for example, electrical spikes or surges could flip bits, the programs could be incorrect, or the operating system could corrupt data. The system could have some kind of virus or be running malicious code. Even more frightening, someone could tamper with the system and influence the output.

A second weakness is the possibility for private information to be leaked through the output of the system. Assuming that the person’s recognition of objects related to the crime were allowed as admissible evidence in court, the defendant’s psychological responses to objects could give away other private data. Say, for example, that a man were on trial for murdering a woman with whom he was having an affair. The man could be innocent, and the test could prove his innocence, but his recognition of objects in the woman’s apartment might give away that he was cheating.

A defense against the first weakness is ensuring that the test is administered under very controlled conditions. For example, several trained technicians should be present in the room during the test, reducing the probability that someone is administering the test incorrectly or with malicious intent. Secondly, the electronic components of the system should be designed with safeguards and programming-level guarantees. Thirdly, the jury and judges need to be instructed about the possible failings of the system and the results of the test should never be used as the only incriminating evidence in a conviction.
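As an illustration of the kind of programming-level safeguard mentioned above, a recorded test result could be cryptographically signed so that later tampering is detectable. This is a minimal sketch using an HMAC; the key handling, field names, and case identifier are all hypothetical, and a real deployment would need proper key management and audit procedures.

```python
import hashlib
import hmac
import json

def sign_result(result: dict, key: bytes) -> str:
    """Produce an HMAC-SHA256 tag over a canonical encoding of the result,
    so that any later modification of the stored result is detectable."""
    payload = json.dumps(result, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_result(result: dict, key: bytes, tag: str) -> bool:
    """Constant-time check that the stored result still matches its tag."""
    return hmac.compare_digest(sign_result(result, key), tag)

# Hypothetical key held by the testing laboratory, not by the parties at trial.
key = b"lab-held-secret-key"
result = {"case": "2008-MH-1", "outcome": "recognition"}
tag = sign_result(result, key)

assert verify_result(result, key, tag)            # untouched record verifies
tampered = dict(result, outcome="no recognition")
assert not verify_result(tampered, key, tag)      # altered record is caught
```

A signature like this does not make the underlying measurement any more accurate, but it narrows the window for the “tampered output” threat: anyone changing a stored result would also need the laboratory’s key.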

The second weakness is even trickier to address. In some cases, it may be impossible to guarantee protection of private information. Certainly, the succession of images presented to the defendant is critical; they should be carefully chosen so that they only betray information about the crime and not about other experiences.

Thus, BEOS systems raise large risks to security and privacy, but there are also potential benefits associated with improved personal security (because it could become easier to implicate criminals) and more effective justice. The accuracy of the system needs to be scientifically tested and improved before it can be useful in a courtroom setting. The security safeguards I mentioned, and several others, would need to be implemented. And, honestly, I think it violates self-incrimination protections.

This system is just in its infancy, but the bigger picture of the emerging ability to build devices that can read people’s minds robustly is frightening. I have only discussed this from a courtroom perspective, but this technology could be used for more nefarious causes as well. If McCarthyism re-emerges in the future, it would be scary if people could be implicated simply for recognizing “un-American” objects. Hopefully, this will not happen; if it does, we will need to carefully examine the protections of “mental privacy” that we should provide.

Filed under: Security Reviews
