Research suggests forensic experts are biased in favor of whoever hires them
October 1, 2013
Forensic psychologists and psychiatrists are ethically bound to be impartial when performing evaluations or providing expert opinions in court. But new research suggests that courtroom experts’ evaluations may be influenced by whether their paycheck comes from the defense or the prosecution. The research is published in Psychological Science, a journal of the Association for Psychological Science.
The findings reveal that experts who believed they were working for prosecutors tended to rate sexually violent offenders as being at greater risk of re-offending than did experts who thought they were working for the defense.
“We were surprised by how easy it was to find this ‘allegiance effect,’” says psychological scientist Daniel Murrie of the University of Virginia. “The justice system relies often on expert witnesses, and most expert witnesses believe they perform their job objectively — these findings suggest this may not be the case.”
Murrie and co-author Marcus Boccaccini of Sam Houston State University have worked in forensic psychology for years, watching attorneys in the adversarial justice system retain forensic experts to gain an advantage in their cases.
“We became increasingly curious about whether forensic psychologists and psychiatrists could actually do what their ethical codes prescribed: handle each case objectively, regardless of which side retained them,” says Murrie.
Murrie and Boccaccini decided to conduct a “real world” experiment, providing 118 experienced forensic psychiatrists and psychologists from several states the opportunity to participate in a two-day workshop covering the psychological tests used to evaluate sexually violent predators. In exchange, the experts agreed to provide paid consultation to a state agency that was supposedly reviewing a large batch of sexually violent offender case files.
The experts returned weeks later to meet with an actual attorney and score risk assessment instruments for offenders as part of the paid consultation. Unbeknownst to them, each expert was given the same four files to review.
Even though the experts used the same well-known assessment instruments to evaluate the same offenders, the risk scores they assigned turned out to be significantly different depending on who they thought was paying them: Those who believed they were hired by the prosecution tended to assign higher risk scores, while those who believed they were hired by the defense assigned lower risk scores.
Murrie notes that most people in this line of work really do try to be objective, and not every expert in the study demonstrated biased scoring. But the findings suggest that some of the experts were swayed by the side that retained them.
“In short, even experts were vulnerable to the same biases as the rest of us, in ways that left them less objective than they thought,” says Murrie.
The researchers hope that the study will prompt experts in their field to take a hard look at how evaluators are trained and how they practice.
“Demonstrating that allegiance is a problem is the first step toward solving the problem,” Murrie concludes. “The justice system certainly needs the expertise these professionals can offer, but the system also needs to be able to trust that their input is truly objective.”
The abstract for this article can be found online.
Co-authors on this research include Lucy Guarnera from the University of Virginia and Katrina Rufino from Sam Houston State University.
This research was supported by the National Science Foundation Law & Social Science Program.