The U.S. Supreme Court’s recent decision in Williams v. Pennsylvania, handed down amid the presidential campaign’s heated rhetoric about judicial impartiality, adds to the Supreme Court’s growing jurisprudence on the due process requirements for judicial disqualification. The issue in the case—whether a justice on the Pennsylvania Supreme Court could properly adjudicate a death penalty case when he had previously been the prosecutor who authorized capital charges against the defendant—set the stage for a ruling that could have provided broad guidance on the due process parameters for judicial disqualification, especially in criminal cases. Yet the Court’s holding may end up having only limited impact. As others have already started to note, the test announced by the Court—“that under the Due Process Clause there is an impermissible risk of actual bias when a judge earlier had significant, personal involvement as a prosecutor in a crucial decision regarding the defendant’s case”—will be hard to satisfy and adds little to the guidance already available under existing ethical standards for judicial recusal in most states. In addition, my guess is that there are few cases in which a prosecutor-turned-judge will be asked to rule on a case in which he or she was previously involved, so this test is likely to apply directly to only a narrow band of future situations.
From another perspective, however, Williams is significant—at least for those of us with an interest in behavioral science—because the decision stands out as yet another example of how the Court continues to make proclamations about the nature of human psychology and decision-making without identifying the scientific source for its conclusions. For example, the decision notes (without authority) that “bias is easy to attribute to others and difficult to discern in oneself,” an observation that is supported by the extensive literature on what experts call naïve realism. Later in its opinion, the Williams Court elaborated on the difficulty of making accurate self-assessments of bias, noting that when a judge is asked to participate in a case in which he or she previously served as a prosecutor, there is “a risk that the judge ‘would be so psychologically wedded’ to his or her previous position as a prosecutor that the judge ‘would consciously or unconsciously avoid the appearance of having erred or changed position.’” Again, the Court cited no scientific authority for these observations, yet it easily could have done so by noting the ample research addressing the power of unconscious influences on judgment and decision-making (much of which is summarized by famed psychologist and Nobel Prize winner Daniel Kahneman in his well-known book, Thinking, Fast and Slow).
Nor is Williams the Court’s first foray into the frailties of human cognition. Seven years ago, for instance, in Caperton v. A. T. Massey Coal Co., the Court noted that the “Due Process Clause has been implemented by objective standards that do not require proof of actual [judicial] bias. In defining these standards the Court has asked whether, ‘under a realistic appraisal of psychological tendencies and human weakness,’ the interest ‘poses such a risk of actual bias or prejudgment that the practice must be forbidden if the guarantee of due process is to be adequately implemented.’” As in Williams, these pronouncements are (again) without citation to empirical support or scientific authority.
Why the Court has avoided providing a scientific basis for its broad pronouncements about how judges can be expected to respond to conflicting interests is curious, especially given the extensive research available to support the Court’s observations about human nature. Indeed, many of the Court’s conclusions in Williams and its progeny flow directly from the research in psychology and related disciplines, as the Ethics Bureau at Yale noted in its excellent amicus brief, which I signed onto, in the Williams case. Three aspects of research on human psychology seem particularly relevant and could serve to animate the Court’s jurisprudence in the area:
1. Bounded Ethicality. As I have noted elsewhere, the research on bounded ethicality focuses on the overconfidence bias, in which people tend to overestimate their positive—and minimize their negative—attributes. These include a number of traits that are relevant to considerations of conflicts of interest, including the desire to believe that one is competent, ethical, and deserving. This means that we all tend to believe that we are more capable of neutral and impartial evaluation than we really are. The “illusion of objectivity,” as it is often described, can cause us to underestimate the degree to which we will be influenced by conflicting interests. One pernicious aspect of this bias is that we are blind to it: the mechanism by which it operates lies outside conscious awareness—producing what has been dubbed the “bias blind spot.” Simply put, people think they are being objective, even when they are not.
2. Motivated Reasoning. Another psychological mechanism at play is motivated reasoning, a phenomenon that causes everyone to seek out, interpret, and remember information in a manner that tends to conform to preexisting wishes, wants, and desires. This results in an asymmetric reasoning process, where we tend to believe information that is consistent with our desired outcome, and undervalue information that is inconsistent with our goals and beliefs. The effect of this process is particularly pronounced when self-interest is at stake, meaning that we tend to reason toward a preferred conclusion when it is in our interest to do so, while discounting evidence to the contrary. As with overconfidence bias, motivated reasoning occurs without a trace of conscious awareness, leaving the decision-maker with the belief that she has been objective, notwithstanding evidence to the contrary. And, as with the overconfidence bias, motivated reasoning helps to produce errors in judgment when conflicts of interest are at stake, especially when self-interest is pitted against duties owed to others. Choices that seem based on an objective assessment of the information often will, in fact, be swayed by unconscious processes at work.
3. Cognitive Dissonance. Decades of research on cognitive dissonance theory indicate that people seek to reduce the tension between their beliefs and their behavior. After a decision has been reached and acted upon, and there is no more room to change one’s behavior, the mechanism for reducing the dissonance is to change one’s mind, so as to conform one’s beliefs to conduct that has already transpired. In other words, the power of rationalization takes hold as the decision-maker seeks out evidence that supports prior conduct. So, for example, when a decision-maker has concluded that no conflict existed, and acted upon that choice, reasoning based on post hoc rationalization is likely to result.
To be sure, there is much to laud in the Court’s willingness in Williams and prior decisions to acknowledge the psychological realities of human decision-making. Yet the absence of citation to the sound empirical basis for these conclusions makes the Court’s jurisprudence less compelling than it otherwise could be. Perhaps, in the future, as the Court grapples with the thorny questions of judicial disqualification, it will start to infuse its decisions with an empirical basis for its claims, fortifying its judgments about how judges (like everyone else) respond when conflicts of interest are present. There is certainly ample scientific evidence to draw upon.
Read a response to this blog by Professor Victor M. Hansen, who also gives his thoughts on Williams v. Pennsylvania, available here.