There is a prodigious literature on responsibility, most of which emphasizes the internal properties of persons constitutive of responsible agency. We can call this approach ‘internalism,’ in contrast to ‘externalism’ (see previous post). Internalism is reflected, for example, in debates about whether character or control is the proper locus of responsible agency. These debates tend to focus on embodied internal properties (reflective states, values, virtues), and sometimes a limited range of exogenous factors. The most salient examples of the latter are direct neural interventions, like nefarious neurosurgery, counterfactual brain implants, and brainwashing. While I don’t want to deny that these are interesting and constructive discussions, there’s a broader range of situational factors that we can explore—factors that operate on the agent’s responsibility-constitutive neural states less directly, but might have equally significant effects on those states.
One of the problems with focusing on direct neural interventions in discussions of responsibility is that these interventions are actually fairly useless, given the current state of cognitive neuroscience. The range of realistic moral neuro-interventions is (at this time) very limited. We cannot do personality-altering neurosurgery, for example. We also cannot very effectively enhance a person’s moral psychology with pharmaceuticals. In spite of some transhumanists’ optimistic claims about ‘moral neural enhancements’ (e.g., Paul Zak), existing examples of these interventions, like oxytocin nasal spray and transcranial magnetic stimulation, are not very effective at inducing moral traits (properly understood). Oxytocin boosters, for example, are ambivalent in their effects, producing both positive (trust) and negative (outgroup bias) results; are relatively weak at modifying the targeted personality structures; and are sensitive to a person’s background psychology, meaning that they sometimes produce trust but sometimes don’t. Some people are categorical distrusters, resistant to the effects of oxytocin. These considerations are meant to show that we cannot simply enhance ‘moral agency’ by intervening in neural states with pharmaceuticals and other interventions. What we can do, given sufficient political will and organization, is address social and situational factors that hinder the acquisition of responsible agency.
Some philosophers have begun to pursue this line of research (externalism), focusing on the moral landscape. Notably, Manuel Vargas discusses the ‘moral ecology,’ or set of moral considerations in a given culture that underpin responsibility (2013), and Susan Hurley writes about the importance of the ‘public ecology’ (2011). These approaches mark a methodological shift away from internalism, toward a more sociological and anthropological analysis. Hurley and Vargas are both interested in (i) how public policies and practices inform our collective uptake of responsibility, and (ii) how we can best intervene to improve these structures. They are both, as far as I can tell, consequentialists and pragmatists, interested in altering the moral landscape in positive ways. Their work addresses such ‘ecological factors’ as psychiatric discourse, advertising, and political campaigns, and how these things either scaffold or sabotage our responsible agency.
I wish to expand on this discourse by focusing on another aspect of the moral ecology: namely, epistemic injustice (EI). EI, as Miranda Fricker describes it (2007), refers to an epistemic climate in which pernicious social stereotypes and social schemes give rise to systemic discrimination against historically disadvantaged social groups—groups like women, African Americans, and the queer community. One of the vehicles for EI identified by Fricker is ‘testimonial injustice,’ which occurs when someone fails to give another person the credit or trust that he deserves. For example, if a police officer distrusts someone’s testimony because the person is African American (as often happens), he is enacting testimonial injustice (Fricker 2007: 1).
While Fricker does not describe the cognitive mechanisms through which EI operates, good candidates seem to be: negative implicit biases, which are relatively automatic and unconscious prejudices toward vulnerable social groups (e.g., implicit racial bias, implicit gender bias, implicit disability bias); negative explicit biases, which are conscious prejudices toward these social groups; stereotype threat, i.e., the tendency to conform to negative stereotypes about one’s social group, impairing performance on certain tasks, though not competency; and low self-regard, which is a negative (perhaps conscious) self-conception that similarly impairs task performance. I see these mechanisms, which flourish in a climate of EI such as ours, as responsibility-impairing, insofar as they impair rationality—the ability to grasp and appreciate another person’s epistemic and moral character, to give others the credit they deserve, to respond appropriately to other people’s moral testimony, etc. As I will argue, these cognitive distortions can interfere with our shared moral practice, or ‘moral conversation’ as McKenna calls it (2013), by preventing us from holding people responsible in a rational way—a way that responds to their epistemic and moral attributes—and taking responsibility for ourselves in a rational way.
When we discredit or disvalue others due to the cognitive effects of EI, we exhibit and entrench responsibility-deficits in our own moral psychology, and in the same stroke, we harm the target’s responsibility status, either directly—by limiting the person’s social role options—or indirectly—by lowering the quality of public knowledge. When we harm public discourse, we deprive ourselves of epistemic justification for our reactive attitudes, making our moral attitudes, at best, weakly justified. We can never, that is, hold someone responsible with any confidence.
I’ll begin with the first part of this claim: our responsibility deficits harm other people’s responsibility status.
Suppose that a hiring committee is seeking the most qualified candidate for an opening in the company, and they make their selection on the basis of implicit racial bias rather than merit, as per the famous Bertrand & Mullainathan study (2004). They select the best applicant out of the candidates with stereotypical Anglo-Saxon names, rather than the best applicant, full stop. The committee members in this scenario are acting irrationally by their own lights, since they are explicitly committed to hiring the best candidate (full stop), and have (as per the study) even advertised a commitment to employment equity. Since they are acting against their rational values and commitments, as well as on the basis of a relatively uncontrollable, automatic, and unconscious cognitive state (implicit bias), they are responsibility-impaired as members of the hiring committee. At the same time, by choosing an Anglo-Saxon candidate, they are preventing a more qualified candidate, from a socially disadvantaged group, from achieving her rational goals. If she cannot find equivalent work elsewhere, she is responsibility-impaired by virtue of not being able to attain a role responsibility that she finds valuable, and that society values.
(This is what happens when there is a ‘glass ceiling’, or social and material obstacles, barring certain social groups from obtaining jobs with certain role responsibilities; the person’s range of responsibilities is artificially circumscribed, and thus her responsibility-constitutive capacities are not as robust or expansive as they could be).
Now suppose that the hiring committee is explicitly biased against stereotyped social groups. Are they rationality-impaired? Yes, but not in their instrumental rationality (their means-end reasoning). In this case, they are impaired in their substantive rationality—they cannot (due to their cognitive architecture) act in a manner that serves their substantive interests, both as business people and as moral agents. In our society, companies benefit from hiring the best candidate for the job from a range of demographic backgrounds, and society benefits from demographic diversity within companies (combined with a positive diversity mindset and other positive epistemic conditions). Thus, the company, and the broader society, would be served by a more objective (less racist) hiring choice. Moreover, the hiring committee, as moral agents, would be better moral agents if they remediated their racist explicit beliefs and values; thus, they would be more morally responsible if less racist. For these reasons, explicit racism also impairs the agent’s responsibility.
Job applicants from stereotyped social groups may also be responsibility-impaired by EI, if they exhibit stereotype threat—as tends to happen in this type of climate. Supposing that job applicants for a certain job were subject to stereotype threat, they would be responsibility-impaired in the sense that they would be prevented from exhibiting their competency on relevant tasks, such as the interview, and thus they would be less likely to get the job. Manifestations of stereotype threat tend to undermine one’s ability to demonstrate one’s character profile in public contexts, and to control one’s performance on relevant tasks, so in these regards this cognitive state is responsibility-impairing on the traditional models. It can also be responsibility-impairing on my view if it prevents one from obtaining role responsibilities that one is capable of excelling in, only because of stereotype threat effects.
On the other hand, the hiring committee might underrate the applicant’s interview performance, if they harbour implicit biases toward the person’s social group, as often happens in climates of EI. (This is why applications are becoming increasingly anonymized, at least in responsible institutions that are committed to real employment equity). Bias against certain social groups can directly impair the responsibility prospects of those groups by barring them from valuable role responsibilities, and it can also indirectly harm their responsibility status by perpetuating harmful stereotypes—stereotypes that induce implicit bias, stereotype threat, and other cognitive states that typically manifest in responsibility-harming ways.
This discussion illustrates how EI tends to undermine responsible agency in members of different social groups in (characteristically) differential patterns. Namely, EI undermines the responsibility of those in positions of privilege by making them susceptible to negative implicit and explicit biases, and it undermines the responsibility of members of disadvantaged social groups by making them susceptible to stereotype threat and low self-regard (generally speaking). EI-based cognitive distortions also undermine our collective responsibility by tainting the public discourse, perpetuating pernicious stereotypes that reinforce EI. In a climate of EI, we cannot be confident that we are gauging people’s responsibility—particularly, that we are blaming and praising people—rationally, given that rationality-impairing cognitive distortions are pervasive and inescapable. So, we can only tentatively, or weakly, hold people responsible—at least until we repair the flaws in our epistemic climate. An attitude of moral uncertainty and epistemic humility is presently forced on everyone—particularly privileged individuals—by EI.