08/21/2016

Comments


Of possible interest is work in psychology on first- and second-order desires for autonomy per se, which looks at these desires across cultures (your manipulation condition) in which autonomy is seen as more or less valued, e.g.:

http://researchrepository.murdoch.edu.au/14704/1/Self-determination_Accepted_version_JCCP.pdf

In Self-Determination Theory, autonomy is a basic need, and failure to fulfill it leads to diminished well-being. Critics had argued precisely that autonomy in "collectivist" cultures might be realised by conforming to expectations where this was a second-order ("internalised") desire.

Hi Suzy,

Wouldn't a conclusion be that most normally socialized people are not autonomous, or at least lack a lot of autonomy?
I take it your position is that human agents are never (or almost never?) fully autonomous, but it seems to me that the truth condition in particular results in a very big loss of autonomy on this framework, even when there is no manipulation. Am I reading this wrong?

For example, it seems to me that most Christians, Muslims, Sikhs, Zoroastrians, Hindus, Buddhists, etc., very probably have the policy of only treating as reasons those considerations that are true, but that policy is violated in the vast majority of their religion-related activities. The more committed to and involved in their religion a person is (e.g., a priest or nun, an imam), the more autonomy they would lose for that reason. In some cultural environments, most people (i.e., not just religious leaders) would end up losing a lot of autonomy for that reason.

Granted, adherents to each of those religions would disagree with me on that issue, since each believes their own religion is true. But assuming one of those religions is true, then the others - to the extent they disagree, and it's a big extent in at least some cases - would have that problem (and so would I).

In most of those cases, there is no manipulation (unless perhaps one counts some of the people who made up stories a very long time ago as manipulators of present-day believers; however, the connection appears too thin, since the creators of the religions did not know how their creations would develop after centuries or millennia - or even that they would exist for so long - and didn't intend to manipulate people after such a long period), but the loss of autonomy due to violations of the policy of only counting true considerations as reasons remains.

A similar issue appears in the case of ideologies not usually called "religions", such as Marxism, libertarianism, different versions of progressivism, fascism, different versions of ideological racism, etc., to the extent that the ideologies in question are wrong (and at least all but one are mostly wrong, it seems to me).

The issue of trustworthiness also seems present in the case of some of the world's major religions at least, even if to a considerably lesser extent than in your hypothetical case.

Hi Angra,
That's a great point, and one that I've spent quite a bit of time struggling with, because you're quite right that religion poses a problem for my theory.

One of the things I'm committed to is that we're less autonomous to the extent that we're living a lie - if my whole life is built around an end that is other than what I take it to be, I think my autonomy is significantly damaged. There were some horrifying cases that came to light in the UK a few years ago, where undercover police officers entered relationships with women without telling them their true identity, then just disappeared when the cases were closed. Not knowing who your partner is would seem like a prime example of living a lie, and hence of being less autonomous. Insofar as the religiously devout take themselves to be in a relationship with a God, and build their lives around that relationship, then yes, I think that they're less autonomous if that God doesn't actually exist.

That said, I don't think the religiously devout and the cult member will necessarily have their autonomy affected to the same degree, even if we assume that in both cases their core beliefs are false.

One place to look for a difference between the religiously devout and the cult member is in the network of beliefs/values/etc that support their respective activities. While many of these beliefs/values etc will be derived from the agent's faith, they may not all be contingent on that faith - in other words, if the core belief was lost, I'm assuming that some of these values/beliefs would remain (for instance, valuing charitable works, or believing that certain acts were immoral). If the religiously-oriented activities the agent is undertaking cohere with beliefs/values that are self-sustaining in the absence of the core religious belief, then I think those activities are still relatively (if not perfectly) autonomous.

My hunch is that the cult members' network of beliefs/values/etc will typically be more dependent on a few core falsehoods than the religiously devout person's.

That might not be a strong enough distinction, though. And you're right that we could avoid all of this by leaning on a non-manipulation condition, and then showing that the cult-member is manipulated and the religiously devout agent isn't. Getting such a condition nailed down is notoriously tricky, though. If it relies on the intentions of the manipulator, I worry that it's not really tracking the problem - how am I any more in control of my life because someone has inadvertently tricked me than if they deliberately have? And if it doesn't rely on the intentions of the manipulator, it's hard to see how to differentiate it from normal socialization, and hence we're still trapped in the same problem.

Suzy,

To the extent I grasp the concept of autonomy/self-governance (though I admit I might not grasp it), I don't think that the intentions, or even the presence, of a manipulator influence the degree of control an agent has over her life beyond the way in which the manipulator interacts with the agent (so an accident that looks the same to the agent would have the same effect on autonomy as manipulation). So while such a condition would avoid this problem, I tend to agree that's not the way to go.
There is still the issue of religion and other ideologies (though one option is not to interpret it as a problem for your theory, but as a true conclusion of it).
I think you make a pretty good point that many beliefs or values that come from the agent's religion would remain even if the agent were to realize that the religion is false. However, I don't think that that avoids the issue as long as the beliefs in question are false, and as long as the agent has a policy of only taking true considerations as reasons.
For example, if the agent believes she has a moral obligation to pray facing Mecca five times a day, and also that she has a moral obligation to cover her head in the presence of unrelated men, maybe she'd lose the former belief but not the latter if she were to realize her religion is false. However, both beliefs are false, and so if she acts on them, it seems the problem of autonomy remains.
Upon further consideration, though, I guess it might be challenged that religious believers usually have the policy of treating only true considerations as reasons.

On that note, one way to construe the "true considerations" condition in your example is to include proper epistemic probabilistic assessments.
For example, if it's epistemically proper for me to reckon that P is probably true, my policy (at least, the one I would be inclined to endorse) leads me to treat "P is probably true" as a reason for acting (if I care about P, etc.), even if P turns out to be false. But "P is probably true" is true, when asserted from my epistemic situation.
If the policy you have in mind is like that and religious believers have that policy, then whether that is a problem for their autonomy seems to depend on whether their false religious beliefs are epistemically justified - though I think they usually aren't, but I guess someone might challenge that.

On the other hand, if the policy you have in mind is to only take true considerations as reasons regardless of an agent's epistemic situation, then after considering the matter, I don't seem to have that policy, and maybe one could challenge that most religious people have it. But then again, if most religious people don't have that policy, would a manipulated cultist likely have it?

Hi Angra,
I agree that the fact a belief would survive loss of faith doesn’t avoid the problem that it conflicts with a policy of only taking true considerations as reasons. I was trying to get at something slightly different with that point.

Take the two examples you’ve given: I believe I have an obligation to pray facing Mecca five times a day, and I believe I have an obligation to cover my head. For argument’s sake, let’s assume both of these are false. Both of these beliefs thus conflict with my policy of only taking true considerations as reasons. But they both also presumably cohere with a whole range of other beliefs/values/policies I have (perhaps I value modesty; perhaps I believe that ritual provides a shape to a life, and so on). How autonomous the respective false beliefs are is then a matter of weighing up the elements they conflict with against the elements they cohere with.

But we also need to look at how much weight to grant to the beliefs/values/policies that are upheld through my false belief (such as my valuing of modesty). My idea was that, if these beliefs/values/policies were themselves in conflict with my policy of only taking true considerations to be reasons, then upholding them has less of a positive effect on my autonomy. Asking whether these beliefs/values/policies would survive a loss of faith was a shorthand way of asking whether they are compatible with only taking true considerations to be reasons. In other words, if everything that supports a particular belief or action would crumble if my false belief was exposed, then that belief/action is less autonomous than another belief or action that is supported by values/beliefs etc that would survive exposure of my false belief.

I’ll have to think more about the probabilistic construal of policies. I’ve been drawn to a slightly different modification – rather than postulating a policy of treating ‘P is probably true’ as a reason for action, I’d been toying with the idea of policies of treating P as true that are restricted to certain kinds of activities. Consider Columbus’ exploration of the Americas – if the popular story is true, he spent his life convinced he was actually exploring the East Indies. If that’s right, then much of Columbus’ life was driven by a false belief, and hence not very autonomous. But this seems a strange way to think about practices like exploration, or experimentation: surely we (typically) explore or experiment fully aware of the likelihood that some of our beliefs are false. That’s just part of the nature of the practice. This may not be much help in the religion case (it seems wildly presumptuous to assume that many religious people take their religious practices to be akin to exploration or experimentation) but I think it does avoid some counterintuitive implications about the impact of false beliefs on the autonomy of actions.

Not sure how relevant this is, but... e.g. a cult member has some very strange beliefs while at the same time having an in-theory commitment to principles of rationality. So a conflict exists between their beliefs and their supposed values. Could we not say that the individual has a good amount of autonomy on some level but is just blameworthy for not doing better and for not taking their own "self-governing policy" more seriously?

Hi Greg,
I think you're quite right that the cult member is going to end up believing things that conflict with their values. I'd say that this is another way in which their autonomy is compromised (though I think that's compatible with them having a good amount of autonomy 'on some level', as you note - for instance, they could be autonomous in their non-cult-oriented day to day activities).

Whether their failure to live up to their policies and values is something they're blameworthy for - I'm not so sure. While I think we can be blameworthy despite not being particularly autonomous, there do seem to be excusing factors that might render an agent blameless for her own autonomy failings. I might fail to live up to one of my values because I've been deceived, for instance, or because I've been coerced. I think I'd be blameless in such cases.

I think you've put your finger on something interesting with the failure to take one's own self-governing policies seriously, though; this does seem to be something for which we can rightly be held accountable. I doubt that's always what's going on in cult cases, though. A manipulated agent could be making a good-faith effort to live up to her self-governing policies (and values) and just not have the resources to see that she's failing to do so.

