Blog Coordinator


03/29/2015

Comments


Joshua, I don't really have anything to comment other than I find this research absolutely fascinating and compelling, and making progress in a way that most scholarship does not.

As you may know, I think the free will problem is really the problem of constitutive luck. And that problem is really the question of "the true self" - and its boundaries. So I think that you've been on the right track at least since your contribution (with coauthors) to Kane's Handbook on The Bounds of the Self.

Joshua,

First off, just thinking to ask this research question is positively brilliant.

But I don't really get why you go in a "true self" direction. I mean, it's a plausible option. But so is, in my opinion, a "true morality" option - something along the lines of what Susan Wolf would say. Rather than detour through the internal world of the self, why not just look out upon the moral universe, so to speak? Might not that be a simpler explanation of what your subjects are doing?

As usual, you and your colleagues are doing fascinating research, Josh!

One quick big picture thought: It strikes me as a bit odd that people would "show a tendency to regard an agent's true self as morally good." At least in the States, roughly half the country thinks the other half is on the wrong moral track, perhaps even morally depraved. Consider how conservatives view liberals and vice versa.

Maybe this links up with a puzzle that has been guiding some of my research lately. It seems to me that most of us know right from wrong, generally speaking. Yet I, like many others, think other people have many false, often unjustified, moral beliefs. Naturally, when they act on them, I think they act immorally. My current leaning is toward the idea that many people really do know right from wrong, but this is limited in scope in predictable ways. We generally know how to treat people we interact with regularly, especially those in our in-group. But we're more likely to err in other contexts, especially regarding complicated topics that involve the disputed moral issues of our time.

To try to relate this back to your research project: perhaps this tendency (to regard an agent's true self as morally good) is there but isn't a very strong one? Do your data happen to provide any guidance on that?

Very interesting post and paper, Joshua, thanks. Is this an empirical ground for the Socratic observation that no one knowingly does evil? That is, does this show that people generally have some deep sense of that claim?

A quick thought without having read the paper: I'm sympathetic with Paul Torek's worry, but with a twist. Namely, it might be that people confuse what they themselves think is the true morality (or, more generally, is good) with their interpretation of other people's judgements of what the true morality (or what is good) is. So the idea is that if people consider X to be good, then they will think other people think X is good too, just because they think X is good themselves, independently of whether or not other people actually think so. (Perhaps there is some resemblance-oriented heuristic underlying the judgement.)

If that is right, then in the weakness of will case they will be unlikely to ascribe weakness of will, since they will attribute to the presumably weak-willed person the opinion that B is good - because they consider B good themselves. And in the valuing case, they will attribute to the person in the vignette the opinion that what they themselves consider good really is good. And this is so even though the vignette formulations indicate that the people in them disagree.

Joshua,

This is extremely interesting stuff. Thanks for sharing.

I have a question about how you (and your colleagues) conceive of the "true self." (My apologies if this is answered in detail in the paper, which I only skimmed.) In your post, you suggest that we should shift from thinking about notions such as weakness of will and valuing in terms of individuals' psychological states (e.g., desires, beliefs, intentions) and say, further, that "facts about the agents beliefs, intentions, etc. are only relevant insofar as they provide information about that agent's true self." How are you conceiving of the "true self?" What is it--a soul, a set of principles, something else? It seems that you are not thinking of it as constituted by psychological states (at least, of the familiar kinds). But I don't see any positive proposal for what it might be. I would be interested to hear more.

Also, I wonder what you think about this possibility. It seems to me that the following might be consistent with your data: respondents tend to attribute false self-conceptions to characters in the vignettes who take themselves to be morally bad. For example, even though the agent in the valuing case takes herself to be committed to racial bias, she is, in fact, committed to racial equality. It is just that she doesn't know this about herself; she has a false self-conception. And this false self-conception underwrites her decision about what to do. She makes the conscious decision to pursue A because she takes herself to be in favor of racial bias, when, in fact, her true commitments support pursuing B.

The reason I bring up this possible explanation is that it not only seems to conform to the data (but correct me if I'm wrong), it also is not at odds with giving psychological states pride of place in the explanation of what is driving these intuitions. In other words, if this is what's happening, then there does not seem to be reason to appeal to a "true self" that is not constituted by the agent's psychological states. Rather, if this proposal is correct, what these studies suggest is that respondents tend to discount morally bad individuals' self-understanding in a systematic way. When a person takes herself to have morally bad values (or whatever), we tend to take her to be incorrect. She doesn't have morally bad values; she is deluded about what her actual values are.

I wonder what you think about this.

So sorry to be late in getting back to you all! These are really fantastic suggestions, and I am looking forward to discussing them all further. (I'll spread my responses out into a number of separate comments.)

Kip,

Thanks so much for saying this. I really appreciate your kind words.

Paul,

Good question. I guess there are two different reasons why we favor an explanation that goes through this notion of a 'true self.'

The first is flat-footedly empirical. We ran a whole series of experiments to test the idea that these effects are driven by true self attributions, and the actual experimental results consistently suggest that they are.

The second is more conceptual or philosophical. If we only noticed an asymmetry for moral responsibility judgments, then I think it would be very reasonable to suggest that the whole asymmetry was driven directly by the moral considerations (with no detour through the self). But we find that same effect for judgments of weakness of will and for judgments of valuing. It seems pretty clear that these other judgments have something to do with the self, and in light of that, we were thinking it would make sense to revisit the moral responsibility asymmetry and consider the idea that it too might arise because of something about people's understanding of the self.

Josh,

First off, I should say that we all owe you a big debt for creating this whole field of research. Your paper on weakness of will was a real game-changer.

In answer to your question, our results show that people have a pretty overwhelming tendency to regard the true self as good, but it is important to distinguish between the claim that (a) people think that an agent's true self is good and (b) people think that the agent is good.

For example, in one recent study, Julian De Freitas looked at the judgments of highly misanthropic participants. These participants think that people are generally bad and are likely to do bad things... but all the same, they continue to say that people's true selves are good. (In other words, they think that your true self is good but that you will probably do something bad because you won't act on your true self.)

I wonder if this also helps to address your worry about the political divide. Conservatives might indeed think that liberals are bad, but at the same time, they might believe that there is some deeper voice within all liberals that is drawing toward a better (i.e., more conservative) way of life.

Alan, Olof and Ben,

These questions all get right to the heart of the issue. How exactly do people understand the true self? What sort of thing do they think it is, and why exactly do their true self attributions show this puzzling impact of moral judgment?

Broadly speaking, I suppose that one might consider two families of possibilities:

1. Perhaps people understand the true self as being something like a particular collection of mental states, much like the states we posit within scientific psychology, but they simply show a tendency to think that these psychological states draw people toward whatever is morally good.

For example, it might be that people's intuitive psychology includes the idea that people have emotions, intentions, beliefs... and then also a 'true self.' Then it might be that people don't necessarily think that your emotions, intentions and beliefs are morally good, but that they do think your true self is morally good.

2. It might be that this notion of a true self is in some way radically different from the sorts of notions we employ in scientific theories of the mind. Then, if we correctly understood what sort of notion we were dealing with here, it might be that we would easily be able to see why moral considerations played a distinctive role in people's application of that notion.

Perhaps the most natural implementation of this latter view would be to say that people's ordinary notion of the true self involves a kind of *essentialism*. If we were thinking, for example, about a novel, we might say that certain aspects of it really lay at the essence of the novel while others were inessential to it. Or if we were thinking about a band, we might say that some of their songs truly embody what the band is all about while others in some way betray the essence of the band. Clearly, this type of thinking is not an attempt to build up anything like a scientific theory; it seems to be something more directly normative. One might reasonably conjecture that our notion of a true self is simply the application of this more general essentialism to our thinking about the self.

Hi Joshua,

Thanks for your reply. The essentialism point is interesting. But I have to admit that I don't fully understand the first option you mention. Perhaps you can help me out.

In the first paragraph you say that, on this option, the true self is "something [like] a particular collection of mental states." But then in the second paragraph, you seem to be distinguishing between mental states and the 'true self.' Am I reading you wrong on this? Perhaps what you have in mind is that the true self, on this intuitive psychology, is constituted by mental states, just not the mental states we normally appeal to (e.g., beliefs, desires, intentions). Is this what you have in mind? And if so, what could these different kinds of mental states be?

Also, I was wondering what you thought of the possibility I mentioned in my comment. It doesn't seem to fit into either of the two options you present in your most recent comment. The possibility I had in mind was that people attribute a 'true self' to others that is morally good, and that this 'true self' is constituted by mental states of the familiar type, but people attribute a false self-conception of one's own mental states to those who take themselves to be morally bad. This would explain the data (it seems to me) but also not require a conception of the true self that takes us beyond our familiar ways of thinking about our psychologies.

Hi Ben,

Thanks so much for this follow-up question. I think it might be helpful here to consider the actual methods we used in our Experiment 1. So I'll quickly describe what we did, and then you can see whether it seems to address your worry.

In that experiment, we reran the original study about valuing but also asked people whether they agreed with three further statements:

1. [Feelings] “Jim was being drawn toward racial discrimination by his feelings.”
2. [Beliefs] “Jim was being drawn toward racial discrimination by his beliefs.”
3. [True Self] “Jim was being drawn toward racial discrimination by his true self—the person he really is deep down.”

The results showed that the original effect was successfully explained by our measure of judgments about the true self but *wasn't* explained by the measure of judgments about either feelings or beliefs. In other words, in the case where the agent does the wrong thing, people still recognize that he has a different belief than they do; it's just that they think that his true self is at odds with his beliefs.

So I was thinking that the best ways of explaining these results would be either (a) to say that people conceive of the true self as a collection of psychological states that is distinct from the agent's beliefs or (b) to say that people conceive of the true self as something other than just a collection of psychological states.

Thanks once again for your thoughts on this!

Joshua,

Thank you for your response. It is helpful. And thanks for sticking with me on these issues.

I think that the option (b) you suggest is really interesting. But I'm still not seeing how the option (a) that you suggest is supposed to work in the context of the overall thesis you present in your post.

Firstly, it seems to me that option (a) is consistent with several familiar accounts of the deep (or true) self. For example, hierarchies of desires or intentions (a la Frankfurt and Bratman, respectively) are distinct from beliefs. And so if one thought that these constituted the true self, then these results might not seem so surprising. In particular, they might not show, as you say in the post, that we need to rethink the familiar categories we operate with when we theorize about values, weakness of will, etc. Perhaps you could help me to see why they do show this. I think that would be very interesting.

Secondly, this option, as an explanation of the effect you found, may be ambiguous between (i) the true self as distinct from people's beliefs (or feelings, or etc.) and (ii) the true self as distinct from people's conception of their own beliefs (or feelings, or etc.). After all, it seems that we are sometimes incorrect about our own psychologies; we take ourselves to have certain beliefs, desires, etc. that we do not, in fact, have. So it might be the case that what people are doing in the cases you are interested in is distinguishing between people's *actual* or true psychologies and the psychologies they take themselves to have. We might put it this way: one's true self is distinct from one's beliefs about one's own psychology, and it is constituted by one's actual psychological states (with the exception of false beliefs about one's own psychology). So far, anyway, I can't tell whether this interpretation of the true self is at odds with your experimental results, or even your interpretation of the data.

Thanks again for taking the time to share your thoughts on these issues. I find this all very interesting.

Hi Ben,

Thanks for continuing this line of questions. I feel like we are really getting at something interesting. Just as you say, although our experiment asked about beliefs and feelings, it did not ask about second-order desires.

I know that you won't find this to be sufficient in itself, but the vignette we gave to participants specifically states that the agent did not have the relevant second-order desire. Here it is in full:

George lives in a culture in which most people are extremely racist. He
thinks that the basic viewpoint of people in this culture is more or less
correct. That is, he believes that he ought to be advancing the interests of
people of his own race at the expense of people of other races.

Nonetheless, George sometimes feels a certain pull in the opposite
direction. He often finds himself feeling guilty when he harms people of
other races. And sometimes he ends up acting on these feelings and doing
things that end up fostering racial equality.

George wishes he could change this aspect of himself. He wishes that he
could stop feeling the pull of racial equality and just act to advance the
interests of his own race.

In this case, participants say that George actually values racial equality. Then, although it is obviously an empirical question, my guess is that they would also say that he does not want to want racial equality. (But as I said, we did not ask that question, and it is possible that they would give a different response there.)

More generally, my guess is that this whole interest in second-order desires as a cue to the true self is a mistake. The problem is that when an agent wants to want something, we generally think that this thing is morally good. (Then, as the experiments show, our own moral judgments guide our intuitions about the true self.) However, in cases where our own moral judgments come apart from attributions of second-order desires, I think our true self attributions will be guided not by the second-order desires but by our own moral judgments.

Hi Joshua,

Thanks for this reply. It does seem that we are getting at something interesting. And it's clear to me that I should just go ahead and read your paper closely. So I'll do that soon.

Thanks for engaging with me on these issues. Your work on the influence of moral judgments on people's intuitions is quite fascinating.

Josh K:
Great point that we need to distinguish the agent from her true self. I wonder, though: that works well for actions, but it's less clear with beliefs maybe? Actions and intentions are commonly treated as possibly not arising from one's true self. But would people say something like "He thinks women have a moral right to abortion, but deep down he knows it's wrong"? Perhaps you and your collaborators already have evidence in favor of this. Maybe people think the true self tends to be good precisely because they think the true self tends to have knowledge of the good?

If so, epistemologists might be overlooking this! Accordingly, our moral epistemology would have to get more complicated. It might also have practical implications. If we treat people's true selves as not morally bankrupt, then maybe moral dialogue isn't pointless. By our own lights, we shouldn't just write opponents off as evil or insane (compare Haidt's work).

Hi Josh,
Thanks for this post: it is really interesting stuff. I love the way it brings together lots of seemingly different findings under a single elegant theory.

Here’s what I think is going on in the paper John Turri and I wrote on weakness of will (which I was glad to see you classify as ‘amazing’!). People think that weakness of will involves a failure to act on a commitment (i.e. a practical judgment or an intention), but that it can also involve a failure to *form* certain commitments. In particular, there are certain behaviors that are stereotypically associated with weakness of will, and people will tend to describe such behaviors as weak-willed regardless of whether there is any commitment to avoid the behaviour. That is, the absence of the kind of commitments we see as rational ones to make is taken as evidence that the person is weak-willed. This is a marked departure from the standard philosophical theories of weakness of will, all of which involve the failure to act on a practical commitment— the recent philosophical debate has been over the question of which commitment is more central. (Josh May’s paper with Richard Holton does a very nice job of making progress on this question.)

Now, one way of reading our results is to say that people think that the True Self actually has the relevant commitments. In other words, when we gave people vignettes that lacked the relevant commitments, participants just attributed those commitments anyway. I think that’s a plausible interpretation, and it would fit with your larger point that there is a basic assumption that the True Self is morally good. In the case of our paper, the more cautious thing to say might be that people assume that the True Self is rational or has rational ends— our cases largely avoided moral examples. But if anything, this might help your overall point: it’s not just that we assume that the True Self is morally good, but that we assume that it has a range of normatively good characteristics.

But: in our paper John and I specifically asked about this, in part because it seemed to us that the ‘attribute implicit commitments to the True Self’ interpretation was so plausible. And it turned out that the participants in our experiments didn’t seem to be attributing weakness of will because they were implicitly attributing rational commitments to the True Self. True Self attributions partially mediated weakness of will attributions, but only partially. The effect of stereotypical cases on weakness of will attributions was primarily direct. And when we directly asked participants whether a failure to ever form a commitment to refrain from stereotypically weak-willed behavior was, in fact, weak-willed, they overwhelmingly said yes.

This might actually support Josh May’s suggestion (above) that there’s a tendency, but it’s not very strong.

Hi Josh,

We actually do find that people are sometimes willing to regard an agent's beliefs as falling outside the true self. For example, in one study, participants were told about a man who was gay but who believed that homosexuality was morally wrong. Politically conservative participants tended to say that this belief was part of his true self. By contrast, politically liberal participants tended to say that his belief was not part of his true self.

Actually, I think that this sort of thing lies at the heart of the effect that you and Holton discovered for weakness of will attributions. Take a case in which an agent does something that she believes to be morally wrong. If people think that this belief is incorrect, they will think that it does not lie within her true self. Thus, when she goes against this belief, they will think that she is not really going against her true self, and they will therefore conclude that she is not showing weakness of will.

Obviously, you have a special expertise when it comes to this effect. So, what do you think?

Hi Mathieu,

So nice to be hearing from you on this thread. I very much hope that folks around here get a chance to read your recent paper on this topic.

In any case, it seems like it might be helpful to distinguish between two questions:

1. Do true self attributions explain everything that needs explaining about weakness of will attributions?

2. Do true self attributions explain this one specific effect: namely, the impact of moral judgment on weakness of will attributions?

In your paper, you find a surprising new effect on weakness of will attributions which, you suggest, is not adequately explained in terms of true self attributions. This leads you to a negative answer to question 1. But are you thinking it should also lead us to give a negative answer to question 2?

Hi Joshua,

That’s a nice question. I don’t think that our paper necessarily leads to a negative answer to 2. But I don’t see our paper as offering much in the way of *support* for 2, either. I think Sousa and Mauro’s excellent paper gives strong support for your ‘True Self as morally good’ theory, but I wouldn’t count our paper among those offering the theory strong support.

If I wanted to make the contrast between our view and yours stark, I’d put it like this: on your view, people assume that the True Self is morally good/rational, and so interpret immoral/irrational behaviour as weak-willed, because it violates the True Self’s commitments. On our view, by contrast, people think that failing to have moral/rational commitments is itself a form of weakness of will. (Does that mean that the folk are implicit voluntarists about values and judgments? How else could a failure to believe something involve a failure of the will? Who knows…) I’m not sure the contrast needs to be so stark, and I’ll have to ask John if he agrees with this suggestion. But I can at least see a way in which our views are genuinely competing explanations of at least some cases.

That’s not to say our paper offers no support at all, of course. There are two ways that it at least helps give a positive answer to 2. First, we did find that True Self attributions played some mediating role in weakness of will attributions— the effect of stereotypes was stronger and more direct, but the True Self played some role. Second, our paper pretty clearly shows that the standard philosophical accounts of weakness of will— as the violation of a consciously held practical commitment— do not come close to fully capturing the folk concept. So to the extent that this opens space for the True Self attribution view, our paper gives it indirect support.

Hi Josh, just saw your great WiPhi video on this topic (see below for link). It reminded me of a question that you probably answer in the paper (but I can't remember): how do y'all test to make sure we are not making our judgments about true self based on what we believe to be *our own true self*, rather than what we judge to be morally good or valuable? That is, in cases where others have conflicting attitudes, perhaps we're saying that their true self is more likely to be represented by the attitude we most identify with our own true self. Now, because we typically think our own true self is good and we typically think our own values (or true self cares) are, well, valuable, it may be hard to test for this interpretation. We just have to find enough people to test who say their true self is represented by something they also say is not morally good--then we ask them to judge the true self of another person who has conflicting attitudes about that something. I'd predict that if we found a bunch of smokers or over-eaters or meat-eaters who are willing to say that their smoking, over-eating, or meat-eating is part of their 'true self' but also say that they think those activities are not valuable or even morally wrong, then we'd also find they would say that these activities are also part of the true self of another person who says they think these activities are wrong but yet does them. I feel like there are better ways to test this hypothesis--maybe you can think of some. (I'll look back at my old response to you and Roedder to see if I tried anything like that.)

https://www.khanacademy.org/partner-content/wi-phi/metaphys-epistemology/v/the-true-self

Hi Mathieu,

This is a super interesting issue you are bringing up, and definitely one that would be worthy of further study. I actually hadn't thought of the idea that the hypothesis from your paper could serve as an alternative explanation for these data, but it does seem like a very intriguing possibility to investigate.

There's just one thing I would like to understand better, though. The exciting result from your paper is that people attribute weakness of will in *more* cases than one might originally have expected. Specifically, they attribute weakness of will even in cases where the agent has no prior commitment. For this reason, you end up with this new idea that the absence of a commitment can itself be a case of weakness of will.

But the phenomenon we are trying to understand goes in the opposite direction. It is a case in which people attribute weakness of will in *fewer* cases than one would expect. The finding is that even when the agent does form a commitment to do something morally bad, people don't think the agent shows weakness of will when she fails to live up to this commitment. So is there some way that the hypothesis you develop might help to explain this pattern of results?

Eddy,

Thanks for this very helpful comment! We had considered a less sophisticated version of the hypothesis you propose, and we ran some experiments to rule it out, but my sense is that the studies we conducted wouldn't help to address the more complex idea you propose here.

Basically, we wanted to check to see whether participants just show an across-the-board tendency to think that, deep down, other people are just like them. So we tried asking about things where participants would be less likely to make a strong value judgment. For example, we asked participants whether they preferred living in the city vs. in the country. (The assumption was that people who prefer living in the city do not think it is morally wrong to prefer living in the country.) Sure enough, in those kinds of cases, we do not find that participants think that other people's true selves line up with their own preferences.

But you could obviously respond that these participants might not think that their preference for living in the city is part of their own true self. This response would actually strike me as a very powerful one, so I don't think that our study helps to properly address your hypothesis.

What we really need, then, are cases where participants attribute something to their own true selves but do not think that the opposite mental state would be in any way bad. Though we haven't checked this, the theory predicts that there should be no effect in such cases.

(To take just one example, if a participant is gay and feels that homosexuality lies within his or her true self, the theory predicts that he or she *should not* be drawn to attribute homosexuality to everyone else's true self as well.)

Hi Josh, what about the examples I gave in my first comment, where people have attitudes they take to be part of their deep self but think are not valuable or morally good? That seems the cleanest way to test between your theory that (unless we are convinced that someone is evil--an important qualification) we tend to think people's deep selves are morally good and my hypothesis (that I'm agnostic about!) that we tend to think people's deep selves are like our own (at least in ambiguous cases like the ones you test). I'd be surprised if a gay participant would not attribute a gay deep self to the agents in the cases you present where their beliefs and emotions conflict.

I found this case in an unpresented bit of Thomas's and my response to you and Roedder, which sounds a little like the cases you tried regarding city vs. country:

Gary lives in a culture in which many people enjoy watching their favorite sports teams play games and they spend a lot of time doing so. Gary is one of these people. He spends a lot of time watching his favorite teams play games and he believes this is a good way to spend his time.

Nonetheless, sometimes Gary feels a certain pull in the opposite direction. He finds himself feeling guilty when he spends time watching games. And sometimes he ends up acting on these feelings and doing something else during the games.

Gary wishes he could change this aspect of himself. He wishes he could stop feeling the pull to do other things and just continue watching his teams play games.
Despite his conscious beliefs, Gary actually does not value watching his teams play games.
To what extent do you value watching your sports teams play games? [or ask about whether they think it is moral or immoral to spend time doing so?]
And then ask questions about whether people think watching their team is part of their deep self.

And then run the contrast case where Gary the intellectual thinks watching games is a waste of time but finds himself wanting to watch them.

Hi Eddy,

These are great suggestions, which should definitely form the basis for some follow-up work. I've been thinking a little bit more about what our theory predicts in these cases, and I wanted to see whether this sounded right to you.

Consider two people both of whom have no desire at all to watch sports. Now suppose that the first believes that it can be really valuable for other people to watch sports (he just has no desire to do so himself), while the other thinks that it is actually deeply wrong for other people to watch sports.

Assume that both of these people are completely convinced that their own true selves contain no desire to watch sports. Still, our theory predicts that the two people should have different reactions to the vignette about Gary that you just presented. If that result actually does come out, it would provide evidence that there is some effect of value judgment that is not going through first-person true self attributions.

However, all of that is completely compatible with the idea that there could also be a separate effect that did go through first-person assessments. Thus, suppose that I have some desires that I think of as deeply wrong but that I still regard as part of my true self. The hypothesis you were considering predicts that I should then be especially likely to attribute these desires to other people's true selves as well. Although we don't have any data on the topic, I actually think that this prediction would come out. If so, the results would not provide evidence against our theory per se, but they would indicate that there is something further going on, which could be beautifully explained by the hypothesis you propose above.

Hi Josh, yes, there is nothing inconsistent about our different hypotheses--and as suggested above, they overlap in most cases, since I'm sure there's a tendency for people to think of their own deep self attitudes as valuable and good. But this makes me want to test the examples mentioned above. It'd be really interesting to explore how people regard their own motivations that they regard as bad or problematic (like smoking, over-eating, certain sexual desires, watching TV, etc.). I bet some people would describe them as part of their deep self (they motivate so much of my behavior, they don't go away, they make me feel good, etc.), some would adamantly reject them as part of their deep self (that's not really me, I'm trying not to act on them, I feel like they control me), and some would be deeply ambivalent about them. Figuring out how people describe these attitudes would be interesting--and then figuring out how they ascribe them to others would be interesting. I also see some complicated connections to the issue we've discussed of whether some of these attributions are related to people thinking the essence (or deep self) of a person (or artefact or living organism) is its proper functioning or teleology.

