In this post I argue that one's theory of morality should
fundamentally motivate one's behaviour. If an individual espouses that certain
actions are morally wrong, they ought to act on what they say. In such cases,
I think hypocrisy is unacceptable for both moral and epistemic reasons.
Let us define an act
of hypocrisy as a situation in which an individual (or group) makes a
normative behavioural claim whose scope includes themselves, and
then acts contrary to that claim. A normative behavioural claim is a claim about what the right behaviour is according to some (perhaps implied) criterion.
As an example, it is an act of hypocrisy if
Adam states that 'everyone should stop smoking' but does not stop smoking himself. This
is because:
1) Adam belongs to the category 'everyone'.
2) Adam is making a normative behavioural claim.
3) Adam is not following his normative behavioural claim.
With this analysis we seem to have provided three conditions
that are jointly sufficient. Whilst the second and third conditions seem
obviously necessary, I will now argue for the necessity of the first.
It is not hypocritical if a parent tells a child that
‘children need to go to sleep at 9pm’, and then they themselves go to sleep at
10pm. This is because the parent is not making a claim that they themselves
must fall under. They are implicitly suggesting that different standards must
be applied. Thus, for them not to follow the child’s standard is not an act of
hypocrisy. Now, scope is not always semantically apparent. For example,
suppose I claim that 'You should donate to charity'. Whilst the
literal scope applies only to the individual referred to by 'you', one can
interpret this claim as applying to any individual, including myself. As a
result, it can be argued that it is hypocritical for me to ask others to
donate whilst I do not donate myself. If one is unsure about scope, it is always
worth asking, for normative scope is not something that is easily hidden.
Now I use normativity to describe that which is right or
good. Normative behavioural claims are claims about how we should behave or
what is the correct way to behave. It is natural, I think, for us to ask 'right
based on what?'. This is both a metaphysical point (what grounds the rightness)
and a structural one, in that it points us to some kind of measure. For
example, with the behavioural claim not to smoke, the measure might be
health. When studying mathematics, I might be told 'you should use this sort of
notation'; here the measure seems to be something to do with clarity. The
measure is fundamentally important, but I do not think it matters when
deeming whether something is hypocritical. The measure only
becomes relevant when determining whether a normative behavioural claim is
convincing, as it provides us with a means to assess it.
An interesting question: is hypocrisy itself bad? How should
we think about the normative behavioural claim ‘one should not act
hypocritically’? First we note that the use of ‘one’ indicates that we are
making a broader claim that presumably encompasses all individuals. Now the
measure here is not apparent: are we epistemically motivated (accuracy
maximising), morally motivated (happiness, rights, etc.), or even pedagogically motivated
(maximising learning)? Furthermore, there are many different acts of hypocrisy,
each with its own normative behavioural claim and thus each with its own
measure. So what seems like a rather simple sentence is actually very
complicated: it covers a wide range of individuals, and it applies a context-dependent,
not immediately apparent measure to a whole class of claims that
themselves have differing measures.
Now for my thesis. I will argue in favour of the following
position: 'Under the moral and epistemic measures, one should not commit acts of
hypocrisy when dealing with morally measured behavioural claims'. That is, it
is morally and epistemically wrong to be hypocritical about one’s moral
beliefs. Or rather, if one makes a moral claim, they ought to act upon such
claims, for both epistemic and moral reasons.
Now there are two separate issues here, the moral and the
epistemic, and I will have to tackle them separately. The reason I
discuss both is that I want to reach the conclusion that following one's morality
is not only the morally right thing to do, but also the logical and
rational thing to do. For this argument to apply, I will assume that both
morality and epistemology can somehow be measured by a kind of utility
function, and then argue in favour of some kind of maximisation principle.
We begin with the epistemic measure. A common view in formal
epistemology is to see epistemic good in terms of accuracy maximisation of
one’s credence function. We take it that there are a range of propositions that
one might be interested in. Propositions are either true or false, and we
assume that their truth or falsity is mind-independent. Our
personal credence function assigns to each proposition a value reflecting
our level of confidence that the proposition is true. We can
then apply a kind of distance measure to our credence function to gauge its
accuracy, and thus its epistemic goodness: the closer one is to the truth, the better.
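As a concrete illustration – not something the argument depends on, but the standard choice in the accuracy-first literature – one such measure is the Brier score. If c is a credence function defined over propositions X_1, ..., X_n, and w is a world in which each proposition is either true (v_w(X_i) = 1) or false (v_w(X_i) = 0), then the inaccuracy of c at w can be taken to be

$$ I(c, w) = \sum_{i=1}^{n} \bigl( v_w(X_i) - c(X_i) \bigr)^2 $$

The lower this score, the closer the credence function sits to the truth at w, and so the epistemically better it is under the accuracy measure.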
Now the question here is how we are to establish whether a
given individual is committing an epistemic bad or is epistemically problematic.
We don’t expect agents to have perfect accuracy largely because accuracy seems
to depend on evidence. It also seems hard to establish a numerical threshold
for being irrational. What we can do, however, is identify a number of norms
that, if violated, mark an agent as epistemically problematic. We
can then link these norms to our argument about hypocrisy: if a hypocritical
agent necessarily violates certain norms, then the act of hypocrisy is
epistemically problematic.
A standard epistemic or credal norm is that an agent's
credence function should be probabilistic; that is, it should satisfy the probability
axioms. One can give an accuracy-first argument to show that an agent
striving to maximise accuracy will always have a probabilistic credence function,
because non-probabilistic credence functions are always accuracy-dominated
by some other credence function.
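To make the dominance claim concrete, here is a small worked example of a standard kind, using the Brier score above as the inaccuracy measure. Suppose an agent assigns credence 0.6 to a proposition p and 0.6 to its negation, violating the probability axioms (the two credences sum to more than 1). Whichever way the world turns out, this agent's inaccuracy is

$$ (1 - 0.6)^2 + (0 - 0.6)^2 = 0.52 $$

whereas the probabilistic credences c(p) = c(\neg p) = 0.5 score (1 - 0.5)^2 + (0 - 0.5)^2 = 0.5 in every world. The probabilistic credence function is therefore strictly more accurate however the world turns out, and the non-probabilistic one is accuracy-dominated.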
The norm I wish to focus on, however, is a norm of consistency: rational agents will not hold inconsistent sets
of beliefs.
The assumption here is that agents can choose what they
believe. I also assume that rational agents can determine whether a set of
beliefs is logically consistent. I'm not asking for perfectly ideal agents
here, for this sort of consistency checking should be attainable for most individuals.
Certainly, I think that an agent who continues to believe both p and not-p even after
being told of the inconsistency can be regarded as violating this
consistency norm and behaving epistemically poorly.
With this argument out of the way, what I therefore intend
to establish is that epistemically rational agents cannot be hypocritical about
their moral claims. Essentially, I need to
show that there is something fundamentally contradictory in believing that
one should morally act in a certain way while not feeling compelled to act in
that way.
Now I want to suggest that the moral case here is something
special because of how it interacts with one’s beliefs and motivations. To see
this, let us first consider the following normative behavioural claim that
instead uses the health measure: "You should not smoke". Suppose a smoker tells
us this, thus behaving hypocritically. They might, however, respond that they
do not feel compelled to behave accordingly: the fact that smoking
affects their health does not, by itself,
motivate them. That is, they can accept that smoking
is bad from a health perspective, but perhaps they place no value on health and
thus don't feel compelled to change their behaviour. Whilst somewhat odd, it
doesn’t seem like there is anything epistemically bad about this – their
position is consistent. Morally speaking, it might even be a good practice –
they want to encourage others to have healthy lifestyles since they recognise
other individuals value health, even if they themselves do not.
The moral case, I think, is different, and the reason is that I take morality
to be intrinsically motivating. That is, one does not need a reason to do moral
things aside from the fact that they are good, and one does not need a reason
to avoid immoral things aside from the fact that they are bad. If someone claims that
something is moral, then they have inherent and sufficient reason to feel
compelled to do it. Contrast this with health: I can question the value of
good health. Perhaps I don't care whether I live. It seems I can question why I
should behave in accordance with what is healthy.
Yet when dealing with morality, the fact of morality is enough.
In this sense, to think that something is moral can be to
think that it is intrinsically motivating. If you think something is a
moral norm, then you also think that you have an intrinsic motivation to fulfil
it.
Let's look at an example. Suppose that I think eating meat is morally wrong – I'm convinced by an environmental or pain-based argument and think I shouldn't eat meat. This is a common situation, and many people in it nevertheless choose to continue eating meat. Some might respond that the pleasure they get from eating meat outweighs the harm to the environment or to animals. That is a coherent position, although unconvincing in my view. Suppose, however, that when pressed they admit that this pleasure argument fails to outweigh the reasons not to eat meat – that the moral argument overrides all others. The situation now seems rather problematic, and not only for moral reasons. An individual has agreed that they should behave in a certain way for moral reasons, and that there are no arguments that outweigh this requirement, yet they continue not to do so.
It is strange because the individual is perfectly free to think that their pleasure does outweigh the negatives, yet they instead adopt the view that eating meat is wrong and then fail to behave in accordance with it. I've argued that morality provides an intrinsic justification to act. In this situation, the individual has accepted that this is an overriding justification to behave in a certain way, yet they still fail to do so. It seems clear that there is something epistemically problematic here.
The key premise in this argument I think is the claim that morality is intrinsically motivating. This premise is really needed in order to tie together morality and belief. I think I need to develop the justification for this premise and see whether there are any other ways to tie morality and belief.
I really do like the overall argument, though, because to me failing epistemically is in some ways much worse than failing morally. We can all picture the evil genius who seems perfectly rational yet ignores the demand to be moral. There is something satisfying about an argument that can slight them as irrational.