Whilst this might seem like an odd title, I think it represents an interesting and non-trivial issue. One might be aware that a similar question is often asked of utilitarianism: can one be a utilitarian without applying a utilitarian calculus in all of one's actions? Or, alternatively, is it possible to maximise the utility of one's actions precisely by not applying a utility-maximising method when making decisions?
I make this comparison because there appears to be some sort of link between acting rationally and acting to maximise utility. Now, we ought to recognise that utilitarianism often aims for a degree of agent neutrality, where the utility of each agent is valued equally. I think it would be a mistake to say the same about rationality. In fact, it seems reasonable to think that rationality might even lead us to an egoistic view of utility, where we ought to prioritise what benefits us over what benefits others.
Philosophically, rationality is thought of as acting in relation to reason, that is, acting in accordance with certain facts about reality. For now let us ignore the issue of whether such facts can exist; assume they can. Presumably we typically act in ways that benefit ourselves, and to act with reason, we might think, is to apply a degree of logical thought and to use evidence to act in the way that best serves our aims. Rationality is often centred on logical thought, and there seems to be a specific focus on weighing different factors in different ways. Suppose that there exists a set of actions $A = \{A_1, A_2, \dots\}$, each with its own induced reasons. We might think that a rational agent is one who can effectively assess which action to take according to the reasons attributed to each. Now, it appears as if rationality is flexible in relation to goals. That is, while we might criticise a person for having a given goal, it appears we are still able to say that they have acted rationally in relation to that goal.
For example, suppose that we are hungry. Let $A$ be {find food, listen to music, go to sleep}. We might think a rational agent is one who can look at the reasons for each and conclude that finding food is the best action. Further, we have some inclination that rationality is linked with intelligence: a more intelligent agent should be more likely to pick out the optimal action in a given set. It seems possible to claim that one person is 'more rational' than another, even though 'rational' can also be used in its own right. In this sense it appears we can think of rationality both in terms of a minimal standard as well as a scale.
Suppose we adopt this type of definition of rationality in terms of actions and reasons. Each agent $X$ has a certain goal (or goals) $X_G$ and a set of possible actions $A_X$. Each action $A^X_1, A^X_2, \dots$ has a number of reasons attached to it. Each agent is endowed with a decision function $v_X$, which takes a set of actions and their induced reasons and outputs a specific action. A rational agent is one whose decision function is effective in attaining $X_G$ given the available actions and reasons. A more rational agent is one whose decision function is more effective.
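To make this concrete, here is a minimal Python sketch of the model. The class names, the numeric goal_benefit score, and the scoring rule are all my own illustrative assumptions rather than part of the definition above; the point is only that the decision function $v_X$ is a rule mapping the available actions and their attached reasons to a single chosen action.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    reasons: list[str] = field(default_factory=list)  # reasons attached to this action
    goal_benefit: float = 0.0  # how well the action serves the goal (a stand-in for 'effectiveness')

@dataclass
class Agent:
    name: str
    goal: str
    actions: list[Action]

    def decide(self) -> Action:
        """The decision function v_X: map actions and their reasons to one chosen action."""
        return max(self.actions, key=lambda a: a.goal_benefit)

# The hungry agent from the earlier example.
hungry = Agent(
    name="X",
    goal="satisfy hunger",
    actions=[
        Action("find food", reasons=["I am hungry", "food removes hunger"], goal_benefit=1.0),
        Action("listen to music", reasons=["music is pleasant"], goal_benefit=0.1),
        Action("go to sleep", reasons=["sleep postpones hunger"], goal_benefit=0.3),
    ],
)

print(hungry.decide().name)  # -> find food
```

On this sketch, being 'more rational' would just mean having a decide rule that more reliably picks the action that actually serves the goal, which matches the idea of rationality as a scale as well as a minimal standard.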
Time to revisit our question: is it rational for an agent to think rationally? Note that this is a meta-level claim, where the agent must decide on a general strategy for how they make decisions. Such strategies could be 'act nice', 'act so as to maximise overall utility', or 'act to benefit another'. Note that these strategies are different from the agent's individual actions, although we might think of a rational strategy as one that best aligns with the person's goal. Now, it might at first glance appear that the answer is obviously yes: by adopting a strategy that tells you, in each individual case, to optimise your action in relation to a goal, it seems this would lead to the best overall outcome for that goal.
However, note that there are cases where there appears to be a certain value in taking actions that one might traditionally think irrational. As a very basic example we might consider the prisoner's dilemma, where it might be argued that rational decision making leads to a suboptimal outcome. Note, however, that this case can be resolved by moving to a superrational position where each agent knows that the others are rational and will reason as it does. In our traditional case, however (and I think in real life), this seems to be an unfair assumption. Does this mean we ought to assume that other agents are irrational and act accordingly?
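To see the basic point, here is a small Python sketch of a standard prisoner's dilemma. The payoff numbers are the usual textbook prison sentences, chosen purely for illustration: each prisoner's individually rational best reply is to defect whatever the other does, yet the resulting outcome is worse for both than mutual cooperation.

```python
# Payoff matrix for a standard prisoner's dilemma (years in prison, so lower is better).
# Entries are (row player's sentence, column player's sentence).
PAYOFFS = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}

def best_reply(opponent_move: str) -> str:
    """The individually 'rational' choice: minimise your own sentence given the other's move."""
    return min(("cooperate", "defect"), key=lambda my: PAYOFFS[(my, opponent_move)][0])

# Defecting is the best reply whatever the other prisoner does...
assert best_reply("cooperate") == "defect"
assert best_reply("defect") == "defect"

# ...yet mutual defection is worse for both than mutual cooperation.
print(PAYOFFS[("defect", "defect")], "vs", PAYOFFS[("cooperate", "cooperate")])  # (2, 2) vs (1, 1)
```

A superrational pair, roughly, notice the symmetry of their situation, expect to make the same choice, and so compare only mutual cooperation against mutual defection, which singles out cooperation.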
There are other cases where our rational evaluation seems ineffective. There is something cold about the notion of rationality, and it seems to struggle when dealing with issues in which emotions are involved.
I will continue this line of thought in a later post.