==Disputes about the concept of rationality==
There are many disputes about the essential characteristics of rationality. It is often understood in [[Relation (philosophy)|relational]] terms: something, like a belief or an intention, is rational because of how it is related to something else.<ref name="Knauff2021b"/><ref name="Moser2006"/> But there are disagreements as to what it has to be related to and in what way. For reason-based accounts, the relation to a reason that [[Justification (epistemology)|justifies]] or explains the rational state is central. For coherence-based accounts, the relation of coherence between mental states matters. There is a lively discussion in the contemporary literature on whether reason-based accounts or coherence-based accounts are superior.<ref name="Heinzelmann2022"/><ref name="Lord2018-1"/> Some theorists also try to understand rationality in relation to the goals it tries to realize.<ref name="Moser2006"/><ref name="Pinker2022"/> Other disputes in this field concern whether rationality depends only on the agent's [[mind]] or also on external factors, whether rationality requires a review of all one's beliefs from scratch, and whether we should always be rational.<ref name="Knauff2021b"/><ref name="Moser2006"/><ref name="Harman2013"/>

=== Based on reason-responsiveness ===
A common idea of many theories of rationality is that it can be defined in terms of reasons. In this view, to be rational means to respond correctly to reasons.<ref name="Broome2021"/><ref name="Moser2006"/><ref name="Heinzelmann2022"/> For example, the fact that a food is healthy is a reason to eat it. So this reason makes it rational for the agent to eat the food.<ref name="Heinzelmann2022"/> An important aspect of this interpretation is that it is not sufficient to merely act accidentally in accordance with reasons. Instead, ''responding'' to reasons implies that one acts [[Intention|intentionally]] because of these reasons.<ref name="Broome2021"/>

Some theorists understand reasons as external facts. This view has been criticized based on the claim that, in order to respond to reasons, people have to be aware of them, i.e. that they have some form of epistemic access to them.<ref name="Heinzelmann2022"/><ref name="Lord2018-1"/> But lacking this access is not automatically irrational. In one example by [[John Broome (philosopher)|John Broome]], the agent eats a fish contaminated with [[salmonella]], which is a strong reason against eating the fish. But since the agent could not have known this fact, eating the fish is rational for them.<ref name="Broome2007"/><ref name="Kiesewetter2017"/>

Because of such problems, many theorists have opted for an internalist version of this account. This means that the agent does not need to respond to reasons in general, but only to reasons they have or possess.<ref name="Broome2021"/><ref name="Heinzelmann2022"/><ref name="Lord2018-1"/><ref name="Lord2018-3"/> The success of such approaches depends a lot on what it means to have a reason, and there are various disagreements on this issue.<ref name="Mele2004a"/><ref name="Heinzelmann2022"/> A common approach is to hold that this access is given through the possession of [[evidence]] in the form of cognitive [[mental state]]s, like [[perception]]s and [[knowledge]]. A similar version states that "rationality consists in responding correctly to beliefs about reasons". So it is rational to bring an umbrella if the agent has strong evidence that it is going to rain.
But without this evidence, it would be rational to leave the umbrella at home, even if, unbeknownst to the agent, it is going to rain.<ref name="Broome2021"/><ref name="Lord2018-3"/> These versions avoid the previous objection since rationality no longer requires the agent to respond to external factors of which they could not have been aware.<ref name="Broome2021"/>

A problem faced by all forms of reason-responsiveness theories is that there are usually many relevant reasons, some of which may conflict with each other. So while salmonella contamination is a reason against eating the fish, its good taste and the desire not to offend the host are reasons in favor of eating it. This problem is usually approached by weighing all the different reasons. This way, one does not respond directly to each reason individually but instead to their [[weighted sum]]. Cases of conflict are thus solved since one side usually outweighs the other. So despite the reasons cited in favor of eating the fish, the balance of reasons stands against it, since avoiding a salmonella infection is a much weightier reason than the other reasons cited.<ref name="Broome2007"/><ref name="Kiesewetter2017"/> This can be expressed by stating that rational agents pick the option favored by the balance of reasons.<ref name="Mele2004a"/><ref name="McClennen2004"/>

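The weighing can be pictured with a simple sum. The following numbers are purely illustrative and are not taken from the cited sources; they only assume that the reason against eating the contaminated fish is much weightier than the reasons in favor of it:

<math display="block">\underbrace{(+1)}_{\text{good taste}} + \underbrace{(+2)}_{\text{not offending the host}} + \underbrace{(-10)}_{\text{salmonella contamination}} = -7</math>

On these assumed weights, the balance of reasons comes out against eating the fish even though two individual reasons speak in favor of it.
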
However, other objections to the reason-responsiveness account are not so easily solved. They often focus on cases where reasons require the agent to be irrational, leading to a rational dilemma. For example, if terrorists threaten to blow up a city unless the agent forms an irrational belief, this is a very weighty reason to do all in one's power to violate the norms of rationality.<ref name="Broome2021"/><ref name="Moriarty2020"/>

=== Based on rules of coherence ===
An influential rival to the reason-responsiveness account understands rationality as internal coherence.<ref name="Heinzelmann2022"/><ref name="Lord2018-1"/> On this view, a person is rational to the extent that their mental states and actions are coherent with each other.<ref name="Heinzelmann2022"/><ref name="Lord2018-1"/> Diverse versions of this approach exist that differ in how they understand coherence and what rules of coherence they propose.<ref name="Mele2004a"/><ref name="McClennen2004"/><ref name="Broome2021"/>

A general distinction in this regard is between negative and positive coherence.<ref name="Harman2013"/><ref name="Thagard1998"/> Negative coherence is an uncontroversial aspect of most such theories: it requires the absence of [[contradiction]]s and [[Consistency|inconsistencies]]. This means that the agent's mental states do not clash with each other. In some cases, inconsistencies are rather obvious, as when a person believes that it will rain tomorrow and that it will not rain tomorrow. In complex cases, inconsistencies may be difficult to detect, for example, when a person believes in the axioms of [[Euclidean geometry]] and is nonetheless convinced that it is possible to [[Squaring the circle|square the circle]]. Positive coherence refers to the support that different mental states provide for each other. For example, there is positive coherence between the belief that there are eight planets in the [[Solar System]] and the belief that there are fewer than ten planets in the Solar System: the former belief implies the latter. Other types of support through positive coherence include explanatory and [[Causality|causal]] connections.<ref name="Harman2013"/><ref name="Thagard1998"/>

Coherence-based accounts are also referred to as rule-based accounts since the different aspects of coherence are often expressed in precise rules. In this regard, to be rational means to follow the rules of rationality in thought and action. According to the enkratic rule, for example, rational agents are required to intend what they believe they ought to do. This requires coherence between beliefs and intentions. The norm of persistence states that agents should retain their intentions over time. This way, earlier mental states cohere with later ones.<ref name="Heinzelmann2022"/><ref name="Harman2013"/><ref name="Lord2018-1"/> It is also possible to distinguish different types of rationality, such as theoretical or practical rationality, based on the different sets of rules they require.<ref name="Mele2004a"/><ref name="McClennen2004"/>

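Such rules are sometimes given a schematic rendering. As a rough illustration (the notation is a simplification and not a formulation taken from the cited sources), writing <math>B</math> for belief, <math>O</math> for ought, and <math>I</math> for intention, the two rules can be put as:

<math display="block">\text{Enkrasia: } B(O\varphi) \rightarrow I(\varphi) \qquad \text{Persistence: } I_{t}(\varphi) \rightarrow I_{t'}(\varphi) \text{ for } t' > t</math>

where the persistence schema is understood to hold only in the absence of reconsideration or new reasons.
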
One problem with such coherence-based accounts of rationality is that the norms can enter into conflict with each other, so-called rational [[dilemmas]]. For example, if the agent has a pre-existing intention that turns out to conflict with their beliefs, then the enkratic norm requires them to change it, which is disallowed by the norm of persistence. This suggests that, in cases of rational dilemmas, it is impossible to be rational, no matter which norm is privileged.<ref name="Heinzelmann2022"/><ref name="Mintoff1997"/><ref name="Priest2002"/>

Some defenders of coherence theories of rationality have argued that, when formulated correctly, the norms of rationality cannot enter into conflict with each other. That means that rational dilemmas are impossible. This is sometimes tied to additional non-trivial assumptions, such as the assumption that [[ethical dilemmas]] do not exist either. A different response is to bite the bullet and allow that rational dilemmas exist. This has the consequence that, in such cases, rationality is not possible for the agent and theories of rationality cannot offer guidance to them.<ref name="Heinzelmann2022"/><ref name="Mintoff1997"/><ref name="Priest2002"/> These problems are avoided by reason-responsiveness accounts of rationality since they "allow for rationality despite conflicting reasons but [coherence-based accounts] do not allow for rationality despite conflicting requirements". Some theorists suggest a weaker criterion of coherence to avoid cases of necessary irrationality: rationality requires not that one obey all norms of coherence but that one obey as many of them as possible. So in rational dilemmas, agents can still be rational if they violate the minimal number of rational requirements.<ref name="Heinzelmann2022"/>

Another criticism rests on the claim that coherence-based accounts are either redundant or false. On this view, either the rules recommend the same option as the balance of reasons or a different option. If they recommend the same option, they are redundant. If they recommend a different option, they are false since, according to its critics, there is no special value in sticking to rules against the balance of reasons.<ref name="Mele2004a"/><ref name="McClennen2004"/>

=== Based on goals ===
A different approach characterizes rationality in relation to the goals it aims to achieve.<ref name="Moser2006"/><ref name="Pinker2022"/> In this regard, theoretical rationality aims at epistemic goals, like acquiring [[truth]] and avoiding falsehood. Practical rationality, on the other hand, aims at non-epistemic goals, like [[moral]], prudential, political, economic, or [[aesthetic]] goals. This is usually understood in the sense that rationality follows these goals but does not set them. So rationality may be understood as a "[[minister without portfolio]]" since it serves goals external to itself.<ref name="Moser2006"/>

This issue has been the source of an important historical discussion between [[David Hume]] and [[Immanuel Kant]]. The slogan of Hume's position is that "reason is the slave of the passions". This is often understood as the claim that rationality concerns only how to reach a goal but not whether the goal should be pursued at all. So people with perverse or weird goals may still be perfectly rational. This position is opposed by Kant, who argues that rationality requires having the right goals and [[motivation|motives]].<ref name="Mele2004a"/><ref>{{cite book |last1=Smith |first1=Michael |editor-first1=Alfred R |editor-first2=Piers |editor-last1=Mele |editor-last2=Rawling |title=The Oxford Handbook of Rationality |date=2004 |publisher=Oxford University Press |isbn=978-0-19-514539-7 |url=https://oxford.universitypressscholarship.com/view/10.1093/0195145399.001.0001/acprof-9780195145397-chapter-5 |chapter=HUMEAN RATIONALITY |doi=10.1093/0195145399.001.0001 |access-date=2022-08-14 |archive-date=2023-12-30 |archive-url=https://web.archive.org/web/20231230142147/https://academic.oup.com/oxford-scholarship-online |url-status=live }}</ref><ref>{{cite book |last1=O'Neill |first1=Onora |editor-first1=Alfred R |editor-first2=Piers |editor-last1=Mele |editor-last2=Rawling |title=The Oxford Handbook of Rationality |date=2004 |publisher=Oxford University Press |isbn=978-0-19-514539-7 |url=https://oxford.universitypressscholarship.com/view/10.1093/0195145399.001.0001/acprof-9780195145397-chapter-6 |chapter=KANT: Rationality as Practical Reason |doi=10.1093/0195145399.001.0001 |access-date=2022-08-14 |archive-date=2023-12-30 |archive-url=https://web.archive.org/web/20231230142134/https://academic.oup.com/oxford-scholarship-online |url-status=live }}</ref><ref name="Kolb2008"/><ref name="Moser2006"/>

According to [[William Frankena]], there are four conceptions of rationality based on the goals it tries to achieve. They correspond to [[egoism]], [[utilitarianism]], [[Perfectionism (philosophy)|perfectionism]], and [[Ethical intuitionism|intuitionism]].<ref name="Moser2006"/><ref name="Frankena1983"/><ref>{{cite book |last1=Gonzalez |first1=Wenceslao J. |title=New Perspectives on Technology, Values, and Ethics: Theoretical and Practical |date=8 October 2015 |publisher=Springer |isbn=978-3-319-21870-0 |page=64 |url=https://books.google.com/books?id=1gO0CgAAQBAJ&pg=PA64 |language=en |access-date=14 August 2022 |archive-date=30 December 2023 |archive-url=https://web.archive.org/web/20231230142157/https://books.google.com/books?id=1gO0CgAAQBAJ&pg=PA64#v=onepage&q&f=false |url-status=live }}</ref> According to the egoist perspective, rationality implies looking out for one's own [[happiness]]. This contrasts with the utilitarian point of view, which states that rationality entails trying to contribute to everyone's [[well-being]] or to the greatest general good. For perfectionism, a certain ideal of perfection, either moral or non-moral, is the goal of rationality.
According to the intuitionist perspective, something is rational "if and only if [it] conforms to self-evident truths, intuited by reason".<ref name="Moser2006"/><ref name="Frankena1983"/> These different perspectives diverge significantly in the behavior they prescribe.

One problem for all of them is that they ignore the role of the evidence or information possessed by the agent. In this regard, it matters for rationality not just whether the agent acts efficiently towards a certain goal but also what information they have and whether their actions appear reasonable from this perspective. [[Richard Brandt]] responds to this idea by proposing a conception of rationality based on relevant information: "Rationality is a matter of what would survive scrutiny by all relevant information."<ref name="Moser2006"/> This implies that the subject repeatedly reflects on all the relevant facts, including formal facts like the laws of logic.<ref name="Moser2006"/>

=== Internalism and externalism ===
An important contemporary discussion in the field of rationality is between [[Internalism and externalism|internalists and externalists]].<ref name="Moser2006"/><ref name="Langsam2008"/><ref name="Finlay2008"/> Both sides agree that rationality demands and depends in some sense on reasons. They disagree on what reasons are relevant or how to conceive those reasons. Internalists understand reasons as mental states, for example, as perceptions, beliefs, or desires. In this view, an action may be rational because it is in tune with the agent's beliefs and realizes their desires. Externalists, on the other hand, see reasons as external factors about what is good or right. They state that whether an action is rational also depends on its actual consequences.<ref name="Moser2006"/><ref name="Langsam2008"/><ref name="Finlay2008"/>

The difference between the two positions is that internalists affirm and externalists reject the claim that rationality supervenes on the mind. This claim means that it only depends on the person's mind whether they are rational and not on external factors. So for internalism, two persons with the same mental states would both have the same degree of rationality regardless of how much their external situations differ. Because of this limitation, rationality can diverge from actuality. So if the agent has a lot of misleading evidence, it may be rational for them to turn left even though the actually correct path goes right.<ref name="Broome2021"/><ref name="Moser2006"/>

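This supervenience claim admits of a rough formal statement (the notation is illustrative and not drawn from the cited sources): for any agents <math>a</math> and <math>b</math>,

<math display="block">M(a) = M(b) \Rightarrow R(a) = R(b)</math>

where <math>M(x)</math> stands for the totality of an agent's mental states and <math>R(x)</math> for their degree of rationality. Internalists accept this conditional while externalists reject it, since for externalists facts outside the mind can also make a difference to rationality.
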
[[Bernard Williams]] has criticized externalist conceptions of rationality based on the claim that rationality should help explain what motivates the agent to act. This is easy for internalism but difficult for externalism since external reasons can be independent of the agent's motivation.<ref name="Moser2006"/><ref>{{cite journal |last1=Kriegel |first1=Uri |title=Normativity and Rationality: Bernard Williams on Reasons for Action |journal=Iyyun: The Jerusalem Philosophical Quarterly / עיון: רבעון פילוסופי |date=1999 |volume=48 |pages=281–292 |jstor=23352588 |url=https://www.jstor.org/stable/23352588 |issn=0021-3306 |access-date=2022-08-18 |archive-date=2022-08-18 |archive-url=https://web.archive.org/web/20220818102627/https://www.jstor.org/stable/23352588 |url-status=live }}</ref><ref>{{cite web |last1=Chappell |first1=Sophie-Grace |last2=Smyth |first2=Nicholas |title=Bernard Williams: 5. Internal and external reasons |url=https://plato.stanford.edu/entries/williams-bernard/#InteExteReas |website=The Stanford Encyclopedia of Philosophy |publisher=Metaphysics Research Lab, Stanford University |access-date=10 August 2022 |date=2018 |archive-date=10 July 2022 |archive-url=https://web.archive.org/web/20220710193743/https://plato.stanford.edu/entries/williams-bernard/#InteExteReas |url-status=live }}</ref>

Externalists have responded to this objection by distinguishing between [[Motivation#Motivational reasons and rationality|motivational and normative reasons]].<ref name="Moser2006"/> Motivational reasons explain why someone acts the way they do while normative reasons explain why someone ought to act in a certain way. Ideally, the two overlap, but they can come apart. For example, liking chocolate cake is a motivational reason for eating it while having [[high blood pressure]] is a normative reason for not eating it.<ref>{{cite web |vauthors=Alvarez M |title=Reasons for Action: Justification, Motivation, Explanation |url=https://plato.stanford.edu/entries/reasons-just-vs-expl/ |website=The Stanford Encyclopedia of Philosophy |publisher=Metaphysics Research Lab, Stanford University |access-date=13 May 2021 |date=2017 |archive-date=26 July 2021 |archive-url=https://web.archive.org/web/20210726142611/https://plato.stanford.edu/entries/reasons-just-vs-expl/ |url-status=live }}</ref><ref>{{cite journal |vauthors=Miller C |title=Motivation in Agents |journal=Noûs |date=2008 |volume=42 |issue=2 |pages=222–266 |doi=10.1111/j.1468-0068.2008.00679.x |url=https://philpapers.org/rec/MILMIA-2 |access-date=2022-08-18 |archive-date=2021-05-13 |archive-url=https://web.archive.org/web/20210513123328/https://philpapers.org/rec/MILMIA-2 |url-status=live }}</ref> The problem of rationality is primarily concerned with normative reasons. This is especially true for various contemporary philosophers who hold that rationality can be reduced to normative reasons.<ref name="Broome2021"/><ref name="Broome2007"/><ref name="Kiesewetter2017"/> The distinction between motivational and normative reasons is usually accepted, but many theorists have raised doubts that rationality can be identified with normativity. On this view, rationality may sometimes recommend suboptimal actions, for example, because the agent lacks important information or has false information. In this regard, discussions between internalism and externalism overlap with discussions of the normativity of rationality.<ref name="Moser2006"/>

==== Relativity ====
An important implication of internalist conceptions is that rationality is relative to the person's perspective or mental states. Whether a belief or an action is rational usually depends on which mental states the person has. So carrying an umbrella for the walk to the supermarket is rational for a person believing that it will rain but irrational for another person who lacks this belief.<ref name="Knauff2021b"/><ref name="Precis"/><ref>{{cite book |last1=Carter |first1=J. Adam |last2=McKenna |first2=Robin |title=Routledge Handbook to Relativism |date=2019 |publisher=London, U.K.: Routledge |url=https://philpapers.org/rec/CARRAE-9 |chapter=Relativism and Externalism |access-date=2022-08-18 |archive-date=2022-08-18 |archive-url=https://web.archive.org/web/20220818103039/https://philpapers.org/rec/CARRAE-9 |url-status=live }}</ref> According to [[Robert Audi]], this can be explained in terms of [[experience]]: what is rational depends on the agent's experience.
Since different people have different experiences, there are differences in what is rational for them.<ref name="Precis"/>

=== Normativity ===
Rationality is [[Normativity|normative]] in the sense that it sets up certain rules or standards of correctness: to be rational is to comply with certain requirements.<ref name="Broome2021"/><ref name="Heinzelmann2022"/><ref name="Pinker2022"/> For example, rationality requires that the agent does not have [[contradictory]] beliefs. Many discussions on this issue concern the question of what exactly these standards are. Some theorists characterize the normativity of rationality in the deontological terms of [[obligation]]s and [[Permission (philosophy)|permissions]]. Others understand them from an evaluative perspective as good or valuable. A further approach is to talk of rationality based on what is praise- and blameworthy.<ref name="Moser2006"/>

It is important to distinguish the norms of rationality from other types of norms. For example, some forms of [[fashion]] prescribe that men do not wear [[Bell-bottoms|bell-bottom trousers]]. Understood in the strongest sense, a norm prescribes what an agent ought to do or what they have most reason to do. The norms of fashion are not norms in this strong sense: the fact that they are unfashionable does not mean that men ought not to wear bell-bottom trousers.<ref name="Broome2021"/> Most discussions of the normativity of rationality are interested in the strong sense, i.e. whether agents ought always to be rational.<ref name="Broome2021"/><ref name="Kiesewetter2017"/><ref name="Broome2007"/><ref name="Salas"/> This is sometimes termed a substantive account of rationality in contrast to structural accounts.<ref name="Broome2021"/><ref name="Heinzelmann2022"/>

One important argument in favor of the normativity of rationality is based on considerations of praise- and blameworthiness. It states that we usually hold each other responsible for being rational and criticize each other when we fail to do so. This practice indicates that irrationality is some form of fault on the side of the subject that should not be the case.<ref>{{cite journal |last1=Kiesewetter |first1=Benjamin |title=Précis zu The Normativity of Rationality |journal=Zeitschrift für Philosophische Forschung |date=2017 |volume=71 |issue=4 |pages=560–4 |doi=10.3196/004433017822228923 |url=https://philpapers.org/rec/KIEPZT |access-date=2021-06-07 |archive-date=2021-06-07 |archive-url=https://web.archive.org/web/20210607055013/https://philpapers.org/rec/KIEPZT |url-status=live }}</ref><ref name="Salas"/>

A strong counterexample to this position is due to [[John Broome (philosopher)|John Broome]], who considers the case of a fish an agent wants to eat. It contains salmonella, which is a decisive reason why the agent ought not to eat it. But the agent is unaware of this fact, which is why it is rational for them to eat the fish.<ref name="Broome2007"/><ref name="Kiesewetter2017"/> So this would be a case where normativity and rationality come apart. This example can be generalized in the sense that rationality only depends on the reasons accessible to the agent or how things appear to them. What one ought to do, on the other hand, is determined by objectively existing reasons.<ref name="Littlejohn "/><ref name="Salas"/> In the ideal case, rationality and normativity may coincide, but they come apart either if the agent lacks access to a reason or if they have a mistaken belief about the presence of a reason.
These considerations are summed up in the statement that rationality [[supervene]]s only on the agent's [[mind]] but normativity does not.<ref>{{cite journal |last1=Broome |first1=John |title=Rationality vs normativity |journal=Australasian Philosophical Review |date=nd}}</ref><ref>{{cite journal |last1=Kiesewetter |first1=Benjamin |title=Rationality as Reasons-Responsiveness |journal=Australasian Philosophical Review |year=2020 |volume=4 |issue=4 |pages=332–342 |url=https://philpapers.org/rec/KIERAR-2 |doi=10.1080/24740500.2021.1964239 |s2cid=243349119 |doi-access=free |access-date=2021-06-07 |archive-date=2021-06-05 |archive-url=https://web.archive.org/web/20210605114434/https://philpapers.org/rec/KIERAR-2 |url-status=live }}</ref>

But there are also thought experiments in favor of the normativity of rationality. One, due to [[Frank Cameron Jackson|Frank Jackson]], involves a doctor who receives a patient with a mild condition and has to prescribe one out of three drugs: drug A resulting in a partial cure, drug B resulting in a complete cure, or drug C resulting in the patient's death.<ref>{{cite journal |last1=Jackson |first1=Frank |title=Decision-Theoretic Consequentialism and the Nearest and Dearest Objection |journal=Ethics |date=1991 |volume=101 |issue=3 |pages=461–482 |doi=10.1086/293312 |s2cid=170544860 |url=https://philpapers.org/rec/JACDCA |access-date=2021-06-07 |archive-date=2021-06-07 |archive-url=https://web.archive.org/web/20210607055018/https://philpapers.org/rec/JACDCA |url-status=live }}</ref> The doctor's problem is that they cannot tell which of the drugs B and C results in a complete cure and which one in the patient's death. The objectively best case would be for the patient to get drug B, but it would be highly irresponsible for the doctor to prescribe it given the uncertainty about its effects. So the doctor ought to prescribe the less effective drug A, which is also the rational choice. This thought experiment indicates that rationality and normativity coincide since what is rational and what one ought to do depends on the agent's mind after all.<ref name="Littlejohn"/><ref name="Salas"/>

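The force of the example can be made explicit with a small expected-value calculation. The utilities and the equal probabilities below are hypothetical and serve only to illustrate the doctor's situation: suppose a complete cure is worth 1, a partial cure 0.5, and the patient's death -100, and that the doctor regards it as equally likely that drug B or drug C is the lethal one. Then

<math display="block">EV(\text{A}) = 0.5, \qquad EV(\text{B}) = EV(\text{C}) = 0.5 \times 1 + 0.5 \times (-100) = -49.5,</math>

so prescribing drug A maximizes expected value given the doctor's information, even though it is not the objectively best outcome.
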
Some theorists have responded to these [[thought experiment]]s by distinguishing between normativity and [[Moral responsibility|responsibility]].<ref name="Salas"/> On this view, critique of irrational behavior, like the doctor prescribing drug B, involves a negative evaluation of the agent in terms of responsibility but remains silent on normative issues. On a competence-based account, which defines rationality in terms of the competence of responding to reasons, such behavior can be understood as a failure to execute one's competence. But sometimes we are lucky and succeed in the normative dimension despite failing to perform competently, i.e. despite acting irrationally and irresponsibly.<ref name="Salas"/><ref>{{cite journal |last1=Zimmerman |first1=Michael J. |title=Taking Luck Seriously |journal=Journal of Philosophy |date=2002 |volume=99 |issue=11 |pages=553–576 |doi=10.2307/3655750 |jstor=3655750 |url=https://philpapers.org/rec/ZIMTLS |access-date=2021-06-07 |archive-date=2021-06-07 |archive-url=https://web.archive.org/web/20210607055016/https://philpapers.org/rec/ZIMTLS |url-status=live }}</ref> The opposite can also be the case: bad luck may result in failure despite a responsible, competent performance. This explains how rationality and normativity can come apart despite our practice of criticizing irrationality.<ref name="Salas"/><ref>{{cite journal |last1=Sylvan |first1=Kurt L. |title=Respect and the Reality of Apparent Reasons |journal=Philosophical Studies |date=2020 |volume=178 |issue=10 |pages=3129–3156 |doi=10.1007/s11098-020-01573-1 |s2cid=225137550 |url=https://philpapers.org/rec/SYLRAT |doi-access=free |access-date=2021-06-07 |archive-date=2021-06-07 |archive-url=https://web.archive.org/web/20210607055019/https://philpapers.org/rec/SYLRAT |url-status=live }}</ref>

==== Normative and descriptive theories ====
The concept of normativity can also be used to distinguish different theories of rationality. Normative theories explore the normative nature of rationality. They are concerned with rules and ideals that govern how the [[mind]] should work. Descriptive theories, on the other hand, investigate how the mind actually works. This includes issues like under which circumstances the ideal rules are followed as well as studying the underlying psychological processes responsible for rational thought. Descriptive theories are often investigated in empirical [[psychology]] while [[philosophy]] tends to focus more on normative issues. This division is also reflected in how differently the two types of theories are investigated.<ref name="Knauff2021b"/><ref name="Sturm2021"/><ref name="Pinker2022"/><ref name="Over2004"/>

Descriptive and normative theorists usually employ different [[methodologies]] in their research. Descriptive issues are studied by [[empirical research]]. This can take the form of studies that present their participants with a cognitive problem. It is then observed how the participants solve the problem, possibly together with explanations of why they arrived at a specific solution. Normative issues, on the other hand, are usually investigated in similar ways to how the [[formal sciences]] conduct their inquiry.<ref name="Knauff2021b"/><ref name="Sturm2021"/> In the field of theoretical rationality, for example, it is accepted that [[deductive reasoning]] in the form of [[modus ponens]] leads to rational beliefs. This claim can be investigated using methods like [[rational intuition]] or careful deliberation toward a [[reflective equilibrium]]. These forms of investigation can arrive at conclusions about what forms of thought are rational and irrational without depending on [[empirical evidence]].<ref name="Knauff2021b"/><ref>{{cite book |last1=Pust |first1=Joel |title=Intuitions |date=2014 |url=https://academic.oup.com/book/5802/chapter-abstract/148988403?redirectedFrom=fulltext |chapter=3 Empirical Evidence for Rationalism? |access-date=2022-08-18 |archive-date=2022-08-18 |archive-url=https://web.archive.org/web/20220818111248/https://academic.oup.com/book/5802/chapter-abstract/148988403?redirectedFrom=fulltext |url-status=live }}</ref><ref>{{cite web |last1=Daniels |first1=Norman |title=Reflective Equilibrium |url=https://plato.stanford.edu/entries/reflective-equilibrium/ |website=The Stanford Encyclopedia of Philosophy |publisher=Metaphysics Research Lab, Stanford University |access-date=28 February 2022 |date=2020 |archive-date=22 February 2022 |archive-url=https://web.archive.org/web/20220222215102/https://plato.stanford.edu/entries/reflective-equilibrium/ |url-status=live }}</ref>

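Schematically, modus ponens licenses inferring a conclusion from a conditional statement together with its antecedent:

<math display="block">\frac{P \rightarrow Q \qquad P}{Q}</math>

For instance, from the premises "if it is raining, the streets are wet" and "it is raining", one may rationally conclude that the streets are wet.
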
An important question in this field concerns the relation between descriptive and normative approaches to rationality.<ref name="Knauff2021b"/><ref name="Pinker2022"/><ref name="Over2004"/> One difficulty in this regard is that there is in many cases a huge gap between what the norms of ideal rationality prescribe and how people actually reason. Examples of normative systems of rationality are [[classical logic]], [[probability theory]], and [[decision theory]]. Actual reasoners often diverge from these standards because of [[cognitive biases]], heuristics, or other mental limitations.<ref name="Knauff2021b"/>

Traditionally, it was often assumed that actual human reasoning should follow the rules described in normative theories. In this view, any discrepancy is a form of irrationality that should be avoided. However, this usually ignores the human limitations of the mind. Given these limitations, various discrepancies may be necessary (and in this sense ''rational'') to get the most useful results.<ref name="Knauff2021b"/><ref name="Harman2013"/><ref name="Moser2006"/> For example, the ideal rational norms of decision theory demand that the agent should always choose the option with the highest expected value. However, calculating the expected value of each option may take a very long time in complex situations and may not be worth the trouble. This is reflected in the fact that actual reasoners often settle for an option that is good enough without making certain that it is really the best option available.<ref name="Moser2006"/><ref name="Bendor2009"/>

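In expected-value terms, the standard prescription is to maximize a probability-weighted sum over possible outcomes; the formula below is the textbook definition, stated here only to indicate the computational burden involved:

<math display="block">EV(a) = \sum_{i} p(o_i \mid a) \, v(o_i)</math>

where <math>p(o_i \mid a)</math> is the probability of outcome <math>o_i</math> if option <math>a</math> is chosen and <math>v(o_i)</math> is the value of that outcome. Evaluating this sum for every available option requires estimating a probability and a value for each possible outcome, which quickly becomes demanding as the number of options and outcomes grows.
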
A further difficulty in this regard is [[Hume's law]], which states that one cannot deduce what ought to be based on what is.<ref>{{cite web |last1=Cohon |first1=Rachel |title=Hume's Moral Philosophy: 5. Is and ought |url=https://plato.stanford.edu/entries/hume-moral/#io |website=The Stanford Encyclopedia of Philosophy |publisher=Metaphysics Research Lab, Stanford University |access-date=21 May 2021 |date=2018 |archive-date=10 January 2018 |archive-url=https://web.archive.org/web/20180110170335/https://plato.stanford.edu/entries/hume-moral/#io |url-status=live }}</ref><ref>{{cite journal |last1=Bloomfield |first1=P. |title=Two Dogmas of Metaethics |journal=Philosophical Studies |date=2007 |volume=132 |issue=3 |pages=439–466 |doi=10.1007/s11098-005-2509-9 |s2cid=170556071 |url=https://philpapers.org/rec/BLOTDO |access-date=2022-08-18 |archive-date=2022-08-14 |archive-url=https://web.archive.org/web/20220814062403/https://philpapers.org/rec/BLOTDO |url-status=live }}</ref> So just because a certain heuristic or cognitive bias is present in a specific case, it does not follow that it ought to be present. One approach to these problems is to hold that descriptive and normative theories talk about different types of rationality. This way, there is no contradiction between the two and both can be correct in their own field. Similar problems are discussed in so-called [[naturalized epistemology]].<ref name="Knauff2021b"/><ref>{{cite web |last1=Rysiew |first1=Patrick |title=Naturalism in Epistemology |url=https://plato.stanford.edu/entries/epistemology-naturalized/ |website=The Stanford Encyclopedia of Philosophy |publisher=Metaphysics Research Lab, Stanford University |access-date=10 August 2022 |date=2021 |archive-date=17 August 2022 |archive-url=https://web.archive.org/web/20220817114504/https://plato.stanford.edu/entries/epistemology-naturalized/ |url-status=live }}</ref>

=== Conservatism and foundationalism ===
Rationality is usually understood as conservative in the sense that rational agents do not start from zero but already possess many beliefs and intentions. Reasoning takes place against the background of these pre-existing mental states and tries to improve them. This way, the original beliefs and intentions are privileged: one keeps them unless a reason to doubt them is encountered. Some forms of epistemic [[foundationalism]] reject this approach. According to them, the whole system of beliefs is to be justified by self-evident beliefs. Examples of such self-evident beliefs may include immediate experiences as well as simple logical and mathematical [[axiom]]s.<ref name="Harman2013"/><ref name="Hasan2000"/><ref name="Christensen1994"/>

An important difference between conservatism and foundationalism concerns their differing conceptions of the [[Burden of proof (philosophy)|burden of proof]]. According to conservatism, the burden of proof is always in favor of already established belief: in the absence of new evidence, it is rational to keep the mental states one already has. According to foundationalism, the burden of proof is always in favor of suspending mental states. For example, the agent reflects on their pre-existing belief that the [[Taj Mahal]] is in [[Agra]] but is unable to access any reason for or against this belief. In this case, conservatives think it is rational to keep this belief while foundationalists reject it as irrational due to the lack of reasons. In this regard, conservatism is much closer to the ordinary conception of rationality. One problem for foundationalism is that very few beliefs, if any, would remain if this approach were carried out meticulously. Another is that enormous mental resources would be required to constantly keep track of all the justificatory relations connecting non-fundamental beliefs to fundamental ones.<ref name="Harman2013"/><ref name="Hasan2000"/><ref name="Christensen1994"/>