Draft

From Cognitive Architecture of Reasoning to Belief and Supposition Kindhood

Joshua Mugg (Park University, Kansas City Missouri)

Belief and Imagination Workshop, University of Antwerp, May 20

[Very drafty draft: please do not cite without permission]

Central Cases

Joel says that cheese made from almonds is just as real as cheese made from dairy. When he found out his daughter was highly allergic to dairy, he and his wife decided to become a dairy-free home. He can no longer enjoy a nice brie made from cow's milk; he has to eat almond cheese. When his sister tells him almond cheese isn't real cheese, Joel argues that as long as the stuff is made from milk, it counts as cheese, and (crucially) it doesn't matter whether that milk came from a cow, goat, sheep, or almond. Does Joel really believe almond cheese is real cheese? Is it wrong for him to believe so?

A person walks out of a fancy restaurant in a southern US state and sees two men wearing suits by the valet stand. One is white, the other black. The person hands the black man their keys, taking him to be a valet. Did this person believe that the well-dressed black man was a valet? Apart from the action, was it wrong to think that he was the valet?

In the Gospel of Mark, a father begs Jesus to heal his sick child, but adds the caveat “if he [Jesus] can.” When Jesus responds that everything is possible “for those who believe,” the father exclaims “I believe, help my unbelief!” (Mark 9:24). Does the father simultaneously believe and not believe? Is it permissible to push oneself one way or another for the sake of getting one’s child healed?

Here is a challenge for doing the nature and ethics of belief: different people will have different intuitions about what to 'count' as a belief in cases like these, and this will have downstream effects for the ethics of belief. A typical approach here is to start with a conceptual distinction and then go to the empirical literature to see how things line up. But where do those initial conceptual distinctions come from? Suggested Solution: Get empirical!

[Slide Change]

Goals

  1. Lay out a method for distinguishing belief from belief-like states.
  2. Outline the Sound-Board Account of Human Reasoning.
  3. Use the Sound-Board Account of Human Reasoning to defend a version of belief-voluntarism.
  4. Distinguish between Belief-ish states and Supposition-ish states.

[Slide Change]

Cognitive Kinds

Discussion of taxonomy in the cognitive sciences is on the rise. Here's the basic idea: take a psychological construct and consider whether it meets conditions of kindhood. This allows philosophers to home in on central features of the construct, consider whether a construct should be split into two constructs (e.g. episodic memory and working memory are distinct), or consider whether it should be lumped with closely related constructs (e.g. perhaps episodic memory and imagination are really the same). Here are some examples of philosophers of psychology doing exactly this:

[Slide]

-Episodic memory (e.g. Michaelian 2011, Robins 2018, Colaço 2022, to name but a few)

-Working memory (Gomez-Lavin 2021) 

-Type 1/ Type 2 Processing (Samuels 2009, Mugg 2016, 2018) 

-Schizophrenia (Tekin 2016 and Sullivan 2014) 

-Concepts (Machery 2009, Khalidi 2023)

[Slide Change]

-Khalidi’s (2023) book, which defends this method and applies it to concepts, innateness, domain specificity, episodic memory, language-thought effects, heuristics, biases, and some psychological disorders.

[Slide Change]

-NB: the claim is not that these ARE all kinds, only that there is agreement on method. Gomez-Lavin, Machery, and Mugg all use this method to argue that working memory, concepts, and Type 1/Type 2 processing are NOT kinds.

-Here’s the key idea of my current project: treat belief, acceptance, and supposition as putative kinds in a cognitive ontology. Let’s do for these putative kinds what others have done for the constructs listed above. 

[Slide Change]

Ward (2023) identifies three central commitments of Property Cluster theories:

  1. Similarity: Members of a kind have many properties in common.
  2. Anti-Essentialism: For many or most kinds, there are not necessary and sufficient conditions for membership.
  3. Separation: For any two kinds, the co-occurring properties that characterize each kind are discrete rather than varying along a smooth continuum. 

Three such prominent views are the Homeostatic Property Cluster theory (Boyd), the Stable Property Cluster theory (Slater 2015), and the Simple Causal Theory of kindhood. Simple Causal Theory of Kinds: “certain properties or conjunctions of properties that are causally connected with others in systematic ways can be considered natural kinds” (Khalidi 2023: 5, emphasis mine, cf. Craver 2009).

[Slide Change]

-Discussion of causation and the central cases invites us to look at the cognitive architecture of human reasoning:

  • If Parallel-Competitive Dual Process Theory is true, then (and I think only then) does something like Gendler’s alief/belief distinction look plausible.
  • If Spinozan Theory is true, then belief is very much involuntary, but acceptance may be too.
  • But neither of these is popular. DPT has been roundly criticized (Gigerenzer, Kruglanski, Mugg), and new versions are very much attenuated versions of the old (Evans, Stanovich, Pennycook, De Neys). So we need a new cognitive architecture, one that allows the various properties on the ‘Standard Menu’ to cross-cut one another.

-This method can render competing cognitive taxonomies testable against one another.

[Slide Change]

Sound-Board Account of Human Reasoning

I will focus on my Sound-Board Account of Human Reasoning (S-BAR) (Mugg 2018), according to which there is one reasoning system, which can operate consciously or unconsciously, automatically or controlled, concretely or abstractly (i.e. moving from a content-laden argument to attending directly to the form of the argument), and inductively or deductively. Think of all those opposing properties as sliders and switches on a sound-mixing board. I deny that the properties previously used to distinguish Type 1 from Type 2 processing cluster. Thus, I am not claiming that there is one reasoning system that operates in a Type 1 way sometimes and a Type 2 way at other times.

On this account, for any reasoning process and any property pair, the reasoning system will operate in a definite way; it will not, for example, operate in a working-memory-involving and non-involving way at the same time. This makes the account in principle testable: should there be evidence of contradictory beliefs that arise from simultaneously operating reasoning processes, this would be strong evidence against my account. Notice that accepting the existence of contradictory beliefs commits me to a fragmentation view of belief.

After the reasoning system outputs a representation, it can readjust and rework the reasoning process, just as a sound-mixing board can be readjusted, and it can recruit additional representational states from memory, adding such representations just as a sound-mixer might add a bassline to a track. Since modes of operation do not cluster, the fact that some of the properties (e.g. fast/slow) admit of gradation is not a problem.

The initial settings of the reasoning system are not set by the system itself. Otherwise the reasoning system would first have to determine how to set itself, which is itself a reasoning process, thereby leading to a vicious regress. How it is set depends on external factors (e.g. the wording of a question or problem) and internal factors (e.g. available mindware (Stanovich 2011), availability of working memory).

[Slide Change]

Whether a subject simply accepts the resulting representation and ceases the reasoning process or (effortfully) feeds the representation back through the reasoning system also depends on both internal and external factors, such as thinking dispositions and available mindware (internal) and time allowance (external), in addition to how motivated the individual is 1) to get the answer correct and 2) by non-epistemic considerations, such as whether the belief would be beneficial, comforting, disconcerting, etc.

[Slide Change]

It will be helpful to see how this account of human reasoning explains some of the findings from the heuristics and biases literature. Belief bias is the phenomenon in which the believed truth of the conclusion (and sometimes the premises) of an argument interferes with judgments of the argument’s validity. While the reasoning system’s concrete mode is useful in determining whether a proposition is true, especially when recruiting other beliefs, it is not helpful in determining whether an argument is valid, since all that is relevant to an argument’s validity is the form of the propositions in question. Consider the following two arguments from Evans et al. (1983):

Argument 1: 

  1. No cigarettes are inexpensive.
  2. Some addictive things are inexpensive.
  3. Therefore, some addictive things are not cigarettes.

Argument 2:

  1. No addictive things are inexpensive.
  2. Some cigarettes are inexpensive.
  3. Therefore, some addictive things are not cigarettes.

Subjects overwhelmingly accepted both as valid (92% in both cases), even though only the first is valid. In reasoning concretely, subjects are attending to the truth of each proposition. According to S-BAR, correctly judging Argument 2 as invalid requires switching modes. For example, one could switch into an abstract mode, in which the subject attends solely to the form of the propositions:

  1. No A are I.
  2. Some C are I.
  3. Therefore, some A are not C.

Abstraction is the process of decoupling certain parts of a problem from others. In this case, the form of the propositions is decoupled from their believed truth-values, but decoupling occurs in other reasoning processes as well (e.g. decoupling an individual’s race from their qualifications, the consequences of an action from the intent of the action, and raw probability from probability given priors).
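To make the abstract mode concrete, here is a minimal sketch in Python (my own illustration, not part of S-BAR or of Evans et al.’s materials; the helper name check_form and the three-element domain are arbitrary choices) that brute-forces small set-theoretic models of the terms to test the two argument forms:

```python
# Illustrative only: brute-force small models of the three terms to test the
# abstract forms of the two arguments above. check_form and n are my own choices.
from itertools import product

def check_form(premises, conclusion, n=3):
    """Return a countermodel (A, C, I) over n individuals if one exists, else None."""
    for bits in product([0, 1], repeat=3 * n):   # each individual is in or out of A, C, I
        A = {k for k in range(n) if bits[3 * k]}
        C = {k for k in range(n) if bits[3 * k + 1]}
        I = {k for k in range(n) if bits[3 * k + 2]}
        if premises(A, C, I) and not conclusion(A, C, I):
            return A, C, I
    return None

# Argument 1's form: No C are I; Some A are I; therefore Some A are not C.
print(check_form(lambda A, C, I: C.isdisjoint(I) and A & I,
                 lambda A, C, I: A - C))   # None: no countermodel (the form is valid)

# Argument 2's form: No A are I; Some C are I; therefore Some A are not C.
print(check_form(lambda A, C, I: A.isdisjoint(I) and C & I,
                 lambda A, C, I: A - C))   # prints a countermodel: the form is invalid
```

The point is just that once the contents are replaced with schematic letters, the believed truth of the individual propositions can no longer drive the verdict.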

[Slide Change]

The automatic-controlled distinction was, at one time, central to the Type 1/Type 2 distinction, and is central in debates about the nature of belief, since belief is supposed to be involuntary, which we might understand as being not-controlled in some way. A number of philosophers and psychologists have argued that we need accounts of control as graded, both in action and cognition. Psychologist Pim Haselager (2014) suggests that according to dual-process theory, reasoning is like a car that has only two gears: type-1 and type-2, but it is more plausible that the human reasoning system is like a car with multiple gears. This is so because certain processes may begin as effortful but become effortless over time with practice. For example, training in math and logic enables individuals to utilize abstract modes of reasoning more easily. In philosophy, Wayne Wu (2016) defines control and automaticity in a way that allows for more or less control of an action relative to various features. On his account, a feature of an action is controlled just in case the action has that feature because of the subject’s intention to act in that featured way. He defines ‘automaticity’ negatively as ‘not controlled.’ This allows for an action to have more or less controlled features. Jennings goes further, arguing that to account for skilled behavior, we must distinguish between two kinds of control: flexible control and reliable control, which exist along distinct and opposing continuums (2020: 172). 

S-BAR allows for the automatic-controlled distinction to be a matter of degree rather than of kind. I suggest that S-BAR fits naturally with a tripartite structure of cognitive control in human reasoning. A process is more controlled:

1) the more times a subject reruns a process, 

2) the more modes of operation are altered in the re-running of the process, and 

3) the more doxastic states or heuristics are recruited.

The more a process has these features, the more controlled it is.[1] Conversely, subjects who accept the initial output of the reasoning system are engaged in a process with little or no control.

[Slide Change]

Belief Voluntarism

The discussion of control supports a sort of voluntarism about belief. According to S-BAR, if the reasoning system is operating in a low working-memory and fast mode, then the subject will have a response to the question quickly. The subject may then rerun the reasoning process. For example, if the stakes are high enough, the subject may effortfully double-check their response. Suppose I am on a train that is running late, and I have a tight connection on the other end. Perhaps the initial response from my reasoning system is that I will not make my connection. An initial fast mode of operation moves from the belief that the train is late to the belief that I will miss my connection. Of course, I don’t want this to be the case, and so I engage in more careful reasoning, determining the amount of time left on my current train, adding that amount to the current time along with the 5 minutes I will need to get to the next platform, and then finding that I actually will just make the train. I could also reason inductively here: if my train is delayed, it may be that other trains are too, including my next train. After all, if I am having to make this tight connection, surely others are too. Maybe the railway will delay the departure of my next train by a few minutes to give us extra time. I might recruit other doxastic states. I might recall that I have often felt anxious about making my train connections, but that I have generally made them. I might reason heuristically: train companies make things work out for their customers (ignoring the added information that the train is running late!).

On S-BAR, I suggested, a process is more controlled: 1) the more times a subject reruns a process, 2) the more modes of operation are altered in the re-running of the process, and 3) the more doxastic states or heuristics are recruited. In the train case, all three of these conditions can be met. First, I need not rerun the process in any or all of these ways; I could stick with the initial output of the reasoning system: I will be late. As soon as I rerun the process, the process is somewhat controlled. Second, I could rerun the process in just one of these ways. For example, I might repeat the heuristic “train companies make connections work out” over and over again like a mantra. Alternatively, I might alter the mode of operation, moving from the mathematical mode to the inductive mode. The more modes I alter, the more controlled the process. Third, notice that I can recruit more doxastic states, as when I use memories.
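As a purely illustrative sketch (the class name, the numbers, and the additive scoring rule are my own simplifications rather than commitments of S-BAR), the three factors can be pictured as independent dials whose combined setting yields a graded degree of control:

```python
# Toy illustration only: the three graded control factors, with an additive
# "degree of control" that is my own simplification, not a claim of S-BAR.
from dataclasses import dataclass

@dataclass
class ReasoningEpisode:
    reruns: int = 0            # (1) times the subject reran the process
    mode_changes: int = 0      # (2) modes of operation altered across reruns
    recruited_states: int = 0  # (3) further doxastic states or heuristics recruited

    def control_degree(self) -> int:
        """More of any factor -> more controlled; zero on all three -> little or no control."""
        return self.reruns + self.mode_changes + self.recruited_states

# Illustrative numbers for the train case: rerun the problem, switch from a fast
# concrete mode to the mathematical and then the inductive mode, recruit memories.
train_case = ReasoningEpisode(reruns=2, mode_changes=2, recruited_states=1)
snap_judgment = ReasoningEpisode()  # simply accept the initial output

print(train_case.control_degree())     # 5 -> a fairly controlled process
print(snap_judgment.control_degree())  # 0 -> little or no control
```

Any monotonic combination of the three factors would serve; the addition is only there to display the gradedness.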

The voluntariness of belief I am after can helpfully be elucidated by considering (previously) common practices among social psychologists. A decade ago, some psychologists used multiple statistical tools, opting for whichever statistical analysis provided them with significance (see Simmons, Nelson, and Simonsohn 2011, Francis 2012a, 2012b, John, Loewenstein, & Prelec 2012). Some psychology labs would run their data through one statistical model and, if they did not get significance, would run it through another, and then another, until they did get significance. The choice to rerun the reasoning problem is conscious (asking “given these data, is there a significant finding?”), as is mode determination, since the researchers actively chose the statistical model. Why rerun the statistics? Because the researchers wanted to find an effect. Plausibly, many psychologists believed that they had found an effect after the final rerun of the statistics. We know now that this is not good scientific practice because rerunning data through multiple models greatly increases the likelihood of false positives, but the use of multiple statistical analyses supports the voluntariness of some beliefs because at least some psychologists believed their results. Furthermore, S-BAR suggests that humans generally are like these psychologists who rerun analyses: they can alter the mode of operation and control when they stop the reasoning process.
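Here is a toy simulation (my own construction, not drawn from Simmons et al. or the other papers cited above; the particular analyses, sample size, and seed are arbitrary) of the practice just described: if the first analysis of effect-free data is not significant, try a second, then a third, and report whichever “works”:

```python
# Toy illustration only: analysis-switching on null data inflates false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, alpha, n = 5_000, 0.05, 40
first_only, any_of_three = 0, 0

for _ in range(n_sims):
    a, b = rng.normal(size=n), rng.normal(size=n)   # no true group difference
    p1 = stats.ttest_ind(a, b).pvalue                                 # analysis 1: t-test on all data
    p2 = stats.mannwhitneyu(a, b, alternative='two-sided').pvalue     # analysis 2: nonparametric test
    p3 = stats.ttest_ind(np.sort(a)[1:-1], np.sort(b)[1:-1]).pvalue   # analysis 3: drop each group's extremes
    first_only += p1 < alpha
    any_of_three += min(p1, p2, p3) < alpha          # stop at the first "significant" result

print(first_only / n_sims)    # close to the nominal 0.05
print(any_of_three / n_sims)  # higher: switching analyses until one "works" inflates false positives
```

The inflation here is modest because the three analyses are highly correlated, but it only grows as more analytic options are added.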

[Change Slide]

Belief, Acceptance, and Supposition

Typically, Belief, Acceptance, and Supposition are distinguished along the following lines. 

  | Belief | Acceptance | Supposition
1 | Always responsive to evidential considerations | Never responsive to evidential considerations | Never responsive to evidential considerations
2 | Never responsive to pragmatic considerations | Always responsive to pragmatic considerations | Always responsive to pragmatic considerations
3 | Involuntary | Voluntary | Voluntary
4 | Feeling of Rightness | No Feeling of Rightness | No Feeling of Rightness
5 | Action-guiding in a wide domain | Action-guiding in a narrow domain | Not action-guiding
6 | Ideal of integration | No ideal of integration | No ideal of integration

A paradigmatic case of acceptance is that of a lawyer who treats the world as though their client is innocent, even though they believe that their client is guilty. While we can think of acceptance as an action, we can also think of it as a mental state. Presumably the result of a token acceptance action is the tokening of an acceptance mental state. For this reason, and because I am concerned with cognitive taxonomy, I am concerned with the mental-state sense of acceptance. So, in the paradigmatic case of a lawyer accepting their client’s innocence, I am interested in the cognitive attitude toward the content “my client is innocent.”

Let’s walk through these properties as they appear in the paradigmatic case of the lawyer. I will argue that we should retain an attenuated version of properties 3 and 5 but reject properties 1, 2, 4, and 6. There are philosophers who interpret some of these properties as normative rather than constitutive conditions on belief (especially 1, 2, and 6). Because my method attempts to stick closely to actual (as opposed to ideal) human psychology, belief and acceptance shouldn’t be distinguished primarily by normative considerations. Rather, normative application should come as a secondary project after completing the descriptive project of cognitive taxonomy.

The first and second properties are often taken to be two sides of the same coin and the central feature of acceptance, with the subsequent properties issuing from it, so I will treat them together here. As Bratman puts it: “reasonable belief is shaped primarily by evidence for what is believed and concern for the truth of what is believed. Thus the slogan: belief aims at truth” (1992: 3). Most philosophers employing the belief/acceptance distinction follow Bratman (e.g. Van Leeuwen 2014, 2017, 2023, Soter 2023). The lawyer accepts their client’s innocence for the pragmatic reason that it is their job to do so, rather than on the basis of their evidence. Suppose the client is guilty and tells the lawyer as much. The lawyer will now believe their client is guilty on the basis of this testimonial evidence, but the lawyer may simultaneously recognize that the evidence admissible in court will not be enough to convince a jury of their client’s guilt. In that case the lawyer will accept against the evidence. On the other hand, philosophers usually think believing against the evidence is impossible.

However, one thing that determines whether an individual reruns a problem is motivation, which will include pragmatic reasons. Subjects who are motivated to believe a certain way for pragmatic reasons can rerun reasoning processes until they get the desired result. On my account, Joel believes almond cheese is just as much cheese as that which comes from a cow. It’s clear that he believes this way for pragmatic reasons stemming from his daughter’s allergy: believing that his almond cheese is just as real helps him not feel left out. You may think that the way in which the pragmatic bears on the mental state is different, though. In the case of acceptance, the pragmatic reasons may lead directly to the lawyer accepting their client’s innocence, whereas Joel seems to engage in some mental gymnastics. I agree that there is a difference in how the pragmatic and the evidential bear on the mental states, but it concerns a difference in the kind of voluntariness of acceptance and belief.

Looking at the third property, it seems the lawyer can form the acceptance that their client is innocent through a sheer act of will, without any need to rerun reasoning processes, avoid counterevidence, etc., whereas the belief is thought to be involuntary. Again, my discussion above outlined a version of voluntarism about belief supported by S-BAR. The voluntarism about belief I take myself to have shown is not as strong as that which seems true of acceptance, namely, the ability to accept ‘at will.’ Within the framework of S-BAR, we see that subjects can initiate a reasoning process with a newly formed supposition (suppose you just won the lottery), but a newly formed belief would be the output of the reasoning process. So although beliefs can be voluntarily formed, since they are the output of the reasoning process, the sort of voluntariness at work here is different from the sort that is supposed to exist for acceptance.

However, I want to push on just how voluntary acceptance really is. Acceptance is supposed to be responsive to practical reasons. To what extent are my practical reasons under my control? It seems answering that question requires determining to what extent my motivations, desires, or values are under my control. While a treatment of conative attitudes lies outside the scope of this work, it seems to me that we cannot form desires, values, or motivations ‘at will’ any more than we can form beliefs ‘at will.’ A standard picture of control over desire seems to be that we can diachronically reform our dispositions to have certain desires, even if we cannot synchronically do so. This seems to me analogous to the widely accepted claim that while we can work indirectly over time to alter our beliefs, we cannot form a belief ‘at will.’ Those adopting a ‘guise of the good’ view of desire (on which our desires are responsive to what we take to be the goodness of things in the world, in a way analogous to perception) seem committed to a strong version of desire involuntarism (Oddie 2005: chapter 8). Schapiro (2021) and Korsgaard (2008) seem to take desires (or ‘inclinations’ as they call them) to be involuntary. A perennial problem for Humeanism about desire is to explain how it is possible for basic desires to be brought under rational control at all (e.g. Hubin 2003 and Schroeder 2007: chapter 5). This would not be a problem for the Humean if they thought that we can simply choose to desire.[2] I take all of this to indicate that the difference in voluntariness between belief and acceptance is a matter of degree rather than kind.

The next property is phenomenological. When the lawyer considers the proposition “my client is innocent”, they will not feel as though this proposition is true. I take the ‘feeling that p is true’ in the case of belief to be a metacognitive attitude which some psychologists call a ‘feeling of rightness’ (or FOR) (see Thompson 2009, Stanovich 2011). Others, such as Alston (1996) and Howard-Snyder (2013), also take the disposition for a FOR as a necessary condition for belief, and use this condition as a way to distinguish belief from acceptance (Alston 1996, pp. 3-4). My worry about using FOR to distinguish belief from acceptance concerns the causal profile of this feeling. What causal work is the feeling doing? If FOR is epiphenomenal, then it cannot be used to distinguish kinds, since, on the account of kinds I have adopted, kinds are clusters of causal properties. On the other hand, one might think that a FOR is causally important because this feeling causes beliefs, more so than acceptances, to enter into reasoning processes when considered. I am sympathetic to thinking of FOR as having this functional profile, but notice that the property of having a FOR then collapses into the distinct property of being inferentially promiscuous.[3]

On my view, the fifth property is the main way belief and acceptance are distinguished: beliefs are typically held in a wide context while acceptances are held in a limited domain.[4] However, this property has to be qualified, since I adopt a fragmented view of belief. Basically, the two are not so widely separated as is thought by Bratman, Cohen, Soter, and Van Leeuwen, because those accounts overestimate the context-independence of belief. Still, I think the ways belief and acceptance are fragmented are importantly different. Returning to our paradigmatic case, the lawyer accepts their client’s innocence only within the context of the courtroom. Once the lawyer leaves the courtroom, they abandon the acceptance. Elsewhere, I captured this idea by saying that, on S-BAR, acceptances are cognitively quarantined, whereas beliefs are inferentially promiscuous. Relevant beliefs will typically be recruited (often automatically) for reasoning processes, but this will not be the case for acceptances.

Now, because I accept a fragmented account of belief, beliefs will not activate in every context in which they could activate. What seems relevant is that the fragmentation of acceptance occurs voluntarily. On a fragmented account of belief, beliefs are ‘tagged’ by context, and the context is determined by the content of the belief; an acceptance, by contrast, is ‘tagged’ willfully by the subject. The fragmentation itself is voluntarily formed. Acceptances are therefore even more context-specific than beliefs, since the beliefs recruited to interact with that acceptance will presumably also depend on how they are tagged.

[Slide Change]

To summarize, belief and acceptance are distinguished along the following lines:

  | Belief | Acceptance/Supposition/Pretense
1 | Can be voluntarily formed through process rerunning. | Typically formed directly and basically voluntarily.
2 | Inferentially promiscuous. | Cognitively quarantined.
3 | Fragmented based on content. | Fragmented based on voluntary choice.

[Slide Change]

Acceptance and Supposition are not Distinct Kinds

Many philosophers claim that supposing is also distinct from acceptance because suppositions work only in the mind without being tied to action. Whereas an acceptance, though cognitively quarantined, still guides action within its domain, a supposition occurs only within a reasoning process and does not affect action. Here is how Cohen argues for the distinction:

“But acceptance is not the same as supposition, assumption, presumption, or hypothesizing, in the standard senses of those terms. Thus the verb ‘to suppose’ commonly denotes an inherently temporary act of imagination, as in ‘Let’s suppose we’re on a desert island’, whereas acceptance implies commitment to a pattern, system, or policy—whether long or short term—of premising that p as a basis for a decision” (Cohen 1992: 12)

[Slide Change]

Bratman and Howard-Snyder say similar things.

Are these different kinds? The cognitive quarantining of acceptance and supposition looks very similar, as does their volitional character. The difference is what one does with the resulting output: in the case of a supposition, one does not act on the output, whereas in the case of an acceptance one does. So it seems acceptance is just supposition plus action. But just how much action is required to turn a supposition into an acceptance? And isn’t it odd to think that the state has changed in kind rather than simply entered into a new relation? Here’s an example to illustrate.

[Slide Change]

I am a homebrewer of beer. Hops are expensive, and you can save money by buying in bulk and sharing with other brewers, some of whom frequently forget to properly label the bags of hops. On one occasion, I was given a large amount of unlabeled hops at a homebrew club. As I hope you know, different varieties of hops make a big difference to the style of beer and to the point in the brewing process at which you add them. I might suppose my hops are Willamette (ideal for pilsners, but horrible for IPAs) or suppose that they are Cashmere (great for late additions or dry-hopping an IPA, but not great for most beer styles). I can engage in all of this as supposition, but whichever one I act on gets a special place on the standard taxonomy of mental kinds: the one I act on is really an acceptance rather than a supposition. But this seems wrong. Maybe I act on the supposition that they are Cashmere, since I’m making an IPA next. My letting this supposition be the one that enters into action, and using it as such, has to do with my motivation rather than with any cognitive difference between my supposition that the hops are Cashmere and my supposition that they are Willamette.

Here’s a way to highlight the oddity of distinguishing acceptance and supposition even more: I suppose that the hops are Cashmere and plan out the exact recipe for a SMaSH IPA using these hops, intending to brew on Saturday. At this point, the supposition should count as an acceptance, right? But suppose further that my plans change, I don’t brew on Saturday, and by the time I get around to brewing the hops have gone bad. In this case, I never act using the representation, and so it actually looks like a supposition after all. Again, my point is not that no difference can be drawn between acceptance and supposition, just that the distinction is a smudge rather than a difference in kind.

[Slide Change: Wrapping Up]

[Slide Change: thank you]


[1] One might wonder whether mode alteration, process rerunning, and doxastic recruitment really should be grouped together under the heading of ‘control.’ Here I group them together because these are the features of altering initial responses on S-BAR.

[2] Thank you to Kael McCormack for helpful reference and discussion on the nature of desire. 

[3] Elsewhere (Mugg 2016b) I offer cases where beliefs lack the disposition for a FOR. First, I suggest that there are counterintuitive scientific and probabilistic claims that need not be accompanied by a feeling of rightness (e.g. that the Copenhagen interpretation of quantum mechanics is true, or that it is more likely that an activist philosophy student grew up to be a bank teller than a feminist bank teller). Second, I consider cases in which subjects believe all races are equal but are subject to implicit bias. This second case would require that implicit biases involve beliefs.

[4] On this point, I am in agreement with Frankish (2007): “What distinguishes belief-like attitudes (quasi-beliefs, we might say) from genuine beliefs? The crucial factor, I suggest, is context. We are guided by our hypotheses and professional beliefs only in certain special deliberative contexts—social, professional, academic—in which it may be in our interest to reason from propositions that are not true” (463).