Published & Forthcoming
UPDATED: MAY 2024
Moral Kombat: Analytic Naturalism and Moral Disagreement | Forthcoming, Canadian Journal of Philosophy (coauthored with Jessica Isserow)
Moral naturalists are often said to have trouble making sense of inter-communal moral disagreements. The most familiar incarnation of this challenge is the Moral Twin Earth argument. The culprit is typically thought to be the naturalist’s metasemantics and its implications for shared meanings and concepts across communities. We address this challenge from the perspective of analytic naturalism. We show that (contrary to popular opinion) analytic naturalism does not trap language users within conceptual prisons that cordon off all possibility of inter-communal moral communication and disagreement. Moreover, drawing on more general insights we develop concerning the relationship between meaning and disagreement — or better: the lack thereof — we can see that analytic naturalists are in a position to accommodate intuitions concerning the possibility of genuine inter-communal moral disagreements.
How to Read a Representor | Forthcoming, Ergo
Imprecise probabilities are most often modelled in philosophy by means of sets of probability functions, known as representors. In the recent literature, two ways of interpreting representors have emerged as especially prominent: vagueness interpretations, according to which each probability function in the set represents how the agent's beliefs would be if any vagueness were precisified away; and comparativist interpretations, according to which the set represents those comparative confidence relations that are common to all functions therein. I argue that both of these interpretations come with significant limitations. I also propose a different interpretation, the functional interpretation, according to which representors are best interpreted by reference to the roles they play in the theories that make use of them.
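For readers unfamiliar with the formalism, here is a minimal sketch (mine, not the paper's) of a representor over a toy three-world space, together with the two quantities the rival interpretations tend to focus on: the range of values the member functions assign to a proposition, and the comparative judgements on which all members agree.

# A representor modelled as a list of probability functions over three worlds.
representor = [
    {"w1": 0.25, "w2": 0.50, "w3": 0.25},
    {"w1": 0.40, "w2": 0.40, "w3": 0.20},
    {"w1": 0.30, "w2": 0.60, "w3": 0.10},
]

def prob(p, proposition):
    # The probability a single function p assigns to a proposition (a set of worlds).
    return sum(weight for world, weight in p.items() if world in proposition)

A, B = {"w1", "w2"}, {"w3"}

# The spread of values for A across the set (its lower and upper probability).
values_for_A = [prob(p, A) for p in representor]
print(min(values_for_A), max(values_for_A))

# A comparativist-style reading: A counts as determinately more probable than B
# just in case every member of the representor agrees.
print(all(prob(p, A) > prob(p, B) for p in representor))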
The Measurement of Subjective Probability | 2024, Cambridge University Press (Elements series)
Beliefs come in degrees, and we often represent those degrees with numbers. We might say, for example, that we are 90% confident in the truth of some scientific hypothesis, or only 30% confident in the success of some risky endeavour. But what do these numbers mean? What, in other words, is the underlying psychological reality to which the numbers correspond? And what constitutes a meaningful difference between numerically distinct representations of belief? In this Element, we discuss the main approaches to the measurement of belief. These fall into two broad categories, epistemic and decision-theoretic, with divergent foundations in the theory of measurement. Epistemic approaches explain the measurement of belief by appeal to relations between belief states themselves, whereas decision-theoretic approaches appeal to relations between beliefs and desires in the production of choice and preferences.
Radical Misinterpretation | 2023, Pacific Philosophical Quarterly
This paper provides an exposition and defence of Lewis' theory of radical interpretation. The first part explains what Lewis' theory was; the second part then explains what Lewis' theory wasn't, and in so doing addresses a number of common objections that arise as a result of widespread myths and misunderstandings about how Lewis' theory is supposed to work.
What is 'Real' in Interpersonal Comparisons of Confidence | 2022, Australasian Journal of Philosophy
According to comparativism, comparative confidence is more fundamental than absolute confidence. In a pair of recent papers, Stefansson has argued that comparativism is capable of explaining interpersonal confidence comparisons. In this paper, I will argue that Stefansson's proposed explanation is inadequate, that we have good reasons to think that comparativism cannot handle interpersonal comparisons, and that the best explanation of interpersonal comparisons requires thinking about confidence in a fundamentally different way than that which comparativists propose — viz., as a dimensionless quantity.
Comparativism and the Measurement of Belief | 2022, Erkenntnis
According to comparativism, degrees of belief are reducible to a system of purely ordinal comparisons of relative confidence. (For example, being more confident that P than that Q, or being equally confident that P and that Q.) In this paper, I raise several general challenges for comparativism, relating to its capacity to illuminate apparently meaningful claims regarding intervals and ratios of strengths of belief, its capacity to draw enough intuitively meaningful and theoretically relevant distinctions between doxastic states, and its capacity to handle common instances of irrationality.
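A toy illustration (not drawn from the paper) of the first of these challenges: the two probability functions below agree on every ordinal comparison between events over a three-world space, yet they disagree about the ratio of the agent's strength of belief in {w2} to her strength of belief in {w3}, so the ordinal facts alone cannot settle such ratio claims.

from itertools import chain, combinations

P1 = {"w1": 50, "w2": 30, "w3": 20}   # weights out of 100
P2 = {"w1": 50, "w2": 35, "w3": 15}

worlds = list(P1)
events = list(chain.from_iterable(combinations(worlds, r) for r in range(len(worlds) + 1)))

def weight(P, event):
    return sum(P[w] for w in event)

def comparisons(P):
    # The comparative confidence relation induced by P over all pairs of events.
    return {(A, B): weight(P, A) >= weight(P, B) for A in events for B in events}

print(comparisons(P1) == comparisons(P2))          # True: the same ordinal comparisons
print(weight(P1, ("w2",)) / weight(P1, ("w3",)))   # 1.5
print(weight(P2, ("w2",)) / weight(P2, ("w3",)))   # roughly 2.33: a different ratio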
Don't Stop Believing (Hold on to that Warm Fuzzy Feeling) | 2021, Ethics (coauthored with Jessica Isserow)
It's natural to think that there's value to improving the accuracy of our beliefs. If beliefs are a map by which we steer our efforts to bring the world in line with our preferences, then, ceteris paribus, we should want a more accurate map. However, it is easy to see that this cannot be true in general. The world could be structured so as to punish learning with respect to certain topics; i.e., by coming into possession of new information, an agent's situation could be made worse than it otherwise would have been. In this paper, we investigate whether the world is structured so as to punish learning with respect to moral nihilism. We ask: If an ordinary human agent had the option to learn whether or not moral nihilism is true, then ought she to take it? We argue that, given intuitively plausible and empirically grounded assumptions about ordinary human preferences, she (probably) should not.
Betting Against the Zen Monk: On Preferences and Belief | 2021, Synthese
According to the preference-centric approach to understanding partial belief, the connection between partial beliefs and preferences is key to understanding what partial beliefs are and how they're measured. As Ramsey put it, the 'degree of a belief is a causal property of it, which we can express vaguely as the extent to which we are prepared to act on it'. But this idea is not as popular as it once was. Nowadays, the preference-centric approach is frequently dismissed out-of-hand as behaviouristic, unpalatably anti-realist, and/or prone to devastating counterexamples. Cases like Eriksson and Hajek's (2007) preferenceless Zen monk and Christensen's (2001) other roles argument have suggested to many that any account that ties partial beliefs too closely to preferences is irretrievably flawed. In this paper I provide a defence of preference-centric accounts of partial belief.
'Ramseyfying' Probabilistic Comparativism | 2020, Philosophy of Science
Comparativism is the view that comparative confidences (e.g., being more confident that P than that Q) are more fundamental than degrees of belief (e.g., believing that P with some strength x). In this paper, I outline the basis for a new, non-probabilistic version of comparativism inspired by a suggestion made by Frank Ramsey in 'Probability and Partial Belief'. I show how, and to what extent, 'Ramseyan comparativism' might be used to weaken the (unrealistically strong) probabilistic coherence conditions that comparativism traditionally relies on.
Impossible Worlds and Partial Belief | 2019, Synthese
One response to the problems of logical omniscience is to extend the space of possible worlds to include impossible worlds. It is natural to think that essentially the same strategy can be applied to standard probabilistic models of partial belief, for which parallel problems also arise. In this paper, I note a difficulty with the inclusion of impossible worlds into probabilistic models. With only very weak assumptions about the space of worlds, most of the propositions which can be constructed from possible and impossible worlds are in an important sense inexpressible; whereas the probabilistic model seems committed to saying that agents in general have at least as many attitudes towards inexpressible propositions as they do towards expressible propositions. If it is reasonable to think that our attitudes are generally expressible, then a model with such commitments looks problematic.
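The counting point can be given a finite toy version (my gloss, not the paper's argument). If impossible worlds are modelled as arbitrary truth-value assignments to a fixed stock of sentences, with no closure constraints, then the propositions (sets of worlds) vastly outnumber the propositions expressed by some sentence in that stock:

from itertools import product

sentences = ["s1", "s2", "s3", "s4"]

# Every assignment of truth values to the sentences counts as a world.
worlds = [dict(zip(sentences, values)) for values in product([True, False], repeat=len(sentences))]

# Propositions are arbitrary sets of worlds; a sentence s expresses {w : w makes s true}.
num_propositions = 2 ** len(worlds)
expressible = {frozenset(i for i, w in enumerate(worlds) if w[s]) for s in sentences}

print(len(worlds))          # 16 worlds
print(num_propositions)     # 65536 propositions
print(len(expressible))     # only 4 of them are expressed by a sentence in the stock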
Recent Work in Normative Decision Theory | 2019, Analysis Reviews
A review of some major topics of debate in normative decision theory from circa 2007 to 2019. Topics discussed include the ongoing debate between causal and evidential decision theory, decision instability, risk-weighted expected utility theory, decision-making with incomplete preferences, and decision-making with imprecise credences. (NB: the version uploaded here is very slightly longer than the published version.)
Ramsey Without Ethical Neutrality | 2017, Mind
Frank Ramsey’s ‘Truth and Probability’ sketches a proposal for the empirical measurement of credences, along with a corresponding set of axioms for a (somewhat incomplete) representation theorem intended to characterize the preference conditions under which this measurement process is applicable. There are several features of Ramsey’s formal system which make it attractive and worth developing. However, in specifying his measurement process and his axioms, Ramsey introduces the notion of an ethically neutral proposition, the assumed existence of which plays a key role throughout Ramsey’s system. A number of later representation theorems have also appealed to ethically neutral propositions. The notion of ethical neutrality has often been called into question—in fact, there seem to be good reasons to suppose that no ethically neutral propositions exist. In this paper, I present several new, Ramsey-inspired representation theorems that avoid any appeal to ethical neutrality. These theorems preserve the benefits of Ramsey’s system, without paying the cost of ethical neutrality.
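As a reminder of the measurement idea at stake (a simplified sketch with made-up numbers, not Ramsey's full construction, in which ethical neutrality does its work in fixing the utility scale): once outcomes have been assigned utilities, a degree of belief can be read off an indifference between a gamble on p and a sure thing.

def ramsey_degree_of_belief(u_sure, u_if_p, u_if_not_p):
    # Solve u(sure) = b*u(if p) + (1 - b)*u(if not p) for the degree of belief b.
    return (u_sure - u_if_not_p) / (u_if_p - u_if_not_p)

# An agent indifferent between a sure option worth 6 and the gamble
# 'worth 10 if p, worth 2 otherwise' comes out with degree of belief 0.5 in p.
print(ramsey_degree_of_belief(6, 10, 2))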
A Representation Theorem for Frequently Irrational Agents | 2017, Journal of Philosophical Logic
The standard representation theorem for expected utility theory tells us that if a subject's preferences conform to certain axioms, then she can be represented as maximising her expected utility given a particular set of credences and utilities; moreover, having those credences and utilities is the only way that she could be maximising her expected utility (given her preferences). However, the kinds of agents such theorems seem apt to tell us anything about are highly idealised, being (amongst other things) always probabilistically coherent, with infinitely precise degrees of belief and full knowledge of all a priori truths. Ordinary subjects do not look very rational when compared to the kinds of agents usually talked about in decision theory. In this paper, I will develop an expected utility representation theorem aimed at the representation of those who are neither probabilistically coherent, logically omniscient, nor expected utility maximisers across the board — that is, agents who are frequently irrational. The agents in question may be deductively fallible, may have incoherent credences and limited representational capacities, and may fail to maximise expected utility for all but a limited class of gambles.
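A schematic illustration (mine, not the paper's construction) of what such a representation claims: relative to a credence function and a utility function, the agent's preference between gambles matches their ordering by expected utility.

credence = {"rain": 0.25, "no_rain": 0.75}
utility = {"picnic_dry": 8, "picnic_wet": -4, "stay_home": 2}

gambles = {
    "go_on_picnic": {"rain": "picnic_wet", "no_rain": "picnic_dry"},
    "stay_in":      {"rain": "stay_home",  "no_rain": "stay_home"},
}

def expected_utility(gamble):
    return sum(credence[state] * utility[outcome] for state, outcome in gamble.items())

# The representation claim: she prefers one gamble to another iff its expected
# utility is higher under these particular credences and utilities.
for name, gamble in gambles.items():
    print(name, expected_utility(gamble))   # go_on_picnic: 5.0, stay_in: 2.0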
Probabilism, Representation Theorems, and Whether Deliberation Crowds out Prediction | 2017, Erkenntnis
Decision-theoretic representation theorems have been developed and appealed to in the service of two important philosophical projects: (i) in attempts to characterise credences in terms of preferences, and (ii) in arguments for probabilism. Theorems developed within the formal framework that Savage developed have played an especially prominent role here. I argue that the use of these 'Savagean' theorems creates significant difficulties for both projects, but particularly the latter. The origin of the problem directly relates to the question of whether we can have credences regarding acts currently under consideration and the consequences which depend on those acts; I argue that such credences are possible. Furthermore, I argue that attempts to use Jeffrey's non-Savagean theorem (and similar theorems) in the service of these two projects may not fare much better.
Epistemic Two-Dimensionalism and Arguments from Epistemic Misclassification | 2013, Australasian Journal of Philosophy (coauthored with Kelvin McQueen and Clas Weber)
Epistemic Two-Dimensional Semantics (E2D) claims that expressions have a counterfactual intension and an epistemic intension. Epistemic intensions reflect cognitive significance such that sentences with necessary epistemic intensions are a priori. We defend E2D against an influential line of criticism: arguments from epistemic misclassification. We focus in particular on the arguments of Speaks [2010] and Schroeter [2005]. Such arguments conclude that E2D is mistaken from (i) the claim that E2D is committed to classifying certain sentences as a priori and (ii) the claim that such sentences are a posteriori. We aim to show that these arguments are unsuccessful as (i) and (ii) undercut each other. One must distinguish the general framework of E2D from a specific implementation of it. The framework is flexible enough to avoid commitment to the apriority of any particular sentence; only specific implementations are so committed. Arguments from epistemic misclassification are therefore better understood as arguments for favouring one implementation of E2D over another, rather than as refutations of E2D.
Works in Progress
Unawareness and Implicit Belief: Possible Worlds Models of Informational Content | Revise & Resubmit
Possible worlds models of belief have difficulties accounting for unawareness, the inability an agent may have to entertain (and hence believe) certain propositions. Accommodating the possibility of unawareness is important for adequately modelling epistemic states, and representing the informational content to which agents have access given their explicit beliefs. In this paper, I use neighbourhood structures to develop an original multi-agent model of explicit belief, awareness, and informational content, along with an associated sound and complete axiom system. I also defend the model against the seminal impossibility result of Dekel et al. (1998), according to which three intuitive conditions preclude non-trivial unawareness on any 'standard' model of knowledge or belief.
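To give a rough flavour of the phenomenon (a toy example of my own, much simpler than the paper's neighbourhood-based model): explicit belief requires awareness as well as information, so a proposition can be entailed by everything the agent has learned while still failing to be explicitly believed.

# Propositions are sets of worlds; the agent can only entertain those she is aware of.
information = {"w1", "w2"}                   # worlds compatible with what the agent has learned
awareness = {frozenset({"w1", "w2"}),        # the propositions the agent can entertain
             frozenset({"w3"})}

def explicitly_believes(proposition):
    return frozenset(proposition) in awareness and information <= set(proposition)

print(explicitly_believes({"w1", "w2"}))          # True
print(explicitly_believes({"w1", "w2", "w3"}))    # False: entailed by her information, but she is unaware of it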
The Other Problem of Radical Underdetermination | Being revised
Anti-individualistic varieties of functionalism focus not on the causal role a state plays within each individual, but rather on the role a state typically plays in members of that individual's kind. An overlooked problem of indeterminacy arises for anti-individualistic functionalism when there are too many physical states with which mental states can be identified. Taking Lewis' analytic functionalism as my main stalking horse, I show that the (unmodified) Lewisean position implies, under reasonable assumptions, that for each individual and any system of mental states, there is a maximally 'fitting' scheme of interpretation according to which that individual has those mental states. The result, in other words, is not just one of radical indeterminacy, but potentially even maximal indeterminacy: anyone can be interpreted as believing and desiring anything, as being in pain or not in pain, and so on. Any solution to this problem requires altering the constraints on 'fit'. I discuss several potential solutions open to the Lewisean, including (i) a new potential role for naturalness in Lewis' theory, and (ii) an alternative approach to understanding the kind of 'typicality' most relevant to folk psychology.
Representation Theorems and Radical Interpretation | Maybe I'll get around to revising this eventually?
David Lewis' theory of radical interpretation for mental content is founded on two key ideas: that beliefs and desires can be understood in terms of their causal-functional roles within folk psychology, and that folk psychology is more or less Bayesian in outline. This paper concerns a puzzle for Lewis' theory. On the one hand, Lewis argued that the facts about an agent's sensory evidence and choice dispositions will always underdetermine the facts about her beliefs and desires. On the other hand, we have various representation theorems (e.g., in Ramsey 1931; Savage 1954) that are widely taken to show that if an agent's choice dispositions satisfy certain structural conditions, then those dispositions alone suffice to determine her beliefs and desires. Here, I will argue that Lewis' conclusion is correct — any tension with representation theorems is merely apparent, and relates primarily to the difference between how 'choice dispositions' are treated in Lewis' theory and the problematic way they're understood in the context of the representation theorems. Indeed, there's no plausible sense in which theorems like Ramsey's or Savage's show that beliefs and desires can be determined by choice dispositions, even in principle — ultimately, they're of limited relevance to functionalism and to the project of radical interpretation.
PhD Thesis
Representation Theorems and the Grounds of Intentionality | Australian National University, 2015
This work evaluates and defends the idea that decision-theoretic representation theorems can play an important role in showing how credences and utilities can be characterised, at least in large part, in terms of their connection with preferences. Roughly, a decision-theoretic representation theorem tells us that if an agent’s preferences satisfy certain constraints, then that agent can be represented as maximising her expected utility under a unique set of credences (modelled by a credence function) and utilities (modelled by a utility function). Such theorems have been thought by many to not only show how credences and utilities can be understood via their relation to preferences, but also to show how credences and utilities can be naturalised — that is, characterised in wholly non-mental, non-intentional, and non-normative terms.
There are two broad questions that are addressed. The first (and more specific) question is whether any version of characterisational representationism, based on one of the representation theorems that are currently available to us, will be of much use in directly advancing the long-standing project of showing how representational mental states can exist within the natural world. I answer this first question in the negative: no current representation theorem lends itself to a plausible and naturalistic interpretation suitable for the goal of reducing facts about credences and utilities to a naturalistic base. A naturalistic variety of characterisational representationism will have to await a new kind of representation theorem, quite distinct from any which have yet been developed.
The second question is whether characterisational representationism in any form (naturalistic or otherwise) is a viable position — whether, in particular, there is any value to developing representation theorems with the goal of characterising what it is to have credences and utilities in mind. This I answer in the affirmative. In particular, I defend a weak version of characterisational representationism against a number of philosophical critiques. With that in mind, I also argue that there are serious drawbacks with the particular theorems that decision theorists have developed thus far, particularly those built within the four basic formal frameworks due to Savage, Anscombe and Aumann, Jeffrey, and Ramsey.
In the final part of the work, however, I develop a new representation theorem, which I argue goes some of the way towards resolving the most troubling issues associated with earlier theorems. I first show how to construct a theorem which is ontologically similar to Jeffrey’s, but formally more similar to Ramsey’s — one which does not suffer from the infamous problems associated with Ramsey’s notion of ethical neutrality, and which has stronger uniqueness results than Jeffrey’s theorem. Furthermore, it is argued that the new theorem’s preference conditions are descriptively reasonable, even for ordinary agents, and that the credence and utility functions associated with this theorem are capable of representing a wide range of non-ideal agents — including those who (i) might have credences and utilities only towards non-specific propositions, (ii) are probabilistically incoherent, (iii) are deductively fallible, and (iv) have distinct credences and utilities towards logically equivalent propositions.
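For readers curious what the 'uniqueness' talk amounts to in this setting, here is a small check (mine, purely illustrative): positive affine rescalings of a utility function, paired with the same credences, preserve the expected-utility ordering of gambles, which is why utilities in such theorems are only ever fixed up to transformations of that kind.

credence = {"s1": 0.25, "s2": 0.75}
gambles = {"A": {"s1": 0.0, "s2": 10.0},     # utilities of each gamble's outcome, state by state
           "B": {"s1": 8.0, "s2": 4.0}}

def eu(gamble, transform=lambda u: u):
    return sum(credence[s] * transform(u) for s, u in gamble.items())

original = sorted(gambles, key=lambda g: eu(gambles[g]))
rescaled = sorted(gambles, key=lambda g: eu(gambles[g], transform=lambda u: 3 * u + 7))
print(original == rescaled)    # True: the ordering survives the rescaling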
Bits and Pieces
1. Frank P. Ramsey: Partial Belief and Subjective Probability | Stanford Encyclopedia entry
2. Ramsey and the Ethically Neutral Proposition | Exposition piece for Leeds NatRep group
3. The Instability of Savage's Foundations: The Constant Acts Problem | Exposition piece for Leeds NatRep group
4. 'Ramseyfying' Probabilistic Comparativism | 2018 FEW submission
5. Non-Imposition Non-Imposed | Short paper outlining a minor tweak to Wilson's 1972 social choice theorem