Bodily Self-Ownership is Resilient to Exogenous Coercion & Bodily Insult

INTRODUCTION

Is bodily self-ownership fragile or resilient in the face of coercion, violence, or threat of violence? Does the fact that government agencies the world over are conditionally entitled to make any use of a citizen’s body, in whole or part, with or without consent, entail that citizens don’t own their own bodies? Many answer the first question with ‘of course, bodily self-ownership is fragile’, and the second with ‘obviously that entails people don’t own their own bodies’. But, as is demonstrated below, government agencies’ conditional entitlement to make any use of my body, in whole or part, with or without my consent is irrelevant to whether or not I own my own body. In other words: if Ox = x owns his body, and Gx = Government agencies are conditionally entitled to make any use of x’s body, in whole or part, with or without consent, then (~ ∀x((Ox ˅ ~ Ox) & ∀x(Gx ˅ ~ Gx)) → ((~ ∀x((Ox ˅ ~ Ox) → ~ ∀x(Gx ˅ ~ Gx)) & ∀x((Ox ˅ ~ Ox) & ∀x(Gx ˅ ~ Gx))) → ∀x(Gx → ~ Ox))).

PRELIMINARIES

Let Ox = x owns his body; and, Gx = Government agencies are conditionally entitled to make any use of x’s body, in whole or part, with or without consent.

The vacuous validities i.-ii. below obtain, as the diligent but distrustful reader can verify:

i. ∀x (Ox ˅ ~ Ox), and ii. ∀x (Gx  ˅  ~Gx).

But, i. and ii. also commit us to iii. and iv.

iii. (∀x ((Ox  ˅  ~ Ox)  &  ∀x (Gx  ˅  ~ Gx))); and,

iv. (∀x ((Ox ˅ ~ Ox)  &  ∀x (Gx  ˅  ~ Gx)))  →  ~ (∀x ((Ox  ˅  ~Ox)  →  ~ ∀x (Gx  ˅  ~ Gx)))

If one thinks government agencies’ entitlements to people’s bodies imply that people don’t own their own bodies, one thinks something very much like the invalidity v. below:

v. ∀x (Gx → ~Ox).

But premises i.-iv. do not entail v.; below is a proof of that fact, vi., the negation of the premise-conclusion commitments i.-v. with which I began this discussion. It facilitates an appreciation of the fact that bodily self-ownership is resilient in the face of exogenous coercion and violent, non-consensual interference with bodily functioning. Put formally:

ARGUMENT VI: (~ ∀x((Ox ˅ ~ Ox) & ∀x(Gx ˅ ~ Gx)) → ((~ ∀x((Ox ˅ ~ Ox) → ~ ∀x(Gx ˅ ~ Gx)) & ∀x((Ox ˅ ~ Ox) & ∀x(Gx ˅ ~ Gx))) → ∀x(Gx → ~ Ox)))

0. PROOF:  (~ ∀x((Ox ˅ ~ Ox) & ∀x(Gx ˅ ~ Gx)) →
((~ ∀x((Ox ˅ ~ Ox) → ~ ∀x(Gx ˅ ~ Gx)) & ∀x((Ox ˅ ~ Ox) & ∀x(Gx ˅ ~ Gx))) →
∀x(Gx → ~ Ox))) is valid.

  1. ~ (~ ∀x((Ox ˅ ~ Ox) & ∀x(Gx ˅ ~ Gx)) →
    ((~ ∀x((Ox ˅ ~ Ox) → ~ ∀x(Gx ˅ ~ Gx)) & ∀x((Ox ˅ ~ Ox) & ∀x(Gx ˅ ~ Gx))) →
    ∀x(Gx → ~ Ox)))
  2. ~ ∀x((Ox ˅ ~ Ox) & ∀x(Gx ˅ ~ Gx))
  3. ~ ((~ ∀x((Ox ˅ ~ Ox) → ~ ∀x(Gx ˅ ~ Gx)) & ∀x((Ox ˅ ~ Ox) & ∀x(Gx ˅ ~ Gx))) → ∀x(Gx → ~ Ox))
  4. ~ ((Oa ˅ ~ Oa) & ∀x(Gx ˅ ~ Gx))
  5. ~ (Oa ˅ ~ Oa)
  6. ~ ∀x(Gx ˅ ~ Gx)
  7. ~ Oa
  8. ~ ~ Oa
  9. ~ (Gb ˅ ~ Gb)
  10. ~ Gb
  11. ~ ~ Gb ■
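
For readers who prefer a mechanical check to hand verification, here is a minimal brute-force sketch (Python; the function and variable names are my own, not part of the original argument) that evaluates Argument VI. on every interpretation of O and G over small finite domains. It is a finite-domain sanity test, not a substitute for the tableau proof above.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def argument_vi_valid(max_domain_size=3):
    """Evaluate Argument VI on every interpretation of O and G over
    domains of size 1..max_domain_size; return True if no countermodel."""
    for n in range(1, max_domain_size + 1):
        dom = range(n)
        for O in product([False, True], repeat=n):      # O[x] = Ox
            for G in product([False, True], repeat=n):  # G[x] = Gx
                c = all((O[x] or not O[x]) and all(G[y] or not G[y] for y in dom)
                        for x in dom)                    # premise iii.
                a = not c                                # antecedent of VI.
                b = not all(implies(O[x] or not O[x],
                                    not all(G[y] or not G[y] for y in dom))
                            for x in dom)
                d = all(implies(G[x], not O[x]) for x in dom)   # v.
                if not implies(a, implies(b and c, d)):
                    return False
    return True

print(argument_vi_valid())  # expected output: True
```

Since the antecedent negates the tautology iii., it is false in every interpretation, which is exactly why no countermodel turns up.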


DISCUSSION

Argument VI. illustrates, of course, that in every interpretation either its antecedent is false or its consequent is true. It’s clear the premises i.-iv. are innocuous, uncontroversial validities, but their conclusion is the negation of the intuitive position encapsulated by the premise-conclusion commitments i.-v. The burden of showing which auxiliary premises, if any, make the asserted consequent false just when they make the ampliated* antecedent true lies with proponents of the invalidity v.

CONCLUSION

Bodily self-ownership is not fragile, but resilient, in the face of exogenous coercion, violence, or threat of violence. The fact that government agencies the world over are conditionally entitled to make any use of a citizen’s body, in whole or part, with or without consent, simply does not entail that citizens don’t own their own bodies.

Notes

* [Containing the relevant auxiliary premises and premise-conclusion commitments i.-v.]

What do Actions have to do with Norms, Reasons, and Intentions?

INTRODUCTION

A swathe of philosophers’ views on the relationship between norms of an action, reasons for an action, and intentions to perform an action takes the form of one or more of the following claims:

i. If one is committed to norms of A-performance then one has reason to deliver an A-performance;
ii. If one has reason to deliver an A-performance, then one is committed to norms of A-performance; and,
iii. If one is committed to norms of A-performance, and has reason to deliver an A-performance, then one intends to deliver an A-performance.

Let An = One is committed to norms of A-performance; Ar = One has reason to deliver an A-performance;  and, Ai = One intends to deliver an A-performance.

Now, i.-iii. amount to the invalid claims ∀n∀r(An → Ar), ∀n∀r(Ar → An), and ∀n∀r∀i((An & Ar) → Ai), as the reader can verify. The analysis offered here gives a fuller account of why that must be. As I’ll show, the antecedent of i. only implies one does or does not have reasons for delivering an A-performance, that of ii. only implies one is or is not committed to norms of A-performance, and that of iii. only implies one either intends or does not intend to deliver an A-performance.

ANALYSIS

I show that ∀n∀r∀i((((An → (Ar ˅ ~Ar)) & (Ar → (An ˅ ~An))) & ((An → (Ar ˅ ~Ar)) & (Ar → (An ˅ ~An)))) & ((An & Ar) → (Ai ˅ ~Ai))) is valid.

ARGUMENT: ∀n∀r∀i((((An → (Ar ˅ ~Ar)) & (Ar → (An ˅ ~An))) & ((An → (Ar ˅ ~Ar)) & (Ar → (An ˅ ~An)))) & ((An & Ar) → (Ai ˅ ~Ai)))
PROOF:

  1. ∀n∀r∀i((((An → (Ar ˅ ~Ar)) & (Ar → (An ˅ ~An))) & ((An → (Ar ˅ ~Ar)) & (Ar → (An ˅ ~An)))) & ((An & Ar) → (Ai ˅ ~Ai)))
  2. ~∀n∀r∀i((((An → (Ar ˅ ~Ar)) & (Ar → (An ˅ ~An))) & ((An → (Ar ˅ ~Ar)) & (Ar → (An ˅ ~An)))) & ((An & Ar) → (Ai ˅ ~Ai)))
  3. ~∀r∀i((((Aa → (Ar ˅ ~Ar)) & (Ar → (Aa ˅ ~Aa))) & ((Aa → (Ar ˅ ~Ar)) & (Ar → (Aa ˅ ~Aa)))) & ((Aa & Ar) → (Ai ˅ ~Ai)))
  4. ~∀i((((Aa → (Ab ˅ ~Ab)) & (Ab → (Aa ˅ ~Aa))) & ((Aa → (Ab ˅ ~Ab)) & (Ab → (Aa ˅ ~Aa)))) & ((Aa & Ab) → (Ai ˅ ~Ai)))
  5. ~((((Aa → (Ab ˅ ~Ab)) & (Ab → (Aa ˅ ~Aa))) & ((Aa → (Ab ˅ ~Ab)) & (Ab → (Aa ˅ ~Aa)))) & ((Aa & Ab) → (Ac ˅ ~Ac)))
  6. ~(((Aa → (Ab ˅ ~Ab)) & (Ab → (Aa ˅ ~Aa))) & ((Aa → (Ab ˅ ~Ab)) & (Ab → (Aa ˅ ~Aa))))
  7. ~((Aa → (Ab ˅ ~Ab)) & (Ab → (Aa ˅ ~Aa)))
  8. ~(Aa → (Ab ˅ ~Ab))
  9. Aa
  10. ~(Ab ˅ ~Ab)
  11. ~Ab
  12. ~~Ab
  13. ~(Ab → (Aa ˅ ~Aa))
  14. Ab
  15. ~(Aa ˅ ~Aa)
  16. ~Aa
  17. ~~Aa
  18. ~((Aa → (Ab ˅ ~Ab)) & (Ab → (Aa ˅ ~Aa)))
  19. ~(Aa → (Ab ˅ ~Ab))
  20. Aa
  21. ~(Ab ˅ ~Ab)
  22. ~Ab
  23. ~~Ab
  24. ~(Ab → (Aa ˅ ~Aa))
  25. Ab
  26. ~(Aa ˅ ~Aa)
  27. ~Aa
  28. ~~Aa ■
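
As with the previous post, the claimed validity can be double-checked mechanically. The sketch below (Python; the names are mine, not the author’s notation) drops the quantifiers and verifies that the matrix of the formula is a propositional tautology in An, Ar, and Ai; since the matrix is true under every valuation, its universal closure over n, r, and i holds as well.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def matrix_is_tautology():
    """Check the quantifier-free matrix of the ANALYSIS claim
    against all eight valuations of An, Ar, Ai."""
    for an, ar, ai in product([False, True], repeat=3):
        x = implies(an, ar or not ar) and implies(ar, an or not an)
        whole = (x and x) and implies(an and ar, ai or not ai)
        if not whole:
            return False
    return True

print(matrix_is_tautology())  # expected output: True
```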

SYNTHESIS

Loosely speaking, commitment to norms of action doesn’t oblige one to possess reasons for action; it merely permits one to do so. Likewise, reasons for action don’t oblige commitment to norms for action; they only permit such commitment. Commitment to norms for action along with possession of reasons for performing it doesn’t oblige one to intend a performance; it only permits one to intend it.

Strictly speaking, the antecedent of i. only implies one does or does not have reasons for delivering an A-performance, that of ii. only implies one is or is not committed to norms of A-performance, and that of iii. only implies one either intends or does not intend to deliver an A-performance.

CONCLUSION

For all norms, reasons, actions, intentions:
i.i. If one is committed to norms of A-performance then one either has reason to deliver an A-performance or one doesn’t;

i.ii. If one has reason to deliver an A-performance, then one either is committed to norms of A-performance, or one isn’t; and,

i.iii. If one is committed to norms of A-performance, and has reason to deliver an A-performance, then, either one intends to deliver an A-performance or one doesn’t.

Logical Obstacles to Doing Good Better

INTRODUCTION

In an interview with Sam Harris, philosopher William MacAskill says that people who want to do good must use their spare time and money to minimize illbeing and maximize wellbeing for the greatest number, as best they can. He cautions, however, that not all charitable uses of given amounts of time and money achieve the same amount of good. The upshot here is that you can spend all your time and money trying to minimize global illbeing and maximize global wellbeing while failing to do a commensurate amount of good, or any good at all.
Though you may spend the time and money available for charity ineffectively, doing the ineffective little you can is better than doing nothing. Even if the best you could do ended up not achieving much, or any, good, by using your money and time charitably you’d have increased the likelihood of reducing some illbeing and increasing some wellbeing globally, and so you’d have done some good.

Using evidence, logic, and high-level reasoning to assess whether your charitable expenditures of time and money achieve the most good possible at that cost helps ensure you really do the most good you can. If evidence, logic, and high-level reasoning show your time and money could have been spent better, i.e. could have created greater expected future value on charitable tasks other than the one you took on, then, presuming you did take on the expenses you did, you could have done good better. But if they showed you’d really maximized the expected future value of your charitable expenditures, then you’d have done good the best you could.

The foregoing sounds great, and fits nicely with our intuitions about how to do good, and how to do it the best we can. But holding to these ideas, jointly, commits us to the following preposterous invalidity:

If you use your spare time and money to minimize illbeing and maximize wellbeing of the greatest number, and you use evidence, logic, and high level reasoning to determine whether your charitable aid achieves for the greatest number of prospects the greatest magnitude of reduction in illbeing and increase in wellbeing possible, then you do the most good you can and you do not do the most good you can.

In the subsequent sections I’ll demonstrate that this non-obvious and somewhat preposterous-sounding charge is true.

PRELIMINARIES

Let

G = You do the most good you can.

T = You use your spare time and money to minimize illbeing and maximize wellbeing of the greatest number.

E = You use evidence, logic, and high-level reasoning to ensure your charitable uses of time and money achieve the greatest magnitude of reduction in illbeing and increase in wellbeing.

THE TARGET ARGUMENT

MacAskill (2016) makes the following claims:

Axiom 1. (T → G)
Axiom 2. ((T & E) → G)
Axiom 3. (((T & E) → ~G) → ~(T & E))

ANALYSIS

He does not say so, but MacAskill is obviously [see PROOF] committed to the following claims as well:

Axiom 2.1. (E → G)
PROOF: From Axiom 2. We know that i. ((T & E) → G).
ii. (((T & E) → G) → ((T → G) & (E → G)))
iii. (E → G)■

Axiom 4. ((T) → G) & ((T & E) → G) & (((T & E) → ~G) → ~(T & E)).
PROOF: i. ~ (((T) → G) & ((T & E) → G) & (((T & E) → ~G) → ~(T & E)))
ii. ~(T & E)
iii. (~(T v ~E) & ~(~T v ~E))
iv. (~T v ~E)
v. (T → ~E)
vi. ((T → ~E) & ~(~T v ~E))
vii. (~(T → ~E) & ~~(~T v ~E))
viii. (~T → ~E)
ix. (~T v  ~E)
x. ((~T v ~E)→ (T → ~E))
xi. (T → ~E) & (~T → ~E)
xii. (T & ~T)→ (~E & E)
xiii. ~(T v ~~T) → ~(~E v ~~E)
xiv. (~T v T) → (E v E)
xv. (T → T) → (~E → E)
xvi. (~T → T) v  (~E → E)
xvii. (~T v  ~E) → (T v E)

It is easy to see MacAskill is not committed to line xvii.  (~T v ~E) → (T v E), so he must be committed to ((T) → G) & ((T & E) → G) & (((T & E) → ~G) → ~(T & E)), contra line i. ■

THE COUNTERARGUMENT TO THE TARGET ARGUMENT

0. (((T → G) & ((E → G) & (((T & E) → G) & ((T & E) → ~G)))) → (((T & E) → G) & ((T & E) → ~G)))
PROOF:
1. ((T → G) & (E → G) & ((T & E) → G) & ((T & E) → ~G))
2. ~ (((T & E) → G) & ((T & E) → ~G))
3. (T → G)
4. (E → G)
5. ((T & E) → G)
6. ((T & E) → ~ G)
7. (((T & E) → G) → ~ ((T & E) v G))
8. (((T & E) → ~G) → ~~((T & E) v G))
9. (~ ((T & E) v G) & ~~((T & E) v G))
10. (((~T & E) v G) & ((T & E) v G))
11. (((~T & T) v G) & (E v G))
12. (((~T & T) v G) → G)
13. G
14. (~T → G)
15. (~E → G)
16. ((T → G) & (~T → G))
17. ((E → G) & (~E → G))
18. (((T → G) & (~T → G) & (E → G) & (~E → G)) → ~G)
19. ~G
20. (G & ~G)
21. (((G & ~G) & ((T & E) → G) & (((T & E) → ~G) → ~(T & E))) → (((T & E → G) & (T & E)) → ~G))
22. (((T & E → G) & (T & E)) → ~G)
23. ~~ ( (T & E → G) & ((T & E) → ~G))■

Finally, to deliver on my claim in the last paragraph of the introduction, I’ll show:

(((T & E) → G) & ((T & E) → ~G)) ⊢ ((T & E) → (G & ~G))
PROOF: i. (((T & E) → G) & ((T & E) → ~G))
ii. ((T & E) → G)
iii. ((T & E) → ~G)
iv. Suppose (T & E); then G, from ii., and ~G, from iii.
v. (G & ~G)
vi. ((T & E) → (G & ~G))■
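
The derivation just given can also be confirmed by brute force. The following sketch (Python; the helper names are mine) checks the sequent (((T & E) → G) & ((T & E) → ~G)) ⊢ ((T & E) → (G & ~G)) against all eight valuations of T, E, and G.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def sequent_holds():
    """No valuation satisfies both premises while falsifying the conclusion."""
    for t, e, g in product([False, True], repeat=3):
        premises = implies(t and e, g) and implies(t and e, not g)
        conclusion = implies(t and e, g and not g)
        if premises and not conclusion:
            return False
    return True

print(sequent_holds())  # expected output: True
```

The two premises jointly force (T & E) to be false, which is why the conclusion is never falsified while they hold.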

CONCLUSION

MacAskill’s recommendations for doing good better, if followed, guarantee that one does and does not do the most good one can. The guidelines make no difference to whether or not one does the most good one can, so they aren’t a sound basis for deciding which charitable efforts are worth one’s while, or how best to go about them.

 

Why Begging the Question isn’t a Fallacy

§1. Introduction

Informal fallacy theoretic labels for putative errors in reasoning are seldom informative, and often fail to distinguish errorless from erroneous reasonings, validities from invalidities. In this post the informal fallacy theoretic label petitio principii, also known as begging the question, is used as a case study demonstrating the label’s failure to identify a corresponding error in reasoning. Formal logical analysis is marshalled as a superior strategy for detecting potential errors of reasoning in the vicinity—errors undetected by the petitio principii label.

Previously, <i> arguments liable to appear valid notwithstanding invalidity, and <ii> arguments liable to be taken-true notwithstanding falsity[1] were identified as fallacies. However, the concept of fallacy also has a more capacious extension which enjoys currency in informal fallacy theoretic and indeed even lay fallacy talk.[2] The semantic riches of the target notion in informal fallacy theoretic and lay fallacy talk nonetheless admit of reduction[3] to the following comprehensive fallacy trait inventory (Definition 1.).

DEFINITION 1: A Fallacy is any error in reasoning which may or may not exhibit one or more of the following traits[4]

(i.i.) Attractiveness

(i.ii.) Ubiquity

(i.iii.) Deleteriousness to argument

(i.iv.) Incorrigibility

Call the list of traits (i.i.-i.iv.) enumerated in definition 1 the AUDI fallacy trait inventory. To say fallacies are (i.i.) attractive is to say they appear to be good arguments, though they aren’t. To say they are (i.ii.) ubiquitous is to say they occur across languages and cultures with high frequency. (i.iii.) Deleteriousness to argument implies that the presence of the error weakens or nullifies the argument. Finally, to say they are (i.iv.) incorrigible is to say that reasoners’ awareness of diagnostic criteria fails to reduce incidence. An argument is a fallacy whenever it contains an error in reasoning, whether it exhibits none, one, or more than one of traits (i.i.-i.iv.).

§2. Case Study: Petitio Principii AKA Begging the Question

DEFINITION 1.2: B begs the question against A, or perpetrates the petitio principii, if for A’s thesis “(λ)” B offers as a refutation “(μ)” and “(μ → ¬(λ))”, together implying “(¬λ)”, and one or more of conditions <i.>, <ii.>, and <iii.>, as listed below, hold.

<i.> A doesn’t maintain that “(μ)”, and “(μ)” isn’t a consequence of anything A does maintain
<ii.> A doesn’t maintain that “(μ → ¬(λ))”, and it isn’t a consequence of anything A does maintain
<iii.> Either “(μ)” or “(μ → ¬(λ))”, or both “(μ)” and “(μ → ¬(λ))”, are not reasonable presumptions, or defaults.

ARGUMENT 1.2.1: If <i.>, <ii.>, and <iii.> are the case and it is the case that (μ) and (μ → ¬(λ)) then B begs the question against A. But, none of <i.>, <ii.>, and <iii.> are errors in reasoning, so, B begs the question against A but does not commit a fallacy.
PROOF: If it is the case that (μ) and (μ → ¬(λ)), and A doesn’t maintain “(μ)” or “(μ → ¬(λ))”, and neither “(μ)” nor “(μ → ¬(λ))” is a consequence of anything A maintains, it follows that (¬λ). Suppose it is the case that (μ) and (μ → ¬(λ)), and A doesn’t maintain “(μ)” or “(μ → ¬(λ))”, and neither “(μ)” nor “(μ → ¬(λ))” is a consequence of anything A maintains, and neither “(μ)” nor “(μ → ¬(λ))” is a reasonable presumption/default. Then, it follows that (¬λ).■
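
The core inference attributed to B in argument 1.2.1 is just modus ponens, and it can be checked exhaustively. The sketch below (Python; the variable names mu and lam are mine) confirms that no valuation makes (μ) and (μ → ¬(λ)) true while making (¬λ) false.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def refutation_inference_valid():
    """Check that mu together with (mu -> ~lam) entails ~lam."""
    for mu, lam in product([False, True], repeat=2):
        if mu and implies(mu, not lam) and lam:
            return False  # premises true but ~lam false
    return True

print(refutation_inference_valid())  # expected output: True
```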

ARGUMENT 1.2.2: If it is not the case that <i.>, <ii.>, and <iii.>, and it is not the case that (μ) and not the case that (μ → ¬(λ)), or equivalently it is the case that (¬μ) and ¬(μ → ¬(λ)), then B does not beg the question against A. But, B commits an error in reasoning.
PROOF: Suppose A maintains “(μ)” and “(μ → ¬(λ))”, and “(μ)” and “(μ → ¬(λ))” are consequences of some theses A maintains, and both “(μ)” and “(μ → ¬(λ))” are reasonable presumptions/defaults. Then, if it is not the case that (μ) and not that (μ → ¬(λ)), or equivalently it is the case that (¬μ) and ¬(μ → ¬(λ)), it follows that (λ).■

§3. Discussion

In argument 1.2.1 conditions <i.>, <ii.>, or <iii.> are met, and so B begs the question against A. Furthermore, according to definition 1, B’s refutation is a putatively fallacious argument because it is attractive to B as a refutation of A’s thesis. B’s refutation makes no errors in reasoning; given that it is the case that (μ), and (μ → ¬(λ)), by modus ponens it follows that (¬λ). Formal logical analysis reveals, contrary to what one would expect from the definition of petitio principii (definition 1.2), B’s reasoning is errorless and, so, simply, not a fallacy (definition 1).

In argument 1.2.2 conditions <i.>, <ii.>, or <iii.> are not met, so B doesn’t beg the question against A (definition 1.2). However, B’s refutation is unambiguously a fallacy since it is the case that (¬μ) and ¬(μ → ¬(λ)). And, so, it follows that ¬(¬λ) or simply (λ). As formal logical analysis reveals, contrary to what one would expect from the definition of petitio principii, B’s reasoning is erroneous (definition 1), and, so, constitutes a fallacy.

§4. Concluding Remarks

It is tempting to argue that since A doesn’t in fact maintain “(μ)” and “(μ → ¬(λ))” in argument 1.2.1, B commits an error in reasoning simply by taking “(μ)” and “(μ → ¬(λ))” as premises for refuting “(λ)”. However, this is not an effective argument against the analysis prosecuted here, because selecting premises not maintained by A is not an error in B’s reasoning, even if it may be counted an argumentative misstep. Reasoning is not itself argument; it is a rule-governed procedure variously employed in argument.

Alternatively, one may object that B is simply misattributing to A premises A doesn’t in fact maintain. Even so, premise misattribution is not an error in reasoning. The reasoning from premises B maintains to the negation of A’s thesis is impeccable, so one cannot say B commits a fallacy, unless one is willing to stretch unreasonably the definition of fallacy to include premise misattributions, and/or inapt premise selection.[5]

Petitio principii, AKA begging the question, is not a fallacy because inapt premise selection and premise misattribution are not errors in reasoning.

NOTES
[1] Arguments may be valid and yet false owing to the presence of [a] false premise[s], or false conclusion: such arguments are “unsound.” Note, unsound arguments are fallacies on both formal and informal theoretic accounts of fallacies.

[2] The AUDI fallacy trait inventory which captures the notion of fallacy used in informal fallacy theoretic and lay fallacy talk is but a subset of the two-pronged definition of fallacy introduced in the previous post. After all, invalid arguments are liable to appear valid because they are attractive, ubiquitous, and incorrigible; and, they’re bad as their occurrence is deleterious to argument. Furthermore, false and unsound arguments are liable to be taken-true for all the same reasons, singly or in various possible combinations, and their utilisation is just as deleterious to argument. Human reasoners tend in general to judge argument merit by argument attractiveness, and argument goodness by familiarity—a function of frequency with which an argument is encountered. Human reasoners tend to make fallacious arguments despite awareness of what makes them fallacious. Not only are fallacies ubiquitous, unfortunately, they are also incorrigible.

[3] This reduction irritates many a lay and professional champion of the informal fallacy theoretic analyses of fallacies. Regardless, it is a high-fidelity reduction which accurately and exhaustively captures the target notions doing duty in the various available informal fallacy theoretic analyses of fallacies.

[4] Woods, John. “Begging the Question is not a Fallacy.” <http://bit.ly/1ivNdmQ>. In this work Woods maintains that all conditions listed under AUDI, here, must be satisfied for an argument to be fallacious. We relax this condition, accepting that even one satisfied condition can suffice in principle for an argument to count as fallacious; so long as it contains an error in reasoning.

[5] We don’t argue for this claim here, but those who think it objectionable may consult Woods’s argument in the paper cited here.

Why the No True Scotsman Fallacy isn’t a Fallacy

§1. Introduction

On the informal fallacy theoretic view of fallacies a fallacy is any argument that appears valid notwithstanding invalidity, or any argument liable to be taken-true notwithstanding falsity. However, the informal fallacy theoretic approach to fallacy detection is wrongheaded and self-defeating. Taxonomies of informal fallacies overwhelmingly either misidentify true, valid, and sound arguments as false, invalid, and unsound, or fail to detect genuine falsities, invalidities, and unsoundness.

In this post we make a case for formal fallacy theory contra informal fallacy theory; and, using the No True Scotsman Fallacy as a case study, show that informal fallacy theoretic analysis does much worse at fallacy detection than formal fallacy theoretic analysis. In particular, we show that the informal fallacy theoretic label No True Scotsman Fallacy misidentifies valid arguments as invalid.

§2. Case Study: No True Scotsman Fallacy

On the informal fallacy theoretic analysis of the No True Scotsman Fallacy <NTSF> no argument with the form of argument 1.1. is valid. Put differently, arguments with the form of argument 1.1 are always invalid, and, so, argument 1.2 is always valid.

ARGUMENT 1.1.: ∀S: ((S) ↔ ((α(S)) & (β(S)))). <Never valid on NTSF>.

ARGUMENT 1.2.: ∀S=:⊥ ⇒ ((S) ↔ ((α(S)) & (β(S)))). <Always valid on NTSF>.

However, on a formal analysis with standard first order predicate logic several arguments with the form of argument 1.1 are valid, and, so, argument 1.2 is invalid. A great many legitimate definitions and conditional assertions with universal generalizations occurring in premise position have the form of argument 1.1., and constitute countermodels for argument 1.2.

Consider the definition of a whole tone scale, given below <See Definition 1.3>. It states that any given scale, W, is a whole tone scale, (W), if and only if it is a sextuple of notes from the octave, (Oγ(W)), and each note in the sextuple is two semitones apart from the next, (Oδ(W)).

DEFINITION 1.3.: ∀W: ((W) ↔ ((Oγ(W)) & (Oδ(W)))).
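
To make Definition 1.3 concrete, here is a small instance check (Python). The encoding of notes as pitch classes 0-11 and the helper names are my own illustrative choices, and I read “two semitones apart” as holding between cyclically adjacent notes of the sextuple; the definition itself is as stated above.

```python
def is_sextuple_from_octave(scale):
    """O-gamma(W): six notes drawn from the twelve pitch classes of the octave."""
    return len(scale) == 6 and all(0 <= n < 12 for n in scale)

def adjacent_notes_two_semitones_apart(scale):
    """O-delta(W): each note is two semitones from the next (cyclically)."""
    return all((b - a) % 12 == 2 for a, b in zip(scale, scale[1:] + scale[:1]))

def is_whole_tone_scale(scale):
    """Definition 1.3: (W) iff O-gamma(W) and O-delta(W)."""
    return is_sextuple_from_octave(scale) and adjacent_notes_two_semitones_apart(scale)

print(is_whole_tone_scale([0, 2, 4, 6, 8, 10]))  # C whole tone scale: True
print(is_whole_tone_scale([0, 2, 4, 5, 7, 9]))   # contains a semitone step: False
```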

Now, we prove a theorem showing that argument 1.2. is false. The upshot of this theorem [See Argument 1.4] is that the No True Scotsman Fallacy isn’t always invalid, and so is not a fallacy.

ARGUMENT 1.4: ∀S: ((S) ↔ ((α(S)) & (β(S)))).
PROOF: By argument 1.2. ∀S =:⊥ ⇒ ((S) ↔ ((α(S)) & (β(S)))), or ¬((S) ↔ ((α(S)) & (β(S)))). This implies ¬(((S) → ((α(S)) & (β(S)))) & (((α(S)) & (β(S))) → (S))). However, by definition 1.3., ∀W: ((W) ↔ ((Oγ(W)) & (Oδ(W)))). Substituting S for W, and α and β for Oγ and Oδ respectively, we get: ∀S: ((S) ↔ ((α(S)) & (β(S)))). This implies (((S) → ((α(S)) & (β(S)))) & (((α(S)) & (β(S))) → (S))). So, contra argument 1.2., we have proved ∀S: ((S) ↔ ((α(S)) & (β(S)))).■

§3. Discussion

Argument 1.4 establishes beyond all doubt that arguments identifiable as No True Scotsman Fallacies aren’t always fallacious; in other words, the No True Scotsman Fallacy isn’t a fallacy. Definition 1.3 provides one of many valid argument forms whose instances are invariably misidentified as fallacious on the informal fallacy theoretic approach to fallacies.

It may be objected by defenders of informal fallacy theory that even though not all arguments with the form of argument 1.1 are invalid, some certainly are. We are inclined to agree. But, if there are variously valid and invalid instances of arguments with the form of argument 1.1., then the informal fallacy theoretic label No True Scotsman Fallacy does nothing to help tell apart valid from invalid instances of such arguments.

By contrast, the formal analysis offered here as an alternative tells us:

i.> All argument instances with the form of argument 1.1. are biconditionals with a universal generalisation occurring in premise position.
ii.> Argument instances with the form of argument 1.1. can only fail to obtain when
a.> there exists no individual possessing all properties/predicates associated with the individual occurring in a premise that is a universal generalisation,
or
b.> there exists at least one individual lacking some or all of the properties predicated of the individual occurring in premises that are universal generalisations.

§4. Concluding Remarks

If some conditional arguments with universal generalisations occurring in premise position, or in the conclusion, fail, then they fail not because all such arguments are instances of the No True Scotsman Fallacy. Instead, the failure of such arguments is due to the fact that either or both of conditions ii.a. and ii.b. supplied in the previous paragraph obtain.

This post showed that the informal fallacy theoretic label ‘No True Scotsman Fallacy’ is at best pleonastic, and at worst false. It is pleonastic whenever it is blindly applied to invalid arguments premised on [an] invalid universal generalisation[s]; and, it is false whenever it is slapped onto valid arguments which happen to be premised on valid universal generalisations.

In future posts we’ll use other informal ‘fallacies’ as case studies and demonstrate the weaknesses and failings of the informal fallacy theoretic approach to fallacy detection vis-à-vis formal fallacy theoretic approaches.

Why There Aren’t Any Human Rights

§1. Introduction

Human rights proponents maintain that human rights are entitlements inherited by humans simply in virtue of being humans. These include but are not limited to freedom from unlawful imprisonment, torture, and execution. The inheritance of these entitlements, however, is contingent on whether or not people fulfill their obligation to respect the human rights of others. Accordingly, inheritance of human rights by any individual depends on positive or negative performances, i.e. acts of commission or omission, of other individuals with respect to the human rights of the individual. This conception of human rights leads quite naturally to the following definition:

DEFINITION 1.: A human right-to-φ is an entitlement of any human, HR, to any and all such positive or negative performances from other duty bound humans, HD1…HDN, that guarantee HR’s ability-to-φ, simply in virtue of HR’s being human.

This definition is the one implicitly appealed to by human rights proponents, and seems intuitively to have desirable properties. We do generally think that people ought to have human rights simply in virtue of being human, and also that people ought not to deprive other people of their human rights by acts of commission or omission. Even so, the intuitive appeal of the definition does not guarantee that individual humans in fact have human rights, or that human society in fact has a duty to safeguard any human’s human rights. In fact, it turns out that on terms set by the definition above, human individuals don’t really have human rights, and so humans individually and collectively don’t really have a duty to safeguard human rights.

§2. Analysis & Argument

As a working example of a human right, let the human right-to-work be denoted by ᵡ, and let superscripts attached to ᵡ identify the employer who guarantees an individual’s ability-to-ᵡ by hiring him. We now define the reasonable constraint that an individual only has the ability to work for one employer to be the Single Employer Rule, presuming employment by any employer is always in a full-time capacity.

SINGLE EMPLOYER RULE [SER]: ∀HAx∃HDik: ((ᵡHDi(HAx)) → ¬(ᵡHDi…k(HAx))).

We now prove a lemma [Lemma 1.1] that will be useful for proving the subsequent theorem [Argument 1.2] demonstrating the incoherence of human rights. The lemma proves that if an individual Hx with the ability-to-ᵡ is hired by a duty-bound employer Hi, then it is not the case that the individual Hx with the right-to-ᵡ and the ability-to-ᵡ can also be hired by another duty-bound employer Hk.

LEMMA 1.1: ∀HAx∃HDik: ((ᵡHDi(HAx)) → ¬((ᵡHDk(HRx)) & (ᵡHDk(HAx)))).
PROOF: Assume ∀HAx∃HDik: ((ᵡHDi(HAx)) → ¬¬((ᵡHDik(HRx)) & (ᵡHDk(HAx)))). Then ((ᵡHDi(HAx)) → ¬¬((ᵡHDk(HRx)) & ¬¬(ᵡHDk(HAx)))), or equivalently ((ᵡHDi(HAx)) → ((ᵡHDk(HRx)) & (ᵡHDk(HAx)))). By SER, ((ᵡHDi(HAx)) → ¬(ᵡHDi…k(HAx))). Substituting in, we get ((ᵡHDi(HAx)) → ((ᵡHDik(HRx)) & ¬(ᵡHDik(HAx)))). Therefore, ((ᵡHDi(HAx)) → ¬((ᵡHDk(HRx)) & (ᵡHDk(HAx)))).■
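
The clash Lemma 1.1 trades on can be illustrated with a toy finite model. The sketch below (Python) is my own simplified reading, not the author’s formalism: it treats the right-to-work as realised only if every duty-bound employer hires x, while SER permits at most one employer to hire x, and it reports whether any assignment of hirings satisfies both constraints.

```python
from itertools import product

def right_realised(hirings):
    # Definition 1 read maximally: every duty-bound employer performs (hires x).
    return all(hirings)

def single_employer_rule(hirings):
    # SER: x holds at most one full-time position at a time.
    return sum(hirings) <= 1

n_employers = 3  # any number >= 2 produces the same verdict
compatible = any(
    right_realised(bits) and single_employer_rule(bits)
    for bits in product([False, True], repeat=n_employers)
)
print(compatible)  # expected output: False
```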

We now prove a theorem [Argument 1.2] demonstrating the incoherence of the idea of human rights.

Suppose any human, HX, has a human right-to-φ. That is, ∀HX: (φ(HRX)). On Definition 1., if HX has a human right-to-φ, all other humans H1…HN have the duty to enact the requisite positive or negative performances that guarantee HX has the ability-to-φ, or (φ(HAX)). So, the possession of a human right-to-φ by any individual HX amounts to this: ∀HRX∀HD1…N: ((φ(HRX)) ↔ ((φ(HAX)) & (φ(HD1…HDN)))). Alas, this leads to contradiction.

ARGUMENT 1.2:
∀HRX∀HD1…N: ((φ(HRX)) ↔ ((φ(HAX)) & (φ(HD1…HDN)))) ⊨ ∀HRX∃HD1…N:¬((φ(HRX)) ↔ ((φ(HAX)) & (φ(HD1…HDN)))).
PROOF
: ∀HRX∀HD1…N: ((φ(HRX)) ↔ ((φ(HAX)) & (φ(HD1…HDN)))) yields ((((φ(HRX)) → ((φ(HAX)) & (φ(HD1…HDN))))) & ((φ(HAX)) & ((φ(HD1…HDN)))) → ((φ(HRX)))). But, (φ(HD1…HDN)) ↔ (((φ(HD1)) & (φ(HD2)))…&…(φ(HDN))). By lemma 1.1, ∀HAx∃HDik: ((ᵡHDi(HAx)) → ¬((ᵡHDk(HRx)) & (ᵡHDk(HAx)))). Substituting all instances of ᵡ with φ, and all instances of HDi…Dk with HD1…HDN we see that: ∀HAx∃HD1…N: ((φHD1(HRX)) → ¬((φHD2(HRx)) & (φHD2(HAx)))). Likewise, ¬((φHDM(HRx)) & (φHDM(HAx)))). So, ((φHD1(HRX)) → ¬((((φHD2(HRx)) & (φHD2(HAx))) & (φHDM(HRx))) & (φHDM(HAx)))). So,  (φHD1(HRX)) → ¬(φHD2(HAX)) & ¬(φ HD3(HRX)). Accordingly, ∀HD1…HDN:¬((φ(HD2)) & ¬(φ(HD3))), and so ¬((φ(HD2)) & (φ(HD3))). Likewise, (φHD1(HRX)) → ¬(φ(HDM)) and ¬(φ(HDN)), so ¬((φ(HDM)) & (φ(HDN))). Then, since ∀HD1…HDN: ¬(((φ(HD2)) & (φ(HD3)))…&…(φ(HDN))) it follows ∀HRX∃HD1…N: ((φ(HAX)) & ¬(φ(HD1…HDN)))) and ¬((φ(HAX)) & (φ(HD1…HDN)))). So, ¬(((φ(HRX)) → ((φ(HAX)) & (φ(HD1…HDN))))). Accordingly, ¬((((φ(HRX)) → ((φ(HAX)) & (φ(HD1…HDN))))) & ((φ(HAX)) & ((φ(HD1…HDN)))) → ((φ(HRX)))). Therefore, we have proved ¬((φ(HRX)) ↔ ((φ(HAX)) & (φ(HD2…HDN)))).■

In English, argument 1.2 says that for all right holding humans HX and all humans H1…HN duty bound to enact positive or negative performances for the benefit of right holding humans, HX has a human right-to-φ if and only if HX has the ability-to-φ and all other humans H1…HN individually and collectively have a duty to enact the requisite positive or negative performances that enable HX to exercise his ability-to-φ. This entails, if HX has a human right-to-φ then all other humans individually and collectively have a duty to enact the requisite positive or negative performances that enable HX to exercise his ability-to-φ and HX has the ability-to-φ, and  if all other humans individually and collectively have a duty to enact the requisite positive or negative performances that enable HX to exercise his ability-to-φ and HX has the ability-to-φ then HX has the right-to-φ. But, by Lemma 1.1., even if all other humans have such a duty, HX has neither the ability-to-φ nor the right-to-φ while utilizing the positive and negative performances of everyone individually and collectively enabling him to exercise his ability-to-φ. Consequently, other humans don’t individually and collectively have a duty to enact the requisite positive or negative performances that enable HX to exercise his ability-to-φ. Accordingly, it is not the case that if HX has a human right-to-φ then all other humans individually and collectively have a duty to enact the requisite positive or negative performances that enable HX to exercise his ability-to-φ and HX has the ability-to-φ, and it is not the case that if all other humans individually and collectively have a duty to enact the requisite positive or negative performances that enable HX to exercise his ability-to-φ and HX has the ability-to-φ then HX has the right-to-φ. Therefore, contrary to the model of human rights generated by definition 1., it is not the case that HX has a human right-to-φ if and only if HX has the ability-to-φ and all other humans H1…HN individually and collectively have a duty to enact the requisite positive or negative performances that enable HX to exercise his ability-to-φ.

§3. Closing Remarks

Informally speaking, argument 1.2 demonstrates that since correlative reciprocal duties enabling a human right-to-φ aren’t owed to any human HX by all humans H1…HN individually, they aren’t owed by all humans to HX collectively either. Even if an individual has the ability-to-φ, if all other humans haven’t individually and collectively the duty to guarantee the individual’s ability-to-φ, the individual hasn’t the human right-to-φ. And, since any human HX hasn’t always the human right-to-φ whenever he has the ability-to-φ, other humans H1…HN haven’t a duty to protect HX’s human right-to-φ.

Argument 1.2 mutatis mutandis militates against any “human right”. The very idea, as it is used in human rights talk, is thus demonstrably incoherent. Inasmuch as the definition seems to be on the right track intuitively, our analysis gives reason to question the [in]coherence of the intuitions which lead us to accept the definition. Human rights talk as it has been carried on by proponents must either be revised to avoid this incoherence, or be abandoned in favour of some other coherent moral vocabulary.

P.S. The SER, The Proof of Lemma 1.1, and a modified version of the Proof of Argument 1.2 have been added to the original version of the post to discharge assumptions used in the premises and conclusion. The post has been edited for clarity since its original publication, but any changes are merely cosmetic.

Why Radical Probabilism is Certainly Wrong

RETRACTION NOTICE: I’ve been informed by Tyler Foster that the claim that “if an event had probability 0 then according to the axioms of probability it would be an event certain not to occur” violates the existence of nonempty measure-0 sets. This is quite right. An event with probability 0 is only almost sure not to occur; it need not be certain not to occur, contrary to the claim made in the post. I stand corrected.

Radical probabilism is the following thesis:

(P.) “There are no certain events, only probable events.”

But certainly, bad pun notwithstanding, radical probabilism (P.) can’t be right. PROOF: Suppose, for argument’s sake, there are no certain events, only probable events. Then no event has probability 1, or probability 0. For, if an event had probability 1 then according to the axioms of probability it would be an event certain to occur. And, if an event had probability 0 then according to the axioms of probability it would be an event certain not to occur. In other words, the existence of any events with probability 1, or with probability 0, would render radical probabilism false. But a fair coin when tossed lands heads or tails with probability 1. Furthermore, a fair coin when tossed lands heads and tails with probability 0. Thus, a fair coin when tossed either certainly lands heads or certainly lands tails, and certainly does not land heads and tails, so radical probabilism is certainly wrong.
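
For what it is worth, the two probabilities the proof appeals to can be computed directly. The toy computation below (Python) only checks the arithmetic for a fair coin; it does not, of course, adjudicate the retracted inference from probability 0 to impossibility noted above.

```python
from fractions import Fraction

p_heads = Fraction(1, 2)
p_tails = Fraction(1, 2)

# "Heads or tails" exhausts the sample space of one toss; the two outcomes
# are disjoint, so their probabilities add. "Heads and tails" cannot occur
# on a single toss, so it gets probability 0.
p_heads_or_tails = p_heads + p_tails
p_heads_and_tails = Fraction(0)

print(p_heads_or_tails)   # 1
print(p_heads_and_tails)  # 0
```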

Why Marx Can Never Pay off his Debts: A Version of Yablo’s Paradox (1993)

Marx needs more than $99 to pay off his debts, but Engels will only give him exactly $99. For any positive amount X Marx can muster over and above Engels’s $99 hand-out, under capitalism, there is a corresponding petty-bourgeois “thug” to whom Marx must pay all but a fraction of X.

What Marx must pay each petty-bourgeois thug Ti for every additional amount X over and above the $99 hand-out from Engels is determined as follows:

1. If Marx earns amount X, then as (X + $99) > $99 he must pay all but X/2 to T1.

2. If (X /2 + $99) > $99 then he must pay all but X/4 to T2.

3. If (X/4 + $99) > $99, then he must pay all but X/8 to T3.

And so on.

If Marx doesn’t earn anything apart from Engels’s $99 hand-out, or if his earnings along with that $99 sum to no more than $99, then he has to pay nothing to any petty-bourgeois thugs.

Given this state of affairs it turns out that Marx will never be able to clear his debts; this is unsurprising. What is surprising, however, is that Marx will always remain indebted without ever paying the petty-bourgeois thugs anything, as his income will never exceed the $99 hand-out from Engels.

PROOF: Suppose Marx made an amount X = 1 cent, so his net worth along with Engels’s hand-out amounted to $99.01. Then, since that value was > $99 by $0.01, Marx would have to pay T1 all but $0.005 of that cent, leaving him with $99.005. But as that too would be > $99, Marx would then have to pay T2 all but $0.0025 from that 1 cent. And, as he’d now have $99.0025, which is > $99, he’d have to pay T3 all but $0.00125 from the same 1 cent, leaving him with $99.00125. Since $99.00125 > $99, Marx would further have to pay T4 all but $0.000625, and since $99.000625 > $99 he’d also have to pay all but $0.0003125 to T5, leaving him with $99.0003125. Similarly, Marx would always have to pay thug Tk all but the X/2^k part of X, for arbitrarily large k, and the fraction left over would always continue to approach 0, without ever becoming equal to 0, as k approached infinity. So, contrary to our initial supposition, Marx couldn’t ever earn anything more than Engels’s $99 hand-out and so would stay forever in debt; but since his income would then never exceed the $99 hand-out, he would also pay nothing to any petty-bourgeois thugs.
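
The bookkeeping in the proof can be replayed exactly with rational arithmetic. The short simulation below (Python; the function name and the choice of five rounds are mine) tracks what Marx keeps after paying each thug T1, T2, ..., assuming each thug takes half of whatever surplus over $99 remains.

```python
from fractions import Fraction

def marx_balance(x_dollars, rounds):
    """Print Marx's balance after paying each of thugs T1..Trounds,
    starting from $99 plus a surplus of x_dollars."""
    surplus = Fraction(x_dollars)
    for k in range(1, rounds + 1):
        surplus /= 2  # thug Tk takes half of the remaining surplus
        print(f"after T{k}: $99 + ${float(surplus):.7f}")

marx_balance(Fraction(1, 100), 5)  # X = 1 cent, first five thugs
```

The surplus halves at every step, so it approaches 0 without ever reaching it, exactly as the figures in the proof describe.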

Inspired by Roy T. Cook’s brief but excellent article: http://blog.oup.com/2015/07/yablo-bernardete-paradox/