### Ask Dr. Math: A Crash Course in Symbolic Logic


Contents:

- Philosophers and Logicians
- Logic Takes Small Steps
- Shorthand Sentences
- Connectives
- Parentheses
- Using Connectives
- Subsentences
- Complicated Sentences
- What the Connectives Mean
- The Rules of Logic
  1. Assumptions
  2. -> Introduction
  3. ^ Elimination
  4. Repetition
  5. ^ Introduction
  6. -> Elimination
  7. <-> Introduction
  8. <-> Elimination
  9. ~ Introduction
  10. ~ Elimination
  11. v Introduction
  12. v Elimination
- Why These 12 Rules? A Review
  - Mundane Rules: repetition, ^ introduction, ^ elimination, -> elimination, <-> introduction, <-> elimination, v introduction, v elimination
  - Special Rule: assumptions
  - Fun Rules: -> introduction, ~ introduction, ~ elimination
- Deriving a Conclusion
- Deriving a Sentence
- Rules with Latin Names
  1. Modus Ponens
  2. Modus Tollens
  3. DeMorgan's Law (I)
  4. DeMorgan's Law (II)
  5. Hypothetical Syllogism
  6. Disjunctive Syllogism
  7. Reductio Ad Absurdum
  8. Double Negation
  9. Switcheroo
  10. Disjunctive Addition
  11. Simplification
  12. Rule of Joining

Date: 06/27/2002 at 20:23:31
From: Carrie Henry
Subject: Symbolic logic

My question isn't exactly how to do a specific problem; it is to ask you if logic is a type of thing where either you get it or you don't. I recently had to drop symbolic logic because I just couldn't get it! Especially when we started doing derivations with rules of replacement like modus ponens. Derivations for SD+ are the most confusing, but I can't even get SD. Is logic something where either you get it or you don't?

Date: 06/29/2002 at 02:05:27
From: Doctor Achilles
Subject: Re: Symbolic logic

Hi Carrie,

Thanks for writing to Dr. Math.

Symbolic logic is something that you can master. The hardest thing about symbolic logic is learning how to work with the symbols. Once you know what all the symbols stand for, the logic should come more easily.

I'll try to give you a bit of a crash course in basic symbolic logic using an approach that I hope will help. Another place you can turn to is the Logic section of the Dr. Math archives.

#### Philosophers and Logicians

First, I'd like to do a bit of philosophizing as a way to lead into the logic. Philosophers and logicians have a lot of overlap in what they do. Many logicians are also philosophers, and all philosophers are logicians to some extent (some much more so than others). Given that there is such a connection between philosophers and logicians, I find it striking just how radically different the fields are. Philosophers are interested in finding deep truths about the world, be they epistemological, metaphysical, ethical, etc. Logicians (qua logicians) are only interested in using a set of rules to manipulate arbitrary symbols that have no relevance to the world.

The (sometimes difficult) marriage between philosophy and logic comes from the fact that everyone in the world (except, I would argue, people who are commonly called "crazy") accepts the truths proven by logic to be universally true and unquestionable. Philosophy needs logic because in order to establish that a philosophical doctrine is true, one needs to show that the doctrine is universally and unquestionably true. One needs to, in other words, make a demonstration that everyone would accept as proof that the proposition is true. To do that, the philosopher needs logic.

#### Logic Takes Small Steps

Logic accomplishes this magical universal acceptance because it makes only little tiny steps. It does silly little things like:

ASSUMING: The dog is brown.

AND ASSUMING: The dog weighs 15 lbs.

I CONCLUDE: The dog is brown and the dog weighs 15 lbs.
which anyone who understands what the word "and" means would agree with.

#### Shorthand Sentences

Logicians, though, are very lazy people. They don't like to write long derivations in English because English sentences can be fairly long. So what they do instead is a kind of shorthand. If you give a logician a sentence like

The dog is brown.

he will pick a letter and assign it to that sentence. He now knows that the letter is just shorthand for the sentence. The way I learned logic, capital letters are used for sentences (with the exception of U, V, W, X, Y, and Z; I'll get to those later).

So let's just start at the beginning of the alphabet and use the letter "A" to represent the sentence "The dog is brown." While we're at it, let's use the letter "B" to represent the sentence "The dog weighs 15 lbs."

In addition to saving time and ink, this practice of using capital letters to represent whole sentences has a couple of other advantages. The first is that to a logician, not every word is as interesting as every other. Logicians are extremely interested in the following list of words:

and

or

if...then

if and only if

not

They call these words "connectives," because you can use them to connect sentences you already have to make new sentences.

When you write in English, those words don't stand out; they just get lost in the middle of sentences. Logicians want to make sure the words look special, so they take the whole rest of the sentence (the part they don't care about) and use a single letter to represent that. Then their favorite words stand out. Let's rewrite our earlier example about the dog using our logician's shorthand:

ASSUMING: A

AND ASSUMING: B

I CONCLUDE: A and B

The other advantage of using capital letters to represent sentences is that you ignore all the information that isn't relevant to what you're trying to do. For the derivation I did above, it didn't matter that the sentences were both about some dog. It didn't matter that they were about weight or color. They could have just as easily been sentences about how tall the dog is or about a cat or a person or a war or whatever. And if we can do the derivation for A and B, then we can do the same exact derivation for C and D or E and N or any other sentences we like.

#### Connectives

Now, as I said before, logicians are lazy. They really don't want to have anything to do with English. So instead of using the English words:

and

or

if...then

if and only if

not

they make up their own symbols for these:

| For these words | logicians use this symbol |
| --- | --- |
| and | ^ |
| or | v |
| if ... then | -> |
| if and only if | <-> |
| not | ~ |

(Sometimes they also use a triple equals sign for '<->', but I can't type that.)

Here are some examples:

| You and I would write this | A logician writes this |
| --- | --- |
| The dog is brown and the dog weighs 15 lbs. | (A ^ B) |
| The dog is brown or the dog weighs 15 lbs. | (A v B) |
| If the dog is brown, then the dog weighs 15 lbs. | (A -> B) |
| The dog is brown if and only if the dog weighs 15 lbs. | (A <-> B) |
| The dog is not brown. | ~A |

There are a few things to notice here:

1. The symbols: ^, v, ->, and <-> are called "two-place connectives." This is because they connect two sentences together into a more complicated sentence.

2. The symbol: ~ is called a "one-place connective" because you only add it to one sentence. (You cannot join multiple sentences together with it.) To negate a sentence, all you have to do is stick a ~ on the front.

3. When you join two sentences with a two-place connective, you ALWAYS put parentheses around it. So it is NOT appropriate to write this:

A ^ B

That makes as much sense in symbolic logic as writing:

Nn7&% mm)]mm (

#### Parentheses

I know that a lot of books and instructors claim that it is okay to drop the outermost parentheses in a sentence. I've done it myself many times. And 95% of the time it won't cause you trouble if you're careful. But let's say we started with this 'sentence'

A ^ B

and decided to negate it. Well, the way to negate a sentence is to stick a ~ on the front, so let's do that:

~A ^ B

But wait! What we did there was just negate the A. We wanted to negate the whole sentence. If we were really sharp, then we might notice that somebody had given us an illegitimate sentence that was missing parentheses, and so we would add the parentheses before adding the ~, to get:

~(A ^ B)
which is what we wanted.

It seems silly to make such a big deal about parentheses when we're dealing with simple sentences, but when you're doing a 30-line derivation and you're tired, it's easy to make a mistake just like that on line 17 and get yourself into real trouble. It's better to just remember the simple rule and always add parentheses when you have a two-place connective.
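Programming languages inherit exactly this pitfall. In Python, for instance, `not` binds more tightly than `and`, so dropping the parentheses negates only the first sentence. A quick sketch:

```python
# Python's "not" binds more tightly than "and", just as ~ grabs the
# smallest sentence it can in symbolic logic.
A = True   # The dog is brown.
B = False  # The dog weighs 15 lbs.

missing_parens = not A and B    # read as ((not A) and B): only A is negated
whole_sentence = not (A and B)  # the entire conjunction is negated

print(missing_parens)  # False
print(whole_sentence)  # True
```

The two results differ, which is exactly the difference between ~A ^ B and ~(A ^ B).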

.   .   .

Let's take a deep breath and then go quickly over what we have so far.

#### Using Connectives

Connectives are logical terms:

    ^    (and)
    v    (or)
    ->   (if...then)
    <->  (if and only if)
    ~    (not)

which you can add to a sentence.

A simple sentence is one that has no connectives. For example: A (the dog is brown).

A complex sentence is a sentence which is made up of one or more simple sentences and one or more connectives. Some examples are:

(A ^ B)

(A v B)

(A -> B)

(A <-> B)

~A

You can use connectives on complex sentences just as you can on simple sentences. Let's introduce a new simple sentence "it is raining," and let's call our new sentence C. We now have a lot more sentences that we can make. (Keep in mind, we have no idea yet which of these sentences are true or false; we also don't yet know how these sentences relate.) For example:

(C ^ B)

~C

(B v C)

(C v B)

(B -> C)

(A -> C)

(C -> A)

(B <-> C)

(~B <-> C)

~(B <-> C)

C

~~C

((A ^ B) v C)

(((A ^ ~B) v ~C) -> (~(A v B) <-> C))

These can get a little complicated. That last sentence is especially scary looking; we'll come back to it in a little while. For now, here is a quick run-down of how to use connectives to make complex sentences from simple ones.

| To make this complex sentence | Do this | We say |
| --- | --- | --- |
| ~C | Stick a ~ on C. | ~C is the negation of C. |
| (B v C) | Use a v to join B and C. | (B v C) is the disjunction of B and C. |
| (B ^ C) | Use a ^ to join B and C. | (B ^ C) is the conjunction of B and C. |
| (B -> C) | Use a -> to join B and C. | B implies C. (B -> C) is a conditional or implication. |
| (B <-> C) | Use a <-> to join B and C. | B implies C and C implies B. (B <-> C) is a biconditional. |
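If it helps to see this done mechanically, here is a small Python sketch (the helper names `neg`, `conj`, and so on are my own, not standard) that builds sentences as strings and adds the parentheses for every two-place connective automatically:

```python
# Hypothetical helpers that build sentences as strings.  Every
# two-place connective wraps its result in parentheses, so the
# "always add parentheses" rule can't be forgotten.
def neg(x):       return "~" + x
def conj(x, y):   return "(" + x + " ^ " + y + ")"
def disj(x, y):   return "(" + x + " v " + y + ")"
def impl(x, y):   return "(" + x + " -> " + y + ")"
def bicond(x, y): return "(" + x + " <-> " + y + ")"

print(conj("A", "B"))                  # (A ^ B)
print(neg(conj("A", "B")))             # ~(A ^ B)
print(impl(disj("B", "C"), neg("A")))  # ((B v C) -> ~A)
print(bicond("B", "C"))                # (B <-> C)
```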

#### Subsentences

Some of our sentences had more than one connective:

(~B <-> C)

~(B <-> C)

~~C

((A ^ B) v C)

(((A ^ ~B) v ~C) -> (~(A v B) <-> C))
The sentence
(~B <-> C)
is made by joining the sentences
~B

C
with
<->
The complex sentence
~B
is called a "subsentence" of the larger sentence
(~B <-> C)
because it is a smaller sentence inside the large one.

The simple sentence
C
is also a subsentence of the larger sentence
(~B <-> C)
The simple sentence
B
is a subsentence of the subsentence
~B
and so
B
is also a subsentence of
(~B <-> C)
There are two connectives used in the larger sentence
<->

~
but they are not equally important. In this case the
<->
is much more important than the
~
Remember how the sentence was made by taking the two smaller sentences
~B

C
and connecting them with a
<->

The <-> is therefore called the "main connective" of the sentence. Main connectives are, without a doubt, absolutely the most important idea in logic. The hardest skill to learn in logic is to identify the main connective of a sentence. Make sure you understand what main connectives are.

Compare the sentence we've been looking at,

(~B <-> C)
with one that looks similar,
~(B <-> C)
This new sentence is very different. It was made by negating
(B <-> C)
The main connective of
~(B <-> C)
is therefore
~
and
(B <-> C)
is just a subsentence of
~(B <-> C)

#### Complicated Sentences

Now let's take a closer look at the most complicated sentence on our list and see if we can make it more manageable. The way to analyze a complicated sentence is to start at the outside and work your way in.

The outermost parentheses on this ugly sentence
(((A ^ ~B) v ~C) -> (~(A v B) <-> C))
are used to connect these two sentences
((A ^ ~B) v ~C)

(~(A v B) <-> C)
with a
->
So the way to build our ugly sentence is to start with these two less ugly sentences:
((A ^ ~B) v ~C)

(~(A v B) <-> C)
and connect them with the main connective
->
We can then analyze each subsentence if we like.
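This outside-in analysis can even be automated. Here is a Python sketch of a `main_connective` function (my own illustration, assuming sentences are fully parenthesized, as required above):

```python
def main_connective(s):
    """Return the main connective of a sentence, or None if it is simple.

    Assumes full parenthesization: every two-place connective comes
    with its own pair of parentheses.
    """
    if s.startswith("~"):
        return "~"              # a leading ~ is always the main connective
    if not s.startswith("("):
        return None             # a bare capital letter: simple sentence
    depth = 0
    for i in range(1, len(s) - 1):
        c = s[i]
        if c == "(":
            depth += 1
        elif c == ")":
            depth -= 1
        elif depth == 0:        # top level, just inside the outer parentheses
            if s.startswith("<->", i):
                return "<->"
            if s.startswith("->", i):
                return "->"
            if c in "^v" and s[i - 1] == " ":
                return c
    return None

print(main_connective("(~B <-> C)"))   # <->
print(main_connective("~(B <-> C)"))   # ~
print(main_connective("(((A ^ ~B) v ~C) -> (~(A v B) <-> C))"))  # ->
```

Notice that the scary sentence comes out as an implication, matching the analysis above.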

I told you before that simple sentences are represented by the capital letters A through T, and that U, V, W, X, Y, and Z are saved for something else. (I rarely use V because it looks too much like the symbol for 'or'.) U, W, X, Y, and Z are used as shorthand for other sentences in logic (some books use italic letters and others use Greek letters, but since I only have plain text to work with, I use the end of the alphabet). I call these "sentence variables."

So, just as we can take the English sentence
There is nothing on TV.
and use the capital letter D to represent it, we can take the sentence in logic
((A ^ ~B) v ~D)
and use the capital letter U to represent it.

[Note: it is also legal to use capital variables to stand for simple sentences. So you can take the simple sentence
B
and use the letter Z to stand for it.]

This can be useful in analyzing complicated sentences. For example, if we have the scary looking sentence
((((A ^ ~B) v (B <-> C)) -> (~(C v D) ^ ~(~A -> ~~D))) v A)
we can start using sentence variables to stand for subsentences. So if U stands for
(A ^ ~B)
Then we have
(((U v (B <-> C)) -> (~(C v D) ^ ~(~A -> ~~D))) v A)
and if V stands for
(U v (B <-> C))
then we have
((V -> (~(C v D) ^ ~(~A -> ~~D))) v A)
If W stands for
~(C v D)
we have
((V -> (W ^ ~(~A -> ~~D))) v A)
and if X stands for
~(~A -> ~~D)
we have
((V -> (W ^ X)) v A)
and if Y stands for
(V -> (W ^ X))
we have
(Y v A)

So we know where our main connective is. And by substituting back in for the sentence variables, we can recreate our sentence in manageable chunks.

It is very important to keep track of what sentence variables stand for when you're doing this kind of substitution. This can be a major source of error if you're not keeping close track of what every letter stands for.
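The same bookkeeping can be done mechanically. Here is a Python sketch that redoes the abbreviation above with string replacement, recording what each sentence variable stands for along the way:

```python
# Abbreviate the scary sentence step by step, exactly as in the text.
s = "((((A ^ ~B) v (B <-> C)) -> (~(C v D) ^ ~(~A -> ~~D))) v A)"

steps = [
    ("(A ^ ~B)",        "U"),
    ("(U v (B <-> C))", "V"),
    ("~(C v D)",        "W"),
    ("~(~A -> ~~D)",    "X"),
    ("(V -> (W ^ X))",  "Y"),
]

for subsentence, variable in steps:
    s = s.replace(subsentence, variable)
    print(variable, "stands for", subsentence, "giving", s)

print(s)  # (Y v A)
```

Keeping the `steps` list around is the "keeping close track" the text warns about: it is the record you need to substitute everything back in.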

.   .   .

Now that we know all the details of the language of symbolic logic, it's time to actually do symbolic logic.

The first step with every sentence is to identify the main connective. The reason is simple:

In symbolic logic, the main connective of a sentence is the only thing that you can work with.
Let's look at our complicated sentence from earlier. The sentence
(((A ^ ~B) v ~C) -> (~(A v B) <-> C))
is fundamentally an implication between these two subsentences:
((A ^ ~B) v ~C)

(~(A v B) <-> C)
There is no
->
in either subsentence, but the sentence as a whole is still first and foremost an implication because of what its main connective is. So when you're trying to figure out how in the heck you can work with this ugly sentence
(((A ^ ~B) v ~C) -> (~(A v B) <-> C))
you need to remember that it is an implication and treat it just as one.

#### What the Connectives Mean

Here's a quick course on what the connectives mean. (I assume you have some familiarity with them.)

| The sentence | is TRUE whenever | is FALSE whenever |
| --- | --- | --- |
| A | A is true | A is false |
| ~A | A is false | A is true |
| (A ^ B) | A is true and B is true | A is false; or B is false; or both A and B are false |
| (A v B) | A is true; or B is true; or both A and B are true | A is false and B is false |
| (A <-> B) | A and B are both true; or A and B are both false | A is false and B is true; or A is true and B is false |
| (A -> B) | A is false; or B is true; or A is false and B is true | A is true and B is false |

This last one is a little weird, so let's think about it. If we translate it back into English, we get

If the dog is brown then the dog weighs 15 lbs.

How would we go about proving that this sentence is false?

Let's say that the dog is brown and the dog weighs 15 lbs. Does that disprove the if...then statement? Certainly not!

What if the dog is brown but the dog weighs 25 lbs.? That does disprove the statement.

What if the dog turns out to be white? Then we cannot disprove the inference because it only makes a prediction about a brown dog. If the dog isn't brown, then we can't test the prediction.

So the only way to make the sentence

(A -> B)
false is to make A true and B false at the same time. Given any other values of A and B, the sentence comes out true.
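To check the table, you can spell out each connective as a little truth function and try every combination of truth values. A Python sketch (the function names are my own):

```python
from itertools import product

# Each connective as a truth function, matching the table above.
def NOT(a):     return not a
def AND(a, b):  return a and b
def OR(a, b):   return a or b
def IMPL(a, b): return (not a) or b   # false only when a is true and b is false
def IFF(a, b):  return a == b

# Try every combination for (A -> B): it fails in exactly one case.
for a, b in product([True, False], repeat=2):
    print("A =", a, " B =", b, " (A -> B) =", IMPL(a, b))
```

Running this shows (A -> B) coming out false only on the line where A is true and B is false, just as the dog example predicts.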

#### The Rules of Logic

Now we're finally ready to learn the rules of logic. There are exactly 12 - no more, no less. Each connective has two rules associated with it, and there are two special rules.

1. Assumptions

The first special rule is the rule of assumptions. It is deceptively easy. The rule is:

You are allowed to assume anything you want at any time.
But there is a catch:
You have to keep track of what assumptions you have made.

Well that makes sense. Let's say you and I are detectives trying to solve a mystery. I could say something like "let's assume for the time being that the dog is brown." Once I said that, we could discuss what that would mean. Anything we conclude from that assumption is perfectly okay, as long as we remember that it was under the assumption that the dog is brown. In other words, whatever we do prove under the assumption that the dog is brown must be followed by a disclaimer "assuming that the dog is brown."

Eventually, we would want to prove something about the case that doesn't depend on the dog being brown. Logicians call this "discharging" the assumption. Fortunately, some of our other rules tell us how to discharge assumptions.

1. When I do derivations, I number each new line. I start a new assumption with an opening curly bracket

{

2. and then I indent everything after the new assumption;

3. when I discharge the assumption, I close the curly brackets

}

4. and then I stop indenting.

One other very important thing to keep in mind:
Once you close off an assumption, you can no longer use any lines between the curly brackets. So since I've closed the curly brackets above, I would no longer be able to use either of the two lines between them: they are gone forever. So lines (2) and (3) above are illegal.

However, line (1) is legal because it is outside the curly brackets, and so is line (4).
This can get complicated if you have assumptions inside of assumptions.

And finally, and perhaps central to logic:
A logical truth is something that you can write with all your assumptions discharged.
Before we can do some short derivations, we need to learn two other rules. Let's start with one of the two rules that we get from the
->
connective.

2. -> Introduction

The rule is called "-> introduction." The way it works is:

If you assume
X
and then you derive
Y
then you are entitled to discharge the assumption and write
(X -> Y)
That makes sense. Let's just say that we assumed
A

[The dog is brown]
And then we did some logic and out of that we proved
E

[The killer is a man]
If we did that, we would be entitled to say to a jury
(A -> E)

[If the dog is brown then the killer is a man]
The sentence
(A -> E)
is true.

3. ^ Elimination

Let's learn one more rule for now. This one is called "^ elimination."

If you have
(X ^ Y)
then you are entitled to
X
and you are also entitled to
Y
That makes sense too. Let's say we knew for a fact that
(A ^ B)

[The dog is brown and the dog weighs 15 lbs]
Then we would certainly be entitled to conclude
A

[The dog is brown]
and we would also certainly be entitled to conclude
B

[The dog weighs 15 lbs]

A Derivation

Now let's take an example of a derivation. Suppose I wanted to prove that this is a logical truth
((A ^ B) -> A)
I would start by identifying the main connective, which is ->. To introduce a new ->, I assume the left side and then derive the right side. Let's try it:
```
   {
   1) (A ^ B)                  [assumption]
   2) A                        [^elim on 1]
   }
3) ((A ^ B) -> A)              [->intro on 1-2]
```
We just used our 3 rules to derive
((A ^ B) -> A)

[If (the dog is brown and the dog weighs 15 lbs) then the dog is brown]
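The derivation left no assumptions open, so ((A ^ B) -> A) should come out true for every possible truth value of A and B. A quick brute-force check in Python:

```python
from itertools import product

# ((A ^ B) -> A) written with Python's and/or/not, checked in all four
# cases.  Every assumption was discharged, so it should never fail.
for a, b in product([True, False], repeat=2):
    sentence = (not (a and b)) or a   # ((A ^ B) -> A)
    assert sentence
print("((A ^ B) -> A) is true in all four cases: a logical truth")
```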

4. Repetition

There's one other special rule. It's called "repetition." The rule simply says that if you have
X
Then you are entitled to write
X
Provided that it was not inside a closed curly bracket.

5. ^ Introduction

The other rule with ^ is called "^ introduction." It says, if you have
X
and you also have
Y
then you are entitled to
(X ^ Y)
That makes sense too. Let's say that I have already proven
E

[The killer is a man]
And I have also proven
F

[The killer is tall]
Then I am certainly allowed to say to the jury
(E ^ F)

[The killer is a man and the killer is tall]

Continuing the Derivation

Let's continue our derivation using our new rules.

```
   {
   1) (A ^ B)                  [assumption]
   2) A                        [^elim on 1]
   }
3) ((A ^ B) -> A)              [->intro on 1-2]
   {
   4) C                        [assumption]
   5) ((A ^ B) -> A)           [repetition of 3]
   6) (((A ^ B) -> A) ^ C)     [^intro on 4 and 5]
   }
7) (C -> (((A ^ B) -> A) ^ C)) [->intro on 4-6]
```

6. -> Elimination

The other rule for -> is called "-> elimination." It says that if you have
X
and you have
(X -> Y)
then you are entitled to
Y
That makes sense too. If I know
A

[The dog is brown]
and I know
(A -> E)

[If the dog is brown then the killer is a man]
then I am certainly entitled to conclude
E

[The killer is a man]

Let's add a little more to our derivation:

```
   {
   1) (A ^ B)                  [assumption]
   2) A                        [^elim on 1]
   }
3) ((A ^ B) -> A)              [->intro on 1-2]
   {
   4) C                        [assumption]
   5) ((A ^ B) -> A)           [repetition of 3]
   6) (((A ^ B) -> A) ^ C)     [^intro on 4 and 5]
   }
7) (C -> (((A ^ B) -> A) ^ C)) [->intro on 4-6]
   {
   8) (A ^ B)                  [assumption]
   9) ((A ^ B) -> A)           [repetition of 3]

      [Note: Line (9) is NOT a repetition of (5), because
      (5) is inside closed curly brackets. (3) is not,
      so it is okay to repeat it here.]

  10) A                        [->elim on 8 and 9]
```
[Note: I did not discharge the assumption I made on line (8). So
A
is not a logical truth; it is true only on the assumption that
(A ^ B)
is true.]

7. <-> Introduction

The next two rules have to do with <->. The first is called "<-> introduction." It states that if you have
(X -> Y)
and you have
(Y -> X)
then you are entitled to
(X <-> Y)
This one is a little tricky to explain, and the best way (I'm sorry to say) is truth tables. So you should try all the possible combinations for X and Y and convince yourself that if
(X -> Y)
and
(Y -> X)
are both true, then
(X <-> Y)
must be true too.

8. <-> Elimination

The next rule is called "<-> elimination." This one says that if you have
(X <-> Y)
And you have
X
Then you are entitled to
Y
OR

If you have
(X <-> Y)
And you have
Y
Then you are entitled to
X
This makes sense because if you know
(X <-> Y)
then you know that X and Y have the same truth value. So if you know one of them is true, then the other must also be true.

A New Derivation

Let's start a new derivation.

```
   {
   1) (A ^ B)                  [assumption]
   2) A                        [^elim on 1]
   3) B                        [^elim on 1]
   4) (B ^ A)                  [^intro on 2 and 3]
   }
 5) ((A ^ B) -> (B ^ A))       [->intro on 1-4]
   {
   6) (B ^ A)                  [assumption]
   7) B                        [^elim on 6]
   8) A                        [^elim on 6]
   9) (A ^ B)                  [^intro on 7 and 8]
   }
10) ((B ^ A) -> (A ^ B))       [->intro on 6-9]
11) ((A ^ B) -> (B ^ A))       [repetition of 5]
12) ((A ^ B) <-> (B ^ A))      [<->intro on 10 and 11]
```

9. ~ Introduction

Next we have "~ introduction." It says that if you assume
X
and then you derive a contradiction, that is, some sentence
Y
followed on another line by the negation of that sentence
~Y
then you are entitled to discharge the assumption and write
~X
This rule is the familiar "reductio ad absurdum." An easy way to think of it is this. If we assume
~F

[The killer does not have red hair]
And we prove from that
A

[The dog is brown]
and
~A

[The dog is not brown]
then something is wrong with our assumption.

10. ~ Elimination

"~ elimination" is almost identical. It says that if you assume
~X
and derive a contradiction, then you are entitled to discharge the assumption and write
X

A Quick Derivation

```
   {
   1) (A ^ ~A)                 [assumption]
   2) A                        [^elim on 1]
   3) ~A                       [^elim on 1]
   }
4) ~(A ^ ~A)                   [~intro on 1-3]
```

Lastly, let's look at the rules for v.

11. v Introduction

The first is "v introduction." It says that if you have
X
then you are entitled to write
(X v Y)
no matter what Y is.

That seems a little strange. Normally you wouldn't think you can just go throwing any old sentence into a derivation. But remember
(X v Y)
is true as long as X is true OR Y is true OR both are true. So if you already know that X is true, then the disjunction of X and anything else will be true.

A Short Derivation

```
   {
   1) A                        [assumption]
   2) A                        [repetition of 1]
   }
3) (A -> A)                    [->intro on 1-2]
4) ((A -> A) v B)              [vintro on 3]
```

12. v Elimination

The last rule is a little tricky. It's "v elimination." It says if you have
(X v Y)
and you have
(X -> Z)
and you have
(Y -> Z)
Then you are entitled to
Z

[Most of the time this means that when you have a disjunction that you don't know what to do with, you have to derive an implication for each side of the disjunction before you can go on.]

The rule is hard to do with derivations, but it is actually not too hard to understand if you take an example.

Let's say we know
(A v B)

[The dog is brown or the dog weighs 15 lbs]
And we know
(A -> E)

[If the dog is brown, then the killer is a man]
And we know
(B -> E)

[If the dog weighs 15 lbs, the killer is a man]
Then we don't have to bother figuring out whether A is true or B is true; either way we are entitled to
E

[The killer is a man]
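You can convince yourself that this rule can never lead you astray by checking all eight combinations of truth values. A Python sketch:

```python
from itertools import product

# Whenever (X v Y), (X -> Z), and (Y -> Z) are all true,
# Z itself must be true: "v elimination" never loses truth.
for x, y, z in product([True, False], repeat=3):
    premises = (x or y) and ((not x) or z) and ((not y) or z)
    if premises:
        assert z
print("v elimination checked in all eight cases")
```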

Continuing Our Last Derivation

Let's continue our last derivation to get a demonstration of "velim."

```
   {
   1) A                        [assumption]
   2) A                        [repetition of 1]
   }
 3) (A -> A)                   [->intro on 1-2]
 4) ((A -> A) v B)             [vintro on 3]
   {
   5) (A -> A)                 [assumption]
      {
      6) C                     [assumption]
      7) C                     [repetition of 6]
      }
   8) (C -> C)                 [->intro on 6-7]
   }
 9) ((A -> A) -> (C -> C))     [->intro on 5-8]
   {
  10) B                        [assumption]
      {
     11) C                     [assumption]
     12) C                     [repetition of 11]
      }
  13) (C -> C)                 [->intro on 11-12]
   }
14) (B -> (C -> C))            [->intro on 10-13]
15) ((A -> A) v B)             [repetition of 4]
16) ((A -> A) -> (C -> C))     [repetition of 9]
17) (B -> (C -> C))            [repetition of 14]
18) (C -> C)                   [velim on 15, 16, 17]

[Not the most efficient way to prove (C -> C), but it is valid.]
```

There are a lot of other rules people try to tell you, but anything you can do with those, you can do with these 12 rules.

#### Why These 12 Rules? A Review

The reason I like these rules is that with these rules you can do any derivation using the same five steps:

Step 1: Find the main connective of the sentence you are trying to derive.

Step 2: Apply the rule for introducing that main connective.

Step 3: When you're in the middle of a derivation and you don't know what to do, find the main connective of the sentence you have and eliminate it.

Step 4: Along the way you may have to derive subsentences using steps 1 through 3.

Step 5: If all else fails, you may have to do a "~ elimination" [I'll explain this step a little later].
If you use those five steps, you should always know which rule to use. The reason is that there are ONLY four things you are ever allowed to do in a derivation:
1. Eliminate the main connective of the sentence you are on.

2. Use the sentence you are on to eliminate the main connective of another sentence (AS LONG AS THAT OTHER SENTENCE ISN'T CLOSED OFF IN CURLY BRACKETS).

3. Repeat an earlier line that isn't closed off in curly brackets.

4. Make a new assumption.

Mundane Rules: What Do You Have?

Now that we have the steps for doing derivations, let me try to explain that confusing business about discharging assumptions. I'm going to approach this from a slightly different angle this time.

Of the 12 rules I gave you, 8 are pretty straightforward. They are what I would call the "Mundane Rules." The way Mundane Rules work is: they say "if you have X and Y and Z, then you are entitled to U."

The tricky thing with Mundane Rules is knowing what you "have."

You "have" any sentence that is written down on a line of the derivation except those which are closed off in curly brackets (which are gone forever once the brackets close).

Being "entitled" to something just means that you can legally write it down as the next line of the derivation.

The easiest Mundane Rule is repetition:

If you have
X
then you are entitled to
X

Another Mundane Rule is ^ introduction:

If you have
X
and you have
Y
then you are entitled to
(X ^ Y)

Simple enough. (I went into more detail on _why_ this is a sound rule above.)

Another pretty easy Mundane Rule is ^ elimination:

If you have
(X ^ Y)
then you are entitled to
X
Or, if you prefer, you are also entitled to
Y

So far so good.

Another Mundane Rule is -> elimination:

If you have
X
and you have
(X -> Y)
then you are entitled to
Y

This is actually the same thing as Modus Ponens, so you can call it that if you prefer. Since I don't speak Latin, I prefer calling it "-> elimination" because that is more descriptive of what the rule is doing.

Another Mundane Rule is <-> introduction:

If you have
(X -> Y)
and you have
(Y -> X)
then you are entitled to
(X <-> Y)
This one is a little tricky to explain. Let's assume somehow we have
(X -> Y)
Under what conditions could that be true? There are 3 possibilities:
X is true and Y is true

X is false and Y is true

X is false and Y is false
Also, we have
(Y -> X)
That can only be true under these conditions:
X is true and Y is true

X is true and Y is false

X is false and Y is false
Since we "have" both of these sentences, then they must both be true. So under what conditions are they both true? Well, only these two:
X is true and Y is true

X is false and Y is false
Which are exactly the conditions for:
(X <-> Y)

Which means we are entitled to write that.

Another Mundane rule is <-> elimination:

If we have
(X <-> Y)
and we have
X
then we are entitled to
Y
OR:

If we have
(X <-> Y)
and we have
Y
then we are entitled to
X

Another Mundane Rule is v introduction:

If you have
X
then you are entitled to
(X v Y)
and you are also entitled to
(Y v X)

This is a little tricky too. We "have" X, which means X must be true. Now we can just, out of the blue, pick any sentence we like and put it into a disjunction with X. Why can we do that? Well, let's say we pick a FALSE sentence. Is that still okay?

Yes, it is! Even if Y is false, the disjunction with X is still true, so we haven't written a false sentence, and we are still okay.

The last Mundane Rule is v elimination:

If you have
(X v Y)
and you have
(X -> Z)
and you have
(Y -> Z)
then you are entitled to
Z
So much for the Mundane Rules. Mundane Rules are useful in derivations because they let you move from one step to the next. They tell you what you can do with the sentences you have. They also can give you a hint as to what you need to do next. For example, if you have
(X v Y)
and you want to eliminate the v, but you don't have
(X -> Z)

(Y -> Z)
yet, then you'd better go get those two sentences.

The problem with the Mundane Rules is that they only let you play around with sentences you already HAVE. You can't get anything NEW out of them.
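Still, you can convince yourself that each Mundane Rule is safe, meaning it never lets you write a false sentence from true ones, with a brute-force check. This Python sketch (the `entails` helper is my own invention, not a standard library function) tests v elimination, and the same helper works for any of the other rules:

```python
from itertools import product

def entails(premises, conclusion, n_vars):
    """True if every assignment that makes all the premises true
    also makes the conclusion true."""
    for values in product((True, False), repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False
    return True

# v elimination: from (X v Y), (X -> Z), and (Y -> Z), conclude Z.
assert entails(
    [lambda x, y, z: x or y,          # (X v Y)
     lambda x, y, z: (not x) or z,    # (X -> Z)
     lambda x, y, z: (not y) or z],   # (Y -> Z)
    lambda x, y, z: z,                # Z
    3,
)

print("v elimination is truth-preserving")
```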

So far we've gone over the 8 Mundane Rules. There are 12 rules in total. Of the 4 remaining, one is a 'Special Rule' and three are 'Fun Rules'.

Special Rule

The Special Rule is the rule of assumptions:

You are free to assume anything you like at any time as long as you do these things:
1. Use curly brackets and indentation to keep track of what you have assumed.

2. Only discharge the assumption using one of the Fun Rules.

[Discharging an assumption just means you close the curly brackets and stop indenting. So you can forget about the assumption.]

Three 'Fun' Rules

The three Fun Rules all have this form:

If you assume
X
and then, on that assumption, you derive
Y
You can discharge the assumption you made at X and then you are entitled to
Z

The first Fun Rule is -> introduction:

If you assume
X
And then, on that assumption, you derive
Y
You can discharge the assumption you made at X and then you are entitled to
(X -> Y)

This is, in my opinion, the most important and fundamental rule in logic. It is the foundation of all logic. [It's also really important for derivations. If you look up at the Mundane Rules, a lot of them require you to have sentences of the form (X -> Y) to apply them.]

The justification is that if you assume (but don't prove)
A

[The dog is brown]
and then, on that assumption, you derive
P

[The floor is wet]
then you HAVE NOT proven that the floor is wet, but you have PROVEN (no assumptions required) that
(A -> P)

[If the dog is brown, then the floor is wet.]

The last two Fun Rules are closely related. One is ~ introduction:

If you assume
X
And then, on that assumption, you derive
Y
and
~Y
you can discharge the assumption you made at X; then you are entitled to
~X

[You may have been taught this rule as reductio ad absurdum.] The idea is that if assuming X leads you to a contradiction, then there must have been something contradictory ABOUT X ITSELF. So X must be false. If X is false, then by definition ~X is true: no ifs, ands, buts, or assumptions about it.

The last Fun Rule is ~ elimination:

If you assume
~X
and then, on that assumption, you derive
Y
and
~Y
you can discharge the assumption you made at ~X and then you are entitled to:
X

The idea here is basically the same. ~X is contradictory and therefore false, so X is proven true.

I have another nickname for ~ elimination: I call it the "Fallback Rule." Every other rule works either by introducing the main connective of the sentence you want or by eliminating the main connective of a sentence you have. But take a look back at what I just did with ~ elimination. I just proved X is true. There's no way to tell by looking at X that you can prove it by eliminating a ~, but you can. [Actually, any provable sentence can be derived with ~ elimination, though it is sometimes hard to do.]

So this is where Step 5 of the derivations comes from. If you are trying to prove some sentence X, the first thing to try is to introduce the main connective of X. But if that leads to a dead end, assume ~X and try to derive a contradiction.

Those are the rules reviewed and better organized so that they make sense, and the difficult bit about discharging assumptions is (I hope) a little clearer.

One other thing to watch out for. Some logic problems ask you to prove that a certain sentence is a logical truth. On those problems, you have to discharge all your assumptions and prove that the sentence is true with no assumptions (that is, write it without indenting and outside of all the curly brackets). I'll do an example of a derivation like that in a minute.

#### Deriving a Conclusion

Other logic problems give you a list of "givens" or "hypotheses" and ask you to derive a conclusion from them. In those problems, what they are saying is that you need to assume the hypotheses, but not discharge those assumptions. Let me give you an example:

Given:
A

(A -> B)

(~B v C)
Prove:
C

Here we go:

```    {

1)  A                                 [assumption, given]

{

2)  (A -> B)                      [assumption, given]

{

3) (~B v C)                   [assumption, given]

4) A                          [repetition of 1]

5) (A -> B)                   [repetition of 2]

6) B                          [->elim on 4&5]
```

Now what do we do? We have a disjunction on 3 that we don't know what to do with, so we need to eliminate it. But in order to eliminate it, we need to get (~B -> something) and (C -> something).

Let's first work on getting (~B -> something).

```                {       [Note: this assumption isn't given,
so we're going to have to discharge it]

7) ~B                     [assumption]
```

Let's see if we can derive C. If we derive (~B -> C) then we'll be most of the way to finishing the problem. How can we derive C? Well, we should try to introduce the main connective. But wait! There is no main connective. C is just a simple sentence. So what can we do? I guess we have to do step 5, try ~elimination.

```                    {    [another assumption we'll have to discharge]

8) ~C                 [assumption]

9) B                  [repetition of 6]

10) ~B                 [repetition of 7]

}                     [Closes off lines 8-10]

11) C                     [~elim on 8-10]
```

Up to this point we hadn't closed off any assumptions. That means that all of the lines up to this point were sentences that we "have" and can use. But now we just closed off lines 8-10 by discharging the assumption at 8. That means that lines 8-10 are gone: they are off-limits and illegal forever. The good news, though, is that we derived C, so now we can discharge the assumption we made at line 7.

```                }                         [Closes off 7-11]

12) (~B -> C)                 [->intro on 7-11]
```

This may seem like a bit of sleight of hand, like I'm trying to pull the wool over your eyes. How can I use the assumption I made at line 7 as part of the contradiction? I just did a ~elimination to prove C, but there was nothing contradictory about ~C itself; the contradiction was that I had B and then I assumed ~B. This is the familiar refrain "anything can be proven from a contradiction." Once I assumed ~B, I could've proven (~B -> anything-I-want); I chose to prove (~B -> C) because I eventually want to get C.

Now we have made some progress on eliminating the disjunction we had on line 3: (~B v C). We have (~B -> C); now we need (C -> C), so let's go get it.

```                {

13) C                     [assumption]

14) C                     [repetition of 13]
```

Notice that I can repeat 13 because I have not yet discharged that assumption. However, I cannot repeat the C on line 11 because I closed off line 11 already, so it is gone forever.

```                }       [Closes off 13-14]

15) (C -> C)                  [->intro on 13-14]

16) (~B v C)                  [repetition of 3]

17) (~B -> C)                 [repetition of 12]

18) (C -> C)                  [repetition of 15]

19) C                         [velim on 16,17,&18]
```

Not all of those repetitions were necessary, since we "had" those lines already (they hadn't been closed off), but I added them for clarity.

You should go back and double-check the derivation to make sure that I never broke the rules by using a line that was closed off and that I didn't break any other rules. Also, make sure that I discharged all the assumptions except the three I was given at the start.

#### Deriving a Sentence

As I said before, the other type of problem is where you are handed a sentence and told to derive it. In this problem, you can make any assumptions you need, but you have to discharge all of them and end up with the sentence you're looking for at the end. This usually involves finding the main connective of the sentence you're supposed to prove and then introducing it (sometimes you have to find other sentences too: for example if the main connective is ^, you need to prove each half of the sentence and then do an ^intro).

Sometimes trying to do this will get you to a dead end, and then you may try to assume the negation of the sentence you're trying to get and see if you can find a contradiction.

Let's do an example. Let's try to prove

(((~A ^ ~C) v (~C <-> B)) -> (B -> ~C))

The main connective is a ->, so let's introduce it. To do that we need to assume the left and derive the right.

```    {

1)  ((~A ^ ~C) v (~C <-> B))           [assumption]
```

Here we have a disjunction, so we need to eliminate it. That means we need to find two entailments. It would be great if we had ((~A ^ ~C) -> (B -> ~C)) and ((~C <-> B) -> (B -> ~C)), so let's try to get those. First, let's work on ((~A ^ ~C) -> (B -> ~C)).

```        {

2) (~A ^ ~C)                       [assumption]

3) ~A                              [^elim on 2]

4) ~C                              [^elim on 2]
```

We actually don't need line 3, but it's good practice to get both sides of an ^ while you can just in case you might need them later. Now we have ~C, but what we want is (B -> ~C), so let's work toward getting that.

```            {

5) B                           [assumption]

6) ~C                          [repetition of 4]

}           [closes 5-6]

7)  (B -> ~C)                      [->intro on 5-6]

}               [closes 2-7]

8)  ((~A ^ ~C) -> (B -> ~C))           [->intro on 2-7]
```

So now what we need to finish the velimination on line 1 is to derive ((~C <-> B) -> (B -> ~C)).

```        {

9)  (~C <-> B)                     [assumption]
```

So now we need (B -> ~C).

```            {

10) B                          [assumption]

11) ~C                         [<->elim on 9-10]
```

If you wanted to, you could make this a little clearer by repeating (~C <-> B) and then doing the <->elim, but it's not necessary: since we have both (~C <-> B) and B, we are entitled to ~C.

```            }             [closes 10-11]

12) (B -> ~C)                      [->intro on 10-11]

}                 [closes 9-12]

13) ((~C <-> B) -> (B -> ~C))          [->intro on 9-12]
```

Now we can do our velimination on line 1 because we have ((~C <-> B) -> (B -> ~C)) and ((~A ^ ~C) -> (B -> ~C)). If you want, for clarity, you can repeat line 1 and line 8, but it's not necessary.

```    14) (B -> ~C)                          [velim on 1,8,13]

}                                          [closes 1-14]

15) (((~A ^ ~C) v (~C <-> B)) -> (B -> ~C))    [->intro on 1-14]
```

As always, you should go back and double-check the derivation once you're done.

The hardest thing about doing derivations is figuring out what to do next. When you have a lot of random rules with Latin names to choose from, it's difficult. This set of rules helps you to know what to do by either introducing what you're trying to get or eliminating what you have.

My advice to you is to try to do some of the derivations in your book or that you had for your class using these rules. Any derivation is possible with them. It takes a lot of time to learn logic and have it sink in, but if you take it slowly enough and practice, it will become easier. Get comfortable with these 12 basic rules and the 5-step method for doing derivations.

#### Rules with Latin Names

In many (probably most) places, you don't learn logic with these 12 rules. Even though I think these rules make the most sense and allow a straightforward approach to solving any problem in symbolic logic, more advanced students may want to study rules such as Modus Tollens and DeMorgan's Law.

The twelve rules I've presented here are systematic and straightforward, and all of them move by baby steps. The way I think of these rules (in some cases this is not historically accurate) is that logicians noticed that when doing derivations, they often repeated the same steps over and over. Eventually, someone decided that rather than doing these same five or ten steps, you can take shortcuts.

Modus Tollens serves as an instructive example. Let's say I have:

(A -> B)
And I have:
~B
And I am trying to get:
~A
Here's what I'd have to do. Since I'm trying to find ~A, I'll do a ~introduction on A:
```    {

1)  (A -> B)           [assumption, given]

{

2)  ~B             [assumption, given]

{

3)  A          [new assumption]

4)  B          [->elim on 1 and 3]

5)  ~B         [repetition of 2]

}              [closes off 3-5]

6)  ~A             [~intro on 3-5]
```

We have to do this so often that we just call these five steps "Modus Tollens." Modus Tollens is a shortcut rule. There are several others too, some more involved.

The following is a list of the major rules, together with a justification of why each of them is valid and a short example of how you might use some of the more challenging ones.

1. Modus Ponens

This is one of the most straightforward laws in logic. It states that if you have

(X -> Y)

and you have

X

then you are entitled to:

Y

This is just what we've been calling "-> elimination."

The reason it works is that we are given (X -> Y), which means that X cannot be true at the same time Y is false. So if X is true (which is the other given), then Y must be true as well, and we are free to conclude Y is true.

Example: "If it is raining, then there are clouds" and "it is raining" together imply "there are clouds."
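As a sanity check, here is the rule verified over all four truth assignments in Python (the `implies` helper is my own shorthand for the -> connective, not standard notation):

```python
def implies(x, y):
    # Material conditional: false only when X is true and Y is false.
    return (not x) or y

# Modus ponens: whenever (X -> Y) and X are both true, Y must be true.
for x in (True, False):
    for y in (True, False):
        if implies(x, y) and x:
            assert y

print("modus ponens never yields a false conclusion from true premises")
```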

2. Modus Tollens

This law is just the flip side of modus ponens. It states that if you have

(X -> Y)

and you have

~Y

then you are entitled to

~X

The reason this works is that we are again given (X -> Y). This means that X cannot be true at the same time Y is false. So if Y is false (which is the other given), then X must be false as well. So we are free to conclude X is false (or ~X is true).

Example: "If it is raining, then there are clouds" and "there are no clouds" together imply "it is not raining."
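Modus tollens rests on contraposition: (X -> Y) and (~Y -> ~X) are true under exactly the same assignments. Here is a quick Python check (with `implies` as my own shorthand for ->):

```python
def implies(x, y):
    # Material conditional: false only when X is true and Y is false.
    return (not x) or y

# Contraposition: (X -> Y) and (~Y -> ~X) agree on every assignment,
# which is why modus tollens works.
for x in (True, False):
    for y in (True, False):
        assert implies(x, y) == implies(not y, not x)
        # So whenever (X -> Y) and ~Y hold, ~X follows:
        if implies(x, y) and not y:
            assert not x

print("modus tollens is truth-preserving")
```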

3. DeMorgan's Law (I)

DeMorgan came up with a couple of sets of equivalences. The first is that if you have

~(X ^ Y)

then you can conclude

(~X v ~Y)

and if you have

(~X v ~Y)

then you can conclude

~(X ^ Y)

The reason this works is that our starting point is ~(X ^ Y), which is the negation of (X ^ Y). Now, (X ^ Y) can only be true if X is true and Y is also true. So (X ^ Y) will be false if X is false or if Y is false. That is, (X ^ Y) will be false if (~X v ~Y) is true. So ~(X ^ Y) is equivalent to (~X v ~Y).

Example: "My dog is fat, or my cat is fat" is equivalent to "It is not true that both my dog and cat are thin."

4. DeMorgan's Law (II)

The second equivalence which bears DeMorgan's name is that

~(X v Y)

is interchangeable with

(~X ^ ~Y)

The only way in which ~(X v Y) can be true is if X and Y are both false. So the two expressions can be interchanged just like in the first law.

Example: "My dog is fat and my cat is fat" is equivalent to "It is not true that my dog is thin or my cat is thin."
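Both of DeMorgan's laws can be verified by checking every truth assignment. A short Python sketch (the function names are my own choices):

```python
from itertools import product

def demorgan_1(x, y):
    # ~(X ^ Y) has the same truth value as (~X v ~Y).
    return (not (x and y)) == ((not x) or (not y))

def demorgan_2(x, y):
    # ~(X v Y) has the same truth value as (~X ^ ~Y).
    return (not (x or y)) == ((not x) and (not y))

assert all(demorgan_1(x, y) and demorgan_2(x, y)
           for x, y in product((True, False), repeat=2))

print("both DeMorgan laws hold under all four truth assignments")
```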

5. Hypothetical Syllogism

The rule here is that if you have

(X -> Y)

and you have

(Y -> Z)

then you can conclude

(X -> Z)

Here's why:

We know that "if X is true, then Y is true." And we know that "if Y is true, then Z is true." But we don't know anything about whether any of the letters are actually true or not.

Let's assume (or hypothesize) for a second that X is true. Then, by modus ponens, Y is true. And then by modus ponens again, Z is true. So: If we assume X is true, then we conclude Z is true. Since we didn't know X was true, we cannot take Z home with us, but we can say that "If X was true, then Z would be true." This is equivalent to saying "If X, then Z" or (X -> Z).

Example: "If it is raining, then there are clouds" together with "if there are clouds, then the sun will be blocked" imply "if it is raining, then the sun will be blocked."
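Here is the same hypothesizing argument compressed into a brute-force Python check (the `implies` helper is my own shorthand for ->):

```python
from itertools import product

def implies(x, y):
    # Material conditional: false only when X is true and Y is false.
    return (not x) or y

# Hypothetical syllogism: (X -> Y) and (Y -> Z) together entail (X -> Z).
for x, y, z in product((True, False), repeat=3):
    if implies(x, y) and implies(y, z):
        assert implies(x, z)

print("hypothetical syllogism is truth-preserving")
```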

6. Disjunctive Syllogism

The rule here is that if you have

(X v Y)

and you have

~X

then you can conclude

Y

Here's why:

We know first of all that "X or Y is true." We also know that X is false. If X or Y is true, and X is false, then Y has no choice but to be true. So we can conclude that Y is true.

Example: "My dog is fat or my cat is fat" together with "my dog is thin" imply "my cat is fat."
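A quick Python sanity check (the function name is my own invention) confirms that whenever both premises are true, Y is true:

```python
# Disjunctive syllogism: from (X v Y) and ~X, conclude Y.
def ds_holds(x, y):
    premises = (x or y) and (not x)
    # If the premises aren't both true, the rule says nothing;
    # otherwise Y must be true.
    return (not premises) or y

assert all(ds_holds(x, y) for x in (True, False) for y in (True, False))

print("disjunctive syllogism is truth-preserving")
```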

7. Reductio Ad Absurdum

This rule states that if you assume

X

and, from that, you conclude a contradiction, such as

(Y ^ ~Y)

then you can conclude that your assumption was false, and

~X

must be true. This is just what we've been calling "~ introduction," which is explained in more detail above.

8. Double Negation

This rule simply states that if you have

~~X

then you can interchange that with

X

which should be apparent from what the ~ means.

9. Switcheroo

(I've heard that this was actually named after a person, but I don't know that for certain.)

This is a shortcut rule which states that if you have

(X v Y)

then you can interchange that with

(~X -> Y)

To understand why, let's think about (~X -> Y). This says that ~X cannot be true at the same time that Y is false. Or, to put that another way, X cannot be false at the same time Y is false.

So (~X -> Y) can only be false when X and Y are both false. Similarly, the only way for (X v Y) to be false is to have X and Y both false. So the two expressions are true unless X and Y are both false, so they have the same "truth conditions" and are therefore equivalent (i.e. interchangeable).

Example: "My dog is fat, or my cat is fat" is equivalent to "If my dog is thin, then my cat is fat."

(This one is hard to wrap your mind around, but think about what must be true/false about the world in order to make each statement true or false and it should eventually become clear.)
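If it helps, here's a quick Python check over all four assignments (the `implies` helper is my own shorthand for ->):

```python
def implies(x, y):
    # Material conditional: false only when X is true and Y is false.
    return (not x) or y

# Switcheroo: (X v Y) and (~X -> Y) have identical truth conditions.
for x in (True, False):
    for y in (True, False):
        assert (x or y) == implies(not x, y)

print("(X v Y) and (~X -> Y) are interchangeable")
```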

10. Disjunctive Addition

This is just what we've been calling "v introduction."

11. Simplification

This is just what we've been calling "^ elimination."

12. Rule of Joining

This is just what we've been calling "^ introduction."

The problem with shortcut rules is that they're easy to misuse. In my opinion, the best way to learn them is to practice with the twelve systematic rules and if you find yourself doing the same steps over and over, you may have found a shortcut rule.

If there's a rule you don't understand, try to use the twelve systematic rules to figure out how the rule works. Once you see the steps in deriving the rule and you know why it is a valid shortcut, you won't have any trouble using it. And remember, if you get stuck and don't know what to do, you can always fall back on the twelve systematic rules.

Another topic which comes up often in logic is how to translate complicated English sentences into logical notation. There are some pages in the Dr. Math archive that can help with that.

This is really just an introduction to logic. There are more detailed systems that allow you to systematize properties of objects, other possible worlds, the relation between knowledge and belief, and even the logic of obligations. You can find more on how to go beyond the basics in the Dr. Math archive.

Best of luck,

- Doctor Achilles, The Math Forum