Date: 06/27/2002 at 20:23:31

My question isn't exactly how to do a specific problem; it is to ask you if logic is a type of thing where either you get it or you don't. I recently had to drop symbolic logic because I just couldn't get it! Especially when we started doing derivations with rules of replacement like modus ponens. Derivations for SD+ are the most confusing, but I can't even get SD. Is logic something where either you get it or you don't?

Date: 06/29/2002 at 02:05:27

Hi Carrie,

Thanks for writing to Dr. Math. Symbolic logic is something that you can master. The hardest thing about symbolic logic is learning how to work with the symbols. Once you know what all the symbols stand for, the logic should come more easily. I'll try to give you a bit of a crash course in basic symbolic logic using an approach that I hope will help. Another place you can turn to is the Logic section of the Dr. Math archives.

Philosophers and Logicians

First, I'd like to do a bit of philosophizing as a way to lead into the logic. Philosophers and logicians have a lot of overlap in what they do. Many logicians are also philosophers, and all philosophers are logicians to some extent (some much more so than others). Given that there is such a connection between philosophers and logicians, I find it striking just how radically different the fields are. Philosophers are interested in finding deep truths about the world, be they epistemological, metaphysical, ethical, etc. Logicians (qua logicians) are only interested in using a set of rules to manipulate arbitrary symbols that have no relevance to the world. The (sometimes difficult) marriage between philosophy and logic comes from the fact that everyone in the world (except, I would argue, people who are commonly called "crazy") accepts the truths proven by logic to be universally true and unquestionable.
Philosophy needs logic because in order to establish that a philosophical doctrine is true, one needs to show that the doctrine is universally and unquestionably true. One needs, in other words, to make a demonstration that everyone would accept as proof that the proposition is true. To do that, the philosopher needs logic.

Logic Takes Small Steps

Logic accomplishes this magical universal acceptance because it makes only little tiny steps. It does silly little things like:
ASSUMING: The dog is brown.
AND ASSUMING: The dog weighs 15 lbs.
I CONCLUDE: The dog is brown and the dog weighs 15 lbs.

Shorthand Sentences

Logicians, though, are very lazy people. They don't like to write long derivations in English because English sentences can be fairly long. So what they do instead is a kind of shorthand. If you give a logician a sentence like

The dog is brown.
he will pick a letter and assign it to that sentence. He now knows that the letter is just shorthand for the sentence. The way I learned logic, capital letters are used for sentences (with the exception of U, V, W, X, Y, and Z; I'll get to those later). So let's just start at the beginning of the alphabet and use the letter "A" to represent the sentence "The dog is brown." While we're at it, let's use the letter "B" to represent the sentence "The dog weighs 15 lbs." In addition to saving time and ink, this practice of using capital letters to represent whole sentences has a couple of other advantages. The first is that to a logician, not every word is as interesting as every other. Logicians are extremely interested in the following list of words:
and
or
if...then
if and only if
not

They call these words "connectives." This is because you can use them to connect sentences that you already have together to make new sentences. When you write in English, those words don't stand out; they just get lost in the middle of sentences. Logicians want to make sure the words look special, so they take the whole rest of the sentence (the part they don't care about) and use a single letter to represent that. Then their favorite words stand out. Let's rewrite our earlier example about the dog using our logician's shorthand:
ASSUMING: A
AND ASSUMING: B
I CONCLUDE: A and B

The other advantage of using capital letters to represent sentences is that you ignore all the information that isn't relevant to what you're trying to do. For the derivation I did above, it didn't matter that the sentences were both about some dog. It didn't matter that they were about weight or color. They could have just as easily been sentences about how tall the dog is or about a cat or a person or a war or whatever. And if we can do the derivation for A and B, then we can do the same exact derivation for C and D or E and N or any other sentences we like.

Connectives

Now, as I said before, logicians are lazy. They really don't want to have anything to do with English. So instead of using the English words:
and
or
if...then
if and only if
not

they make up their own symbols for these:

and             ^
or              v
if...then       ->
if and only if  <->
not             ~
(Sometimes they also use a triple equals sign for '<->', but I can't type that.) Here are some examples:

(A ^ B)
(A v B)
(A -> B)
(A <-> B)
~A
There are a few things to notice here: every time a two-place connective (^, v, ->, <->) joins two sentences, the new sentence gets its own set of parentheses; the ~, by contrast, attaches to the front of a sentence and takes no parentheses of its own.
Parentheses

I know that a lot of books and instructors claim that it is okay to drop the outermost parentheses in a sentence. I've done it myself many times. And 95% of the time it won't cause you trouble if you're careful. But let's say we started with this 'sentence'

A ^ B

and decided to negate it. Well, the way to negate a sentence is to stick a ~ on the front, so let's do that:

~A ^ B

But wait! What we did there was just negate the A. We wanted to negate the whole sentence. If we were really sharp, then we might notice that somebody had given us an illegitimate sentence that was missing parentheses, and so we would add the parentheses before adding the ~, to get:

~(A ^ B)
It seems silly to make such a big deal about parentheses when we're dealing with simple sentences, but when you're doing a 30-line derivation and you're tired, it's easy to make a mistake just like that on line 17 and get yourself into real trouble. It's better to just remember the simple rule and always add parentheses when you have a two-place connective. Let's take a deep breath and then go quickly over what we have so far.

Using Connectives

Connectives are logical terms that join sentences together to make new sentences.
A simple sentence is one that has no connectives. For example: A (the dog is brown). A complex sentence is a sentence which is made up of one or more simple sentences and one or more connectives. Some examples are:
(A ^ B)
(A v B)
(A -> B)
(A <-> B)
~A

You can use connectives on complex sentences just as you can on simple sentences. Let's introduce a new simple sentence "it is raining," and let's call our new sentence C. We now have a lot more sentences that we can make. (Keep in mind, we have no idea yet which of these sentences are true or false; we also don't yet know how these sentences relate.) For example:
~C
(B v C)
(C v B)
(B -> C)
(A -> C)
(C -> A)
(B <-> C)
(~B <-> C)
~(B <-> C)
C
~~C
((A ^ B) v C)
(((A ^ ~B) v ~C) -> (~(A v B) <-> C))

These can get a little complicated. That last sentence is especially scary looking; we'll come back to it in a little while. For now, here is a quick run-down of how to use connectives to make complex sentences from simple ones.
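As an aside (this is my illustration, not part of the original answer), you can check how any of these sentences comes out true or false under particular truth values for A, B, and C by letting Python's own Boolean operators play the roles of the connectives:

```python
# A stand-in truth-value check: Python's 'and', 'or', 'not' play the
# roles of ^, v, ~; small helper functions play -> and <->.
A = True    # "The dog is brown"
B = True    # "The dog weighs 15 lbs"
C = False   # "It is raining"

def implies(x, y):   # (x -> y)
    return (not x) or y

def iff(x, y):       # (x <-> y)
    return x == y

print(not C)                     # ~C
print(B or C)                    # (B v C)
print(implies(B, C))             # (B -> C)
print(iff(not B, C))             # (~B <-> C)
print((A and B) or C)            # ((A ^ B) v C)
# The scary one: (((A ^ ~B) v ~C) -> (~(A v B) <-> C))
print(implies((A and not B) or not C, iff(not (A or B), C)))
```

Changing the values assigned to A, B, or C changes the answers line by line; nothing about the grammar of the sentences changes at all.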
Subsentences

Some of our sentences had more than one connective:

~(B <-> C)
~~C
((A ^ B) v C)
(((A ^ ~B) v ~C) -> (~(A v B) <-> C))

A complex sentence is built up in stages, and each stage along the way is called a "subsentence." Take the sentence (~B <-> C). It contains the simple sentence B, which the ~ turns into the subsentence ~B. The subsentence ~B is then joined to the simple sentence C by the <->. The <-> is the last connective used in building the sentence, and it is therefore called the "main connective" of the sentence. Main connectives are, without a doubt, absolutely the most important idea in logic. The hardest skill to learn in logic is to identify the main connective of a sentence. Make sure you understand what main connectives are. Compare the sentence we've been looking at, (~B <-> C), whose main connective is the <->, with ~(B <-> C), whose main connective is the ~: the parentheses make all the difference.
Complicated Sentences

Now let's take a closer look at the most complicated sentence on our list and see if we can make it more manageable. The way to analyze a complicated sentence is to start at the outside and work your way in. The outermost parentheses on this ugly sentence

(((A ^ ~B) v ~C) -> (~(A v B) <-> C))

belong to the ->, so the -> is the main connective. The left half of the sentence is the subsentence

((A ^ ~B) v ~C)

and the right half is the subsentence

(~(A v B) <-> C)
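Because every two-place connective carries its own parentheses, finding the main connective of a fully parenthesized sentence is completely mechanical. Here is a small Python sketch of that outside-in scan (my illustration, not part of the original answer): the main connective is either a leading ~, or the two-place connective sitting at parenthesis depth 1.

```python
def main_connective(s):
    """Return (connective, position) for a fully parenthesized sentence.

    Works on strings like "(((A ^ ~B) v ~C) -> (~(A v B) <-> C))".
    A sentence starting with ~ has ~ as its main connective; otherwise
    the main connective is the two-place connective at depth 1; a lone
    letter has no connective at all.
    """
    s = s.strip()
    if s.startswith("~"):
        return ("~", 0)
    depth = 0
    i = 0
    while i < len(s):
        if s[i] == "(":
            depth += 1
        elif s[i] == ")":
            depth -= 1
        elif depth == 1:
            # Check <-> before -> so we don't mistake one for the other.
            for op in ("<->", "->", "^", "v"):
                if s.startswith(op, i):
                    return (op, i)
        i += 1
    return (None, -1)
```

Note that the scan never looks inside nested parentheses, which is exactly the "start at the outside and work your way in" advice above.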
I told you before that simple sentences are represented by the capital letters A through T, and that U, W, X, Y, and Z are saved for something else. (I rarely use V because it looks too much like the symbol for 'or'.) U, W, X, Y, and Z are used as shorthand for other sentences in logic (some books use italic letters and others use Greek letters, but since I only have plain text to work with, I use the end of the alphabet). I call these "sentence variables." So, just as we can take the English sentence "The dog is brown" and represent it with the letter A, we can take a whole sentence like ((A ^ ~B) v ~C) and represent it with the variable X.
[Note: it is also legal to use capital variables to stand for simple sentences. So you can take the simple sentence A and use the variable X to stand for it.]
This can be useful in analyzing complicated sentences. For example, if we have the scary looking sentence

(((A ^ ~B) v ~C) -> (~(A v B) <-> C))

we can let X stand for ((A ^ ~B) v ~C) and let Y stand for (~(A v B) <-> C). Then the whole sentence becomes simply

(X -> Y)
So we know where our main connective is. And by substituting back in for the sentence variables, we can recreate our sentence in manageable chunks. It is very important to keep track of what sentence variables stand for when you're doing this kind of substitution. This can be a major source of error if you're not keeping close track of what every letter stands for. Now that we know all the details of the language of symbolic logic, it's time to actually do symbolic logic. The first step with every sentence is to identify the main connective. The reason is simple: the rules of logic work by introducing or eliminating main connectives, so the main connective tells you which rules apply to a sentence.
What the Connectives Mean

Here's a quick course on what the connectives mean. (I assume you have some familiarity with them.)

(X ^ Y) is true when X is true and Y is true, and false otherwise.
(X v Y) is true when X is true or Y is true (or both), and false only when X and Y are both false.
~X is true when X is false, and false when X is true.
(X <-> Y) is true when X and Y have the same truth value (both true or both false).
(X -> Y) is false when X is true and Y is false, and true otherwise.
This last one is a little weird, so let's think about it. If we translate (A -> B) back into English, we get

If the dog is brown, then the dog weighs 15 lbs.
How would we go about proving that this sentence is false? Let's say that the dog is brown and the dog weighs 15 lbs. Does that disprove the if...then statement? Certainly not! What if the dog is brown but the dog weighs 25 lbs.? That does disprove the statement. What if the dog turns out to be white? Then we cannot disprove the inference because it only makes a prediction about a brown dog. If the dog isn't brown, then we can't test the prediction. So the only way to make the sentence (A -> B) false is to have A true and B false.
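That reasoning is exactly the truth table for the -> connective. As a quick cross-check (my addition, not part of the original answer), here it is computed by brute force in Python:

```python
# (x -> y) is false in only one case: x true and y false.
def implies(x, y):
    return (not x) or y

for x in (True, False):
    for y in (True, False):
        print(x, y, "->", implies(x, y))
```

Only the row with x true and y false prints False; the other three rows print True, matching the brown-dog discussion above.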
The Rules of Logic

Now we're finally ready to learn the rules of logic. There are exactly 12 - no more, no less. Each connective has two rules associated with it, and there are two special rules. Let's start with one of the special rules first.

1. Assumptions

The first special rule is the rule of assumptions. It is deceptively easy. The rule is: you may assume any sentence you like, at any time.
Well that makes sense. Let's say you and I are detectives trying to solve a mystery. I could say something like "let's assume for the time being that the dog is brown." Once I said that, we could discuss what that would mean. Anything we conclude from that assumption is perfectly okay, as long as we remember that it was under the assumption that the dog is brown. In other words, whatever we do prove under the assumption that the dog is brown must be followed by a disclaimer "assuming that the dog is brown." Eventually, we would want to prove something about the case that doesn't depend on the dog being brown. Logicians call this "discharging" the assumption. Fortunately, some of our other rules tell us how to discharge assumptions.

1. When I do derivations, I number each new line. I start new assumptions using curly brackets {
2. and then I indent everything after a new assumption;
3. When I discharge an assumption I close the curly brackets }

One other very important thing to keep in mind: once a set of curly brackets is closed, the lines inside it are gone for good; you may never use them again. For example, in a derivation like

1) A
{ 2) B [assumption]
  3) (A ^ B) [^intro on 1 and 2]
}
4) (B -> (A ^ B)) [->intro on 2-3]

it would be illegal to use lines (2) or (3) after the brackets close, because they are sealed off inside the curly brackets. However, line (1) is legal because it is outside the curly brackets, and so is line (4). And finally, and perhaps central to logic: a sentence counts as proven outright only when it rests on no undischarged assumptions.
2. -> Introduction

The rule is called "-> introduction." The way it works is: If you assume

X [The dog is brown]

and, under that assumption, you derive

Y [The killer is a man]

then you may discharge the assumption and conclude

(X -> Y) [If the dog is brown then the killer is a man]
3. ^ Elimination

Let's learn one more rule for now. This one is called "^ elimination." If you have

(X ^ Y) [The dog is brown and the dog weighs 15 lbs]

then you are entitled to

X [The dog is brown]

and you are also entitled to

Y [The dog weighs 15 lbs]

A Derivation

Now let's take an example of a derivation. Suppose I wanted to prove that this is a logical truth:

((A ^ B) -> A)
{ 1) (A ^ B) [assumption]
  2) A [^elim on 1]
}
3) ((A ^ B) -> A) [->intro on 1-2]

We just used our 3 rules to derive ((A ^ B) -> A)
[If (the dog is brown and the dog weighs 15 lbs) then the dog is brown]

4. Repetition

There's one other special rule. It's called "repetition." The rule simply says that if you have

X

then you are entitled to write

X

again on a new line.
5. ^ Introduction

The other rule with ^ is called "^ introduction." It says, if you have

X [The killer is a man]

and you have

Y [The killer is tall]

then you are entitled to

(X ^ Y) [The killer is a man and the killer is tall]

Continuing the Derivation

Let's continue our derivation using our new rules.

{ 1) (A ^ B) [assumption]
  2) A [^elim on 1]
}
3) ((A ^ B) -> A) [->intro on 1-2]
{ 4) C [assumption]
  5) ((A ^ B) -> A) [repetition of 3]
  6) (((A ^ B) -> A) ^ C) [^intro on 4 and 5]
}
7) (C -> (((A ^ B) -> A) ^ C)) [->intro on 4-6]

6. -> Elimination

The other rule for -> is called "-> elimination." It says that if you have
X [The dog is brown]

and you have

(X -> Y) [If the dog is brown then the killer is a man]

then you are entitled to

Y [The killer is a man]

Adding to the Derivation

Let's add a little more to our derivation:

{ 1) (A ^ B) [assumption]
  2) A [^elim on 1]
}
3) ((A ^ B) -> A) [->intro on 1-2]
{ 4) C [assumption]
  5) ((A ^ B) -> A) [repetition of 3]
  6) (((A ^ B) -> A) ^ C) [^intro on 4 and 5]
}
7) (C -> (((A ^ B) -> A) ^ C)) [->intro on 4-6]
{ 8) (A ^ B) [assumption]
  9) ((A ^ B) -> A) [repetition of 3]

[Note: Line (9) is NOT a repetition of (5) because (5) is inside closed curly brackets. (3) is not, so it is okay to repeat it here.]

  10) A [->elim on 8 and 9]

[Note: I did not discharge the assumption I made on line (8). So the A on line (10) is only proven under the assumption (A ^ B).]
7. <-> Introduction

The next two rules have to do with <->. The first is called "<-> introduction." It states that if you have

(X -> Y)

and you have

(Y -> X)

then you are entitled to

(X <-> Y)
8. <-> Elimination

The next rule is called "<-> elimination." This one says that if you have

(X <-> Y)

and you have

X

then you are entitled to

Y

If you have

(X <-> Y)

and you have

Y

then you are entitled to

X
A New Derivation

Let's start a new derivation.

{ 1) (A ^ B) [assumption]
  2) A [^elim on 1]
  3) B [^elim on 1]
  4) (B ^ A) [^intro on 2 and 3]
}
5) ((A ^ B) -> (B ^ A)) [->intro on 1-4]
{ 6) (B ^ A) [assumption]
  7) B [^elim on 6]
  8) A [^elim on 6]
  9) (A ^ B) [^intro on 7 and 8]
}
10) ((B ^ A) -> (A ^ B)) [->intro on 6-9]
11) ((A ^ B) -> (B ^ A)) [repetition of 5]
12) ((A ^ B) <-> (B ^ A)) [<->intro on 10 and 11]

9. ~ Introduction

Next we have "~ introduction." It says that if you assume
X [The killer does not have red hair]

and, under that assumption, you derive both

Y [The dog is brown]

and

~Y [The dog is not brown]

then you may discharge the assumption and conclude

~X [The killer does have red hair]

10. ~ Elimination

"~ elimination" is almost identical. It says that if you assume

~X

and, under that assumption, you derive both

Y

and

~Y

then you may discharge the assumption and conclude

X
A Quick Derivation

{ 1) (A ^ ~A) [assumption]
  2) A [^elim on 1]
  3) ~A [^elim on 1]
}
4) ~(A ^ ~A) [~intro on 1-3]

Lastly, let's look at the rules for v.

11. v Introduction

The first is "v introduction." It says that if you have

X

then you are entitled to

(X v Y)

where Y can be any sentence at all.
That seems a little strange. Normally you wouldn't think you can just go throwing any old sentence into a derivation. But remember: since we already have X, the disjunction (X v Y) is true no matter what Y says, so we haven't written down anything false.
A Short Derivation

{ 1) A [assumption]
  2) A [repetition of 1]
}
3) (A -> A) [->intro on 1-2]
4) ((A -> A) v B) [vintro on 3]

12. v Elimination

The last rule is a little tricky. It's "v elimination." It says if you have

(X v Y)

and you have

(X -> Z)

and you have

(Y -> Z)

then you are entitled to

Z
[Most of the time this means that when you have a disjunction that you don't know what to do with, you have to derive an implication for each side of the disjunction before you can go on.]

The rule is hard to do with derivations, but it is actually not too hard to understand if you take an example. Let's say we know

(A v B) [The dog is brown or the dog weighs 15 lbs]

and we know

(A -> C) [If the dog is brown, then the killer is a man]

and we know

(B -> C) [If the dog weighs 15 lbs, the killer is a man]

then we can conclude

C [The killer is a man]

Continuing Our Last Derivation

Let's continue our last derivation to get a demonstration of "velim."

{ 1) A [assumption]
  2) A [repetition of 1]
}
3) (A -> A) [->intro on 1-2]
4) ((A -> A) v B) [vintro on 3]
{ 5) (A -> A) [assumption]
  { 6) C [assumption]
    7) C [repetition of 6]
  }
  8) (C -> C) [->intro on 6-7]
}
9) ((A -> A) -> (C -> C)) [->intro on 5-8]
{ 10) B [assumption]
  { 11) C [assumption]
    12) C [repetition of 11]
  }
  13) (C -> C) [->intro on 11-12]
}
14) (B -> (C -> C)) [->intro on 10-13]
15) ((A -> A) v B) [repetition of 4]
16) ((A -> A) -> (C -> C)) [repetition of 9]
17) (B -> (C -> C)) [repetition of 14]
18) (C -> C) [velim on 15, 16, 17]

[Not the most efficient way to prove (C -> C), but it is valid.]

There are a lot of other rules people will try to tell you about, but anything you can do with those, you can do with these 12 rules.

Why These 12 Rules? A Review

The reason I like these rules is that with these rules you can do any derivation using the same five steps:
Step 1: Find the main connective of the sentence you are trying to derive.
Step 2: Apply the rule for introducing that main connective.
Step 3: When you're in the middle of a derivation and you don't know what to do, find the main connective of the sentence you have and eliminate it.
Step 4: Along the way you may have to derive subsentences using steps 1 through 3.
Step 5: If all else fails, you may have to do a "~ elimination" [I'll explain this step a little later].
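One nice property of a finished derivation with no undischarged assumptions is that the final sentence must be a logical truth, so you can always sanity-check it with a truth table. Here is a brute-force check in Python (my addition, not part of the original answer) for the sentence ((A ^ B) -> A) derived earlier:

```python
from itertools import product

# A sentence proven with no undischarged assumptions should be true
# under EVERY assignment of truth values (i.e. it is a tautology).
def implies(x, y):
    return (not x) or y

def is_tautology(sentence, num_letters):
    return all(sentence(*values)
               for values in product([True, False], repeat=num_letters))

# ((A ^ B) -> A), from the first sample derivation: a tautology.
assert is_tautology(lambda a, b: implies(a and b, a), 2)

# (A ^ B) by itself is NOT a tautology: it fails when A is false.
assert not is_tautology(lambda a, b: a and b, 2)
```

This doesn't replace checking the derivation line by line, but it will catch a sentence that couldn't possibly have a correct assumption-free derivation.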
Mundane Rules: What Do You Have?

Now that we have the steps for doing derivations, let me try to explain that confusing business about discharging assumptions. I'm going to approach this from a slightly different angle this time. Of the 12 rules I gave you, 8 are pretty straightforward. They are what I would call the "Mundane Rules." The way Mundane Rules work is: they say "if you have X and Y and Z, then you are entitled to U." The tricky thing with Mundane Rules is knowing what you "have." You "have" any sentence that is written down on a line of the derivation except those which are closed off in curly brackets (which are gone forever once the brackets close). Being "entitled" to something just means that you can legally write it down as the next line of the derivation.

The easiest Mundane Rule is repetition: If you have

X

then you are entitled to

X
Another Mundane Rule is ^ introduction: If you have

X

and you have

Y

then you are entitled to

(X ^ Y)
Simple enough. (I went into more detail on _why_ this is a sound rule in the last e-mail.) Another pretty easy Mundane Rule is ^ elimination: If you have

(X ^ Y)

then you are entitled to

X

and you are entitled to

Y
So far so good. Another Mundane Rule is -> elimination: If you have

X

and you have

(X -> Y)

then you are entitled to

Y
This is actually the same thing as Modus Ponens, so you can call it that if you prefer. Since I don't speak Latin, I prefer calling it "-> elimination" because that is more descriptive of what the rule is doing. Another Mundane Rule is <-> introduction: If you have

(X -> Y)

and you have

(Y -> X)

then you are entitled to

(X <-> Y)

Here's why. If (X -> Y) is true, the only possibilities are:

X is true and Y is true
X is false and Y is true
X is false and Y is false

And if (Y -> X) is true, the only possibilities are:

X is true and Y is true
X is true and Y is false
X is false and Y is false

If we have both sentences, the only possibilities left are:

X is true and Y is true
X is false and Y is false

In other words, X and Y must have the same truth value, which is exactly what (X <-> Y) says. Which means we are entitled to write that. Another Mundane rule is <-> elimination: If we have

(X <-> Y)

and we have

X

then we are entitled to

Y
If we have

(X <-> Y)

and we have

Y

then we are entitled to

X
Another Mundane Rule is v introduction: If you have

X

then you are entitled to

(X v Y)

where Y is any sentence you like.
This is a little tricky too. We "have" X, which means X must be true. Now we can just, out of the blue, pick any sentence we like and put it into a disjunction with X. Why can we do that? Well, let's say we pick a FALSE sentence. Is that still okay? Yes, it is! Even if Y is false, the disjunction with X is still true, so we haven't written a false sentence, and we are still okay.

The last Mundane Rule is v elimination: If you have

(X v Y)

and you have

(X -> Z)

and you have
(Y -> Z)

then you are entitled to

Z

The problem with the Mundane Rules is that they only let you play around with sentences you already HAVE. You can't get anything NEW out of them. So far we've gone over the 8 Mundane Rules. There are 12 rules in total. Of the 4 remaining, one is a 'Special Rule' and three are 'Fun Rules'.

Special Rule

The Special Rule is the rule of assumptions: You are free to assume anything you like at any time as long as you do these things: open a set of curly brackets when you make the assumption, indent everything that follows, and remember that whatever you derive inside the brackets holds only under that assumption until it is discharged.
Three 'Fun' Rules

The three Fun Rules all have this form: If you assume

X

and, under that assumption, you derive certain things, then you may discharge the assumption (closing off the curly brackets) and write down a new sentence.
The first Fun Rule is -> introduction: If you assume

X

and, under that assumption, derive

Y

then you may discharge the assumption and conclude

(X -> Y)
This is, in my opinion, the most important and fundamental rule in logic. It is the foundation of all logic. [It's also really important for derivations. If you look up at the Mundane Rules, a lot of them require you to have sentences of the form (X -> Y) to apply them.] The justification is that if you assume (but don't prove)
X [The dog is brown]

and from that assumption you derive

Y [The floor is wet]

then you have shown

(X -> Y) [If the dog is brown, then the floor is wet.]

even though you never proved that the dog actually is brown.

The last two Fun Rules are closely related. One is ~ introduction: If you assume

X

and, under that assumption, derive both

Y

and

~Y

then you may discharge the assumption and conclude

~X
[You may have been taught this rule as a reductio ad absurdum.] The idea is that if assuming X leads you to a contradiction, then there must've been something contradictory ABOUT X ITSELF. So X must be false. If X is false, then by definition ~X is true: no ifs, ands, buts, or assumptions about it.

The last Fun Rule is ~ elimination: If you assume

~X

and, under that assumption, derive both

Y

and

~Y

then you may discharge the assumption and conclude

X
The idea here is basically the same. ~X is contradictory and therefore false, so X is proven true. I have another nickname for ~ elimination. It is what I call the "Fallback Rule." With every other rule, the way it works is by getting what you want by introducing the main connective or by using what you have by eliminating the main connective. But take a look back at what I just did with ~ elimination. I just proved X is true. There's no way to tell by looking at X that you can prove it by eliminating a ~, but you can. [Actually, ANY sentence you like can be proven with ~ elimination, but it is sometimes hard to do.] So this is where Step 5 of the derivations comes from. If you are trying to prove some sentence X, the first thing to try is to introduce the main connective of X. But if you run into a dead end doing that, then assume ~X and try to derive a contradiction.

Those are the rules, reviewed and better organized so that they make sense, and the difficult bit about discharging assumptions is (I hope) a little clearer. One other thing to watch out for: some logic problems ask you to prove that a certain sentence is a logical truth. On those problems, you have to discharge all your assumptions and prove that the sentence is true with no assumptions (that is, write it without indenting and outside of all the curly brackets). I'll do an example of a derivation like that in a minute.

Deriving a Conclusion

Other logic problems give you a list of "givens" or "hypotheses" and ask you to derive a conclusion from them. In those problems, what they are saying is that you need to assume the hypotheses, but not discharge those assumptions. Let me give you an example:

Given:
A
(A -> B)
(~B v C)

Derive: C
Here we go:

{ 1) A [assumption, given]
  { 2) (A -> B) [assumption, given]
    { 3) (~B v C) [assumption, given]
      4) A [repetition of 1]
      5) (A -> B) [repetition of 2]
      6) B [->elim on 4&5]

Now what do we do? We have a disjunction on 3 that we don't know what to do with, so we need to eliminate it. But in order to eliminate it, we need to get (~B -> something) and (C -> something). Let's first work on getting (~B -> something).

      { [Note: this assumption isn't given, so we're going to have to discharge it]
        7) ~B [assumption]

Let's see if we can derive C. If we derive (~B -> C) then we'll be most of the way to finishing the problem. How can we derive C? Well, we should try to introduce the main connective. But wait! There is no main connective. C is just a simple sentence. So what can we do? I guess we have to do step 5, try ~elimination.

        { [another assumption we'll have to discharge]
          8) ~C [assumption]
          9) B [repetition of 6]
          10) ~B [repetition of 7]
        } [Closes off lines 8-10]
        11) C [~elim on 8-10]

Up to this point we hadn't closed off any assumptions. That meant that all of the lines up to this point were sentences that we "have" and can use. But now we just closed off lines 8-10 by discharging the assumption at 8. That means that lines 8-10 are gone; they are off-limits and illegal forever. The good news, though, is that we derived C, so now we can discharge the assumption we made at line 7.

      } [Closes off 7-11]
      12) (~B -> C) [->intro on 7-11]

This may seem like a bit of sleight of hand, like I'm trying to pull the wool over your eyes. How can I use the assumption I made at line 7 as part of the contradiction? I just did a ~elimination to prove C, but there was nothing contradictory about ~C itself; the contradiction was that I had B and then I assumed ~B. This is the familiar refrain "anything can be proven from a contradiction." Once I assumed ~B, I could've proven (~B -> anything-I-want); I chose to prove (~B -> C) because I eventually want to get C.
Now we have made some progress on eliminating the disjunction we had on line 3: (~B v C). We have (~B -> C); now we need (C -> C), so let's go get it.

      { 13) C [assumption]
        14) C [repetition of 13]

Notice that I can repeat 13 because I have not yet discharged that assumption. However, I cannot repeat the C on line 11, because line 11 was closed off already, so it is gone forever.

      } [Closes off 13-14]
      15) (C -> C) [->intro on 13-14]
      16) (~B v C) [repetition of 3]
      17) (~B -> C) [repetition of 12]
      18) (C -> C) [repetition of 15]
      19) C [velim on 16,17,&18]

Not all of those repetitions were necessary, since we "had" those lines already (they hadn't been closed off), but I added them for clarity. You should go back and double-check the derivation to make sure that I never broke the rules by using a line that was closed off and that I didn't break any other rules. Also, make sure that I discharged all the assumptions except the three I was given at the start.

Deriving a Sentence

As I said before, the other type of problem is where you are handed a sentence and told to derive it. In this problem, you can make any assumptions you need, but you have to discharge all of them and end up with the sentence you're looking for at the end. This usually involves finding the main connective of the sentence you're supposed to prove and then introducing it (sometimes you have to find other sentences too: for example, if the main connective is ^, you need to prove each half of the sentence and then do an ^intro). Sometimes trying to do this will get you to a dead end, and then you may try to assume the negation of the sentence you're trying to get and see if you can find a contradiction. Let's do an example. Let's try to prove

(((~A ^ ~C) v (~C <-> B)) -> (B -> ~C))
The main connective is a ->, so let's introduce it. To do that we need to assume the left side and derive the right side.

{ 1) ((~A ^ ~C) v (~C <-> B)) [assumption]

Here we have a disjunction, so we need to eliminate it. That means we need to find two entailments. It would be great if we had ((~A ^ ~C) -> (B -> ~C)) and ((~C <-> B) -> (B -> ~C)), so let's try to get those. First, let's work on ((~A ^ ~C) -> (B -> ~C)).

  { 2) (~A ^ ~C) [assumption]
    3) ~A [^elim on 2]
    4) ~C [^elim on 2]

We actually don't need line 3, but it's good practice to get both sides of an ^ while you can, just in case you might need them later. Now we have ~C, but what we want is (B -> ~C), so let's work toward getting that.

    { 5) B [assumption]
      6) ~C [repetition of 4]
    } [closes 5-6]
    7) (B -> ~C) [->intro on 5-6]
  } [closes 2-7]
  8) ((~A ^ ~C) -> (B -> ~C)) [->intro on 2-7]

So now what we need to finish the velimination on line 1 is to derive ((~C <-> B) -> (B -> ~C)).

  { 9) (~C <-> B) [assumption]

So now we need (B -> ~C).

    { 10) B [assumption]
      11) ~C [<->elim on 9 and 10]

If you wanted to, you could make this a little more clear by repeating (~C <-> B) and then doing the <->elim, but it's not necessary; since we have both (~C <-> B) and B, we are entitled to ~C.

    } [closes 10-11]
    12) (B -> ~C) [->intro on 10-11]
  } [closes 9-12]
  13) ((~C <-> B) -> (B -> ~C)) [->intro on 9-12]

Now we can do our velimination on line 1, because we have ((~C <-> B) -> (B -> ~C)) and ((~A ^ ~C) -> (B -> ~C)). If you want, for clarity, you can repeat line 1 and line 8, but it's not necessary.

  14) (B -> ~C) [velim on 1, 8, 13]
} [closes 1-14]
15) (((~A ^ ~C) v (~C <-> B)) -> (B -> ~C)) [->intro on 1-14]

As always, you should go back and double-check the derivation once you're done. The hardest thing about doing derivations is figuring out what to do next. When you have a lot of random rules with Latin names to choose from, it's difficult. This set of rules helps you to know what to do: either introduce what you're trying to get or eliminate what you have.
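As a cross-check on that derivation (my addition, not part of the original answer), the sentence we just proved should be a logical truth: true under all eight assignments of truth values to A, B, and C. A brute-force check in Python:

```python
from itertools import product

# Check that (((~A ^ ~C) v (~C <-> B)) -> (B -> ~C)) is a tautology.
def implies(x, y):
    return (not x) or y

def sentence(a, b, c):
    left = ((not a) and (not c)) or ((not c) == b)   # ((~A ^ ~C) v (~C <-> B))
    right = implies(b, not c)                        # (B -> ~C)
    return implies(left, right)

assert all(sentence(a, b, c)
           for a, b, c in product([True, False], repeat=3))
print("the derived sentence is a logical truth")
```

If the derivation had a mistake in it, at least one of the eight assignments would make the sentence false and the assertion would fail.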
My advice to you is to try to do some of the derivations in your book or that you had for your class using these rules. Any derivation is possible with them. It takes a lot of time to learn logic and have it sink in, but if you take it slowly enough and practice, it will become easier. Get comfortable with these 12 basic rules and the 5-step method for doing derivations.

Rules with Latin Names

In many (probably most) places, you don't learn logic with these 12 rules. Even though I think these rules make the most sense and allow a straightforward approach to solving any problem in symbolic logic, more advanced students may want to study rules such as Modus Tollens and DeMorgan's Law. The twelve rules I've presented here are systematic and straightforward, and all of them move by baby steps. The way I think of the Latin-named rules (in some cases this is not historically accurate) is that logicians noticed that when doing derivations, they often repeated the same steps over and over. Eventually, someone decided that rather than doing the same five or ten steps again and again, you can take shortcuts. Modus Tollens serves as an instructive example. Let's say I have (A -> B) and ~B, and I want to derive ~A:
{ 1) (A -> B) [assumption, given]
  { 2) ~B [assumption, given]
    { 3) A [new assumption]
      4) B [->elim on 1 and 3]
      5) ~B [repetition of 2]
    } [closes off 3-5]
    6) ~A [~intro on 3-5]

We have to do this so often that we just call these five steps "Modus Tollens." Modus Tollens is a shortcut rule. There are several others too, some more involved. The following is a list of the major rules, together with a justification of why each of them is valid and a short example of how you might use some of the more challenging ones.

1. Modus Ponens

This is one of the most straightforward laws in logic. It states that if you have

(X -> Y)
and you have

X
then you are entitled to:

Y
This is just what we've been calling "-> elimination." The reason it works is that we are given (X -> Y), which means that X cannot be true at the same time Y is false. So if X is true (which is the other given), then Y must be true as well, so we are free to conclude Y is true. Example: "If it is raining, then there are clouds" and "it is raining" together imply "there are clouds."

2. Modus Tollens

This law is just the flip side of Modus Ponens. It states that if you have

(X -> Y)
and you have

~Y
then you are entitled to

~X
The reason this works is that we are again given (X -> Y). This means that X cannot be true at the same time Y is false. So if Y is false (which is the other given), then X must be false as well. So we are free to conclude X is false (or ~X is true).
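Both Modus Ponens and Modus Tollens can be checked mechanically: an argument form is valid exactly when every assignment of truth values that makes all the premises true also makes the conclusion true. Here is a brute-force sketch in Python (my addition, not part of the original text):

```python
from itertools import product

def implies(x, y):
    return (not x) or y

def valid(premises, conclusion):
    """premises and conclusion are functions of an assignment (x, y).

    Valid means: no assignment makes every premise true while the
    conclusion is false.
    """
    return all(conclusion(x, y)
               for x, y in product([True, False], repeat=2)
               if all(p(x, y) for p in premises))

# Modus Ponens: (X -> Y), X  entail  Y
assert valid([lambda x, y: implies(x, y), lambda x, y: x],
             lambda x, y: y)

# Modus Tollens: (X -> Y), ~Y  entail  ~X
assert valid([lambda x, y: implies(x, y), lambda x, y: not y],
             lambda x, y: not x)

# An INVALID pattern (affirming the consequent): (X -> Y), Y do not entail X
assert not valid([lambda x, y: implies(x, y), lambda x, y: y],
                 lambda x, y: x)
```

The last check is a useful contrast: the invalid pattern fails on the assignment where X is false and Y is true, which is exactly the case the prose explanation rules out for the valid forms.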
Example: "If it is raining, then there are clouds" and "there are no clouds" together imply "it is not raining."

3. DeMorgan's Law (I)

DeMorgan came up with a couple of sets of equivalencies. The first is that if you have

~(X ^ Y)
then you can conclude

(~X v ~Y)
and if you have

(~X v ~Y)
then you can conclude

~(X ^ Y)
The reason this works is that our starting point is ~(X ^ Y), which is the negation of (X ^ Y). Now, (X ^ Y) can only be true if X is true and Y is also true. So (X ^ Y) will be false if X is false or if Y is false. That is, (X ^ Y) will be false if (~X v ~Y) is true. So ~(X ^ Y) is equivalent to (~X v ~Y).
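Since there are only four possible assignments of truth values to X and Y, the equivalence can also be confirmed by checking all of them (my addition, not part of the original text):

```python
from itertools import product

# DeMorgan's Law (I): ~(X ^ Y) and (~X v ~Y) agree on every assignment.
for x, y in product([True, False], repeat=2):
    assert (not (x and y)) == ((not x) or (not y))

print("~(X ^ Y) and (~X v ~Y) are equivalent")
```

An exhaustive check like this is exactly what "equivalent" means for truth-functional sentences: the two forms never disagree, so either can be swapped in for the other.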
Example: "My dog is fat, or my cat is fat" is equivalent to "It is not true that both my dog and cat are thin."

4. DeMorgan's Law (II)

The second equivalence which bears DeMorgan's name is that

~(X v Y)
is interchangeable with

(~X ^ ~Y)
The only way in which ~(X v Y) can be true is if X and Y are both false. So the two expressions can be interchanged just like in the first law. Example: "My dog is fat and my cat is fat" is equivalent to "It is not true that my dog or cat is thin."

5. Hypothetical Syllogism

The rule here is that if you have

(X -> Y)
and you have

(Y -> Z)
then you can conclude

(X -> Z)
Here's why: We know that "if X is true, then Y is true." And we know that "if Y is true, then Z is true." But we don't know anything about whether any of the letters are actually true or not. Let's assume (or hypothesize) for a second that X is true. Then, by modus ponens, Y is true. And then by modus ponens again, Z is true. So: If we assume X is true, then we conclude Z is true. Since we didn't know X was true, we cannot take Z home with us, but we can say that "If X was true, then Z would be true." This is equivalent to saying "If X, then Z" or (X -> Z).
Example: "If it is raining, then there are clouds" together with "if there are clouds, then the sun will be blocked" imply "if it is raining, then the sun will be blocked."

6. Disjunctive Syllogism

The rule here is that if you have

(X v Y)
and you have

~X
then you can conclude

Y
Here's why: We know first of all that "X or Y is true." We also know that X is false. If X or Y is true, and X is false, then Y has no choice but to be true. So we can conclude that Y is true. Example: "My dog is fat or my cat is fat" together with "my dog is thin" imply "my cat is fat."

7. Reductio Ad Absurdum (Proof by Contradiction)

This rule states that if you assume

X
and, from that, you conclude a contradiction, such as

(Y ^ ~Y)
then you can conclude that your assumption was false, and

~X
must be true. You can find a more complete explanation of this at
http://mathforum.org/library/drmath/view/62852.html

8. Double Negation

This rule simply states that if you have

~~X
then you can interchange that with

X
which should be apparent based on what the ~ is.

9. Switcheroo

(I've heard that this was actually named after a person, but I don't know that for certain.) This is a shortcut rule which states that if you have

(X v Y)
then you can interchange that with

(~X -> Y)
To understand why, let's think about (~X -> Y). This says that ~X cannot be true at the same time that Y is false. Or, to put that another way, X cannot be false at the same time Y is false. So (~X -> Y) can only be false when X and Y are both false. Similarly, the only way for (X v Y) to be false is to have X and Y both false. So the two expressions are true unless X and Y are both false, so they have the same "truth conditions" and are therefore equivalent (i.e. interchangeable). Example: "My dog is fat, or my cat is fat" is equivalent to "If my dog is thin, then my cat is fat." (This one is hard to wrap your mind around, but think about what must be true/false about the world in order to make each statement true or false, and it should eventually become clear.)

10. Disjunctive Addition

This is just what we've been calling "v introduction."

11. Simplification

This is just what we've been calling "^ elimination."

12. Rule of Joining

This is just what we've been calling "^ introduction."
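The truth-conditions argument for Switcheroo can likewise be confirmed by checking all four assignments (my addition, not part of the original text):

```python
from itertools import product

# Switcheroo: (X v Y) and (~X -> Y) have the same truth conditions.
def implies(x, y):
    return (not x) or y

for x, y in product([True, False], repeat=2):
    assert (x or y) == implies(not x, y)

print("(X v Y) and (~X -> Y) are equivalent")
```

Both sides come out false only on the single assignment where X and Y are both false, just as the explanation above says.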
The problem with shortcut rules is that they're easy to misuse. In my opinion, the best way to learn them is to practice with the twelve systematic rules, and if you find yourself doing the same steps over and over, you may have found a shortcut rule. If there's a rule you don't understand, try to use the twelve systematic rules to figure out how the rule works. Once you see the steps in deriving the rule and you know why it is a valid shortcut, you won't have any trouble using it. And remember, if you get stuck and don't know what to do, you can always fall back on the twelve systematic rules. Another topic which comes up often in logic is how to translate complicated English sentences into logical notation. There are some pages in the Dr. Math archive which can help with that:
http://mathforum.org/library/drmath/view/62435.html

Exclusive or Inclusive Disjunction?
http://mathforum.org/library/drmath/view/55692.html

This is really just an introduction to logic. There are more detailed systems that allow you to systematize properties of objects, other possible worlds, the relation between knowledge and belief, and even the logic of obligations. You can find more on how to go beyond the basics here:
http://mathforum.org/library/drmath/view/61524.html

Best of luck,
- Doctor Achilles, The Math Forum