

Crash Course in Logic

This will be brief. If you're familiar with basic Algebra, almost all of this should make sense on the first go. I take the practical approach of a Computer Scientist, noting that many deliciously beautiful examples and background theories are omitted for brevity, and Pure Mathematicians (such as myself, actually) will acknowledge that some things are not technically perfectly accurate. I know. I may write another tutorial to explore these. But this is the fastest complete introduction to logic I have ever seen, and we don't have time. Let's start.

You all know what a "sentence" is, in the grammatical sense. In logic, it means something that is either true or false. This could be an ordinary sentence, "I am bored.", or a sentence that says something about other sentences: "When you said 'I'm bored', you were lying." (a "metasentence").

This is a variable: \(P\). It is a shorter way of writing a sentence. Again, it is either true or false. Logic is about using these variables to show new things. Boolean functions are similar: \(P(x)\) is a function that returns true or false for a given value of \(x\). Variables and functions ("predicates") are generally lowercase and not specially marked, but here they are uppercase so that they stand out.

Connectives and Truth Tables

The simplest thing you can do is take the logical negation, or complement, of a variable. If \(P\) means "I am bored.", then \(\neg P\) means "I am not bored.". Just as with negatives in arithmetic, two negations cancel. So if \(\neg Q\) means "I do not feel well.", then \(\neg\neg Q\) means \(Q\), which is just "I feel well.".

Let's introduce a truth table. Suppose I have two variables \(P\) and \(Q\), and I know that \(Q=\neg P\). A simple, and deliberately intuitively obvious, question is to ask how these variables are related. I can summarize the relation in a "truth table":
\[
\begin{array}{c|c|c}
P & \neg P & Q \\
\hline
F & T & T \\
T & F & F
\end{array}
\]
"F" stands for "false" and "T" stands for "true". You can use any two symbols, but the best are "F" and "T" or "0" and "1". You read the table across in rows. The first row says that when \(P\) is false, then \(\neg P\) must be true, and so \(Q\) must be true. To construct the table, I simply took all possible combinations of true and false. Since there's really only one variable (because \(Q\) is defined by \(P\)), there are only two possibilities.

Let's introduce connectives, and thereby some more complicated truth tables. Let \(P\)="I am bored.", and \(Q\)="I like green.". Then the "conjunction" of \(P\) and \(Q\), namely \(P \wedge Q\), means "I am bored and I like green.". The truth table looks like this:
\[
\begin{array}{c|c|c}
P & Q & P \wedge Q \\
\hline
F & F & F \\
F & T & F \\
T & F & F \\
T & T & T
\end{array}
\]
Notice that since \(P\) and \(Q\) are independent of each other, there are now four possibilities. This truth table makes intuitive sense, because it says that \(P\) and \(Q\) (again, denoted \(P \wedge Q\)) is only true if both \(P\) and \(Q\) are true.

The "disjunction" of \(P\) and \(Q\), namely \(P \vee Q\), means "I am bored or I like green.". This usage of "or" is a little different from what you may be used to in everyday language. In logic, "or" means that at least one of the inputs must be true. For example, if \(P\) is true, but not \(Q\), then \(P \vee Q\) is true. Likewise, if \(Q\) is true, but not \(P\), then \(P \vee Q\) is true. And if \(P\) is true, and \(Q\) is true, then \(P \vee Q\) is still true! The logical connective "xor" (denoted \(\oplus\) or \(\veebar\)), which is true when exactly one of its inputs is true, functions similarly to how most people use "or" colloquially. The xor connective (the negation of equivalence, see below) is not used as much as the normal or. Both disjunctive connectives are summarized below:
\[
\begin{array}{c|c|c|c}
P & Q & P \vee Q & P \oplus Q \\
\hline
F & F & F & F \\
F & T & T & T \\
T & F & T & T \\
T & T & T & F
\end{array}
\]
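If you like to experiment, these connectives map directly onto a programming language's booleans. Here's a small Python sketch (not part of the logic itself, just an illustration) that prints the combined truth table for negation, \(\wedge\), \(\vee\), and \(\oplus\):

```python
from itertools import product

# Enumerate every combination of truth values for P and Q,
# exactly as the rows of a truth table.
for P, Q in product((False, True), repeat=2):
    print(P, Q, P and Q, P or Q, P != Q)  # "!=" on booleans is xor

# Negation is "not", and two negations cancel:
P = True
assert (not (not P)) == P

# Conjunction is true in exactly one of the four rows:
assert sum((p and q) for p, q in product((False, True), repeat=2)) == 1
```

Note that `product((False, True), repeat=2)` generates the four rows in the same order as the tables above.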
Implications

"\(P\) implies \(Q\)" is written \(P \rightarrow Q\). Here, \(P\) is called the "hypothesis", and \(Q\) is called the "conclusion". If \(P\) implies \(Q\), then whenever \(P\) is true, \(Q\) must also be true. So if \(P\) is true, but \(Q\) is false, then the statement \(P \rightarrow Q\) is false. It turns out that when \(P\) is false, it doesn't matter what \(Q\) is: since the hypothesis never "happened", whether the conclusion happened is irrelevant, so \(P \rightarrow Q\) counts as true. As it turns out, \(P \rightarrow Q\) has exactly the same truth table as \(Q \vee \neg P\), a fact that is very commonly used as a convenience when simplifying logical expressions.
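That last fact is easy to check by brute force over all four rows. A Python sketch (the helper name `implies` is my own choice):

```python
from itertools import product

def implies(p, q):
    # P -> Q is false only in the one case: hypothesis true, conclusion false.
    return not (p and not q)

# P -> Q has exactly the same truth table as Q v ~P.
for p, q in product((False, True), repeat=2):
    assert implies(p, q) == (q or (not p))
```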
You can write implications the other way (so \(P \rightarrow Q\) is the same as \(Q \leftarrow P\)), but this is less common. If you have two statements that imply each other, it turns out that they must be equivalent: this is written \(P \leftrightarrow Q\), because it's the same as \((P \rightarrow Q) \wedge (P \leftarrow Q)\). Another way this is commonly said is "\(P\) iff \(Q\)", with "iff" standing for "if and only if" (and of course \(P\) iff \(Q\) is the same as \(Q\) iff \(P\)). Another form is \(P \equiv Q\). Equivalent statements may replace each other in proofs: if you know something about a statement, then you know the same thing about every statement that is equivalent to it.
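In the same spirit, equivalence can be checked as two implications. Another small Python sketch (again, `implies` is a helper name of my own choosing):

```python
from itertools import product

def implies(p, q):
    # P -> Q: false only when the hypothesis holds but the conclusion doesn't.
    return not (p and not q)

# P <-> Q is (P -> Q) ^ (Q -> P); for Python booleans that's just equality.
for p, q in product((False, True), repeat=2):
    assert (implies(p, q) and implies(q, p)) == (p == q)
```

This is why `==` on booleans behaves like "iff": it is true exactly when both sides have the same truth value.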
Quantifiers

The final constructs are quantifiers. The quantifier \(\forall\) means "for all". The quantifier \(\exists\) means "there exists at least one". The quantifier \(\nexists\) means "there do not exist any", but is more commonly written \(\neg\exists\). Another useful construct is \(\exists !\), which means "there exists exactly one" (you can also use \(\exists_1\) to mean the same, or change the \(1\) as needed). For deep reasons, all these must be followed by some variable in some set \(S\) (the symbol \(\in\) should be read as "in the [set]" and, like implication, can be written the other way: \(\ni\)). For example, the logical statement \(\exists \, x \! \in \! \mathbb{R} \, P(x)\) reads as "There exists at least one number \(x\), in the set of real numbers, such that the Boolean function \(P(x)\) is true.". However, many people leave out the "in the set \(S\)" part, so in the above you get \(\exists \, x P(x)\), which reads "There exists at least one number \(x\) such that the Boolean function \(P(x)\) is true.". This is generally accepted as okay, so long as everyone knows it's really there.

Misc. and Conclusion

Almost done. There are some common laws that are useful in the extreme. These can easily be derived by examining the relevant truth tables, but they're worth remembering. They are called De Morgan's laws, named after Augustus De Morgan. They are:\[ \neg (P \wedge Q) \leftrightarrow \neg P \vee \neg Q\\ \neg (P \vee Q) \leftrightarrow \neg P \wedge \neg Q\\ \neg ~\forall x P(x) \leftrightarrow \exists x \,\neg P(x)\\ \neg ~\exists x P(x) \leftrightarrow \forall x \,\neg P(x)\\ \]

Some final terminology. Suppose I have a statement "I am tired, so I will sleep.". The "converse" of the statement is "I will sleep, so I am tired.". The "inverse" of the statement is "I am not tired, so I will not sleep.". The "contrapositive" of the statement is "I will not sleep, so I am not tired.". The statement and the contrapositive are equivalent statements.
From that, you may be able to see that the inverse and the converse are equivalent statements. Nothing else is equivalent.

A "tautology" is something that is always true. For example, \(P \vee \neg P\) is always true. Notice that \(P\)'s truth value is irrelevant to whether \(P \vee \neg P\) is true: it is true in both cases! A "contradiction" is something that is always false. For example, \(P \wedge \neg P\) is always false.

We're done! Hope you enjoyed the tutorial, and hope it helped! Please report any suggestions/corrections to me. Thanks for reading!
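As a parting illustration, all of the laws above lend themselves to exhaustive checking. A closing Python sketch (the finite set \(S\) and the predicate are made up for illustration; over a finite set, \(\forall\) behaves like `all()` and \(\exists\) like `any()`):

```python
from itertools import product

# De Morgan's laws for connectives, checked on every row of the truth table:
for p, q in product((False, True), repeat=2):
    assert (not (p and q)) == ((not p) or (not q))
    assert (not (p or q)) == ((not p) and (not q))

# De Morgan's laws for quantifiers, over a small stand-in set:
S = range(-3, 4)                     # hypothetical finite set of integers
P = lambda x: x > 0                  # an arbitrary predicate for the demo
assert (not all(P(x) for x in S)) == any(not P(x) for x in S)
assert (not any(P(x) for x in S)) == all(not P(x) for x in S)

# P v ~P is a tautology (true in every row); P ^ ~P is a contradiction.
assert all((p or not p) for p in (False, True))
assert not any((p and not p) for p in (False, True))
```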


