Good Math/Bad Math

Wednesday, March 29, 2006

Calculus - no, not that calculus!

This post is about more logic. It's also about a kind of calculus.

When you hear the word calculus, you probably think of derivatives and integrals, differential equations, and stuff like that. Well, that's a calculus: the differential and integral calculus. But it's not the only calculus. In fact, a calculus is really any system of rules for manipulating symbolic expressions which has a formal mathematical or logical basis for the meanings of its rules.

That's really a fancy way of saying that a calculus is a bunch of mechanical rules with some special properties.

When I talked about propositional logic, the rules for working with propositions were really simple. In basic propositional logic, you're working with a fixed, finite set of propositions, so there aren't a lot of deep things you can infer; overall, it's really pretty trivial. When you get into more complicated logics, like first order predicate logic (FOPL), things get much more interesting. In FOPL, there are variables, and you can write statements where trying to prove them by applying the inference rules can send you into an infinite loop.

In logics like FOPL, the inference rules form a general symbolic/mechanical reasoning system. You can apply the rules and derive inferences without understanding the semantics of what you're inferring. You can treat it as a purely mechanical system. But anything you can produce with those purely mechanical manipulations of the symbols, expressions, and inference rules will be a meaningful statement in FOPL.

That mechanical reasoning system is a calculus: the first order predicate calculus.

Remember how, a few days ago, I mentioned the idea of a model? Well, the intuitive meaning of a model is a way of assigning meanings to the symbolic statements in a calculus. For a calculus to be meaningful, you need to be able to show a model for it in which every statement that you can derive in the calculus is true - that is, if you take the statement and figure out what the model says it means, it will be a true statement about something real. The reason that models are important is that it's easy to create a calculus that looks valid, that looks like its rules should produce meaningful results, but that contains a subtle error, so that the mechanical inferences that you can build using the rules will not be true. But more on that in another post.


Let's take a look at first order predicate logic and its calculus.

Syntax: Writing statements in FOPL



FOPL statements are made up of a few kinds of elements:


  • Variables: variables are symbols which represent values that can be reasoned about using the logic. I'll write variables as lower case letters: x, y, z

  • Constants: symbols which are used as the names of specific fixed values. I'll write constants as either numbers, or inside of quotes: 1, 2, 3, "GoodMath", etc.

  • Predicates: predicates are symbols which are used to make assertions about constants and values. Each predicate has a name, which I'll write starting with an uppercase letter, and a number of parameters. A statement with a predicate is written: "Name(arg,arg,arg)", where each "arg" is either a constant or a variable.

  • Quantifiers: quantifiers introduce new variables. There are two quantifiers: "for all", written ∀, and "there exists", written ∃.

  • Operators: operators are symbols which are used to join other statements together. Operators in first order predicate logic are AND, written ∧; OR, written ∨; IMPLIES, written ⇒; IF AND ONLY IF, written ⇔; and NOT, written ¬.

  • Parens: parentheses are used for grouping, and for surrounding the parameters of a predicate.


A statement in FOPL is one of the following (there's a small code sketch of this grammar right after the list):

  • A predicate, with its parameters all filled in with either variables or constants. If any of its parameters are variables, it must be part of a statement with a quantifier that introduces those variables. Any reference to an undefined variable makes the statement invalid.
  • Two valid statements joined by AND, OR, IMPLIES, or IFF: "A(x) ∨ B(x)"
  • A NOT placed in front of valid statement: "¬ A(x)"
  • A valid statement with a quantifier clause. A quantifier clause is written "quantifier variable : statement"; for example, "∀ x : GoesToSchool(x)".
  • A valid statement surrounded by parens.
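
To make that grammar concrete, here's a minimal sketch of one way you might represent FOPL statements as a data structure in Python, just to show that statements really are nothing more than symbolic expressions. The helper names and the nested-tuple encoding are purely my own illustrative choices, not part of the logic:

    # A sketch of FOPL syntax as nested Python tuples. The tag strings
    # ("var", "const", "pred", and so on) are just illustrative choices.

    # Terms: variables and constants.
    def var(name): return ("var", name)
    def const(value): return ("const", value)

    # A predicate applied to some terms.
    def pred(name, *args): return ("pred", name, args)

    # Operators joining statements.
    def and_(a, b): return ("and", a, b)
    def or_(a, b): return ("or", a, b)
    def implies(a, b): return ("implies", a, b)
    def iff(a, b): return ("iff", a, b)
    def not_(a): return ("not", a)

    # Quantifiers introduce a new variable over a statement.
    def forall(name, body): return ("forall", name, body)
    def exists(name, body): return ("exists", name, body)

    # The statement "∀ x : GoesToSchool(x)" from the list above:
    example = forall("x", pred("GoesToSchool", var("x")))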



OK. So that's a bit about what it looks like. Let's look at a few examples. I'll translate the examples into English to help you read them, but it's important to remember that we haven't gotten to FOPL semantics yet; the translations are just to help you understand how to read the logical statements.


∀x:IsPerson(x)⇒IsMammal(x)
∀x:IsMammal(x)⇒(∃y:IsMammal(y) ∧ IsMother(y,x))


The first statement says that for every x, if x is a person, then x is a mammal. The second statement says that for every x, if x is a mammal, then there is some mammal y which is x's mother.


∀x:IsMammal(x)⇒¬ColdBlooded(x)



This one says that for every x, if x is a mammal, then x is not cold-blooded.
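
Just to connect this back to the encoding sketch from earlier (this snippet reuses the helper constructors defined there): the second example, with its nested quantifier, comes out as a nested structure:

    # "∀x:IsMammal(x)⇒(∃y:IsMammal(y) ∧ IsMother(y,x))", built with the
    # helper constructors from the earlier sketch. Note how the ∃
    # quantifier nests inside the body of the ∀.
    every_mammal_has_a_mother = forall(
        "x",
        implies(
            pred("IsMammal", var("x")),
            exists("y", and_(pred("IsMammal", var("y")),
                             pred("IsMother", var("y"), var("x"))))))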


Now we're getting to the interesting part. How do we reason using predicate calculus? We don't need the semantics here: we can just work in terms of the symbols.

We need to start with a set of facts. A fact is a statement of the logic that we know is true. Using the predicate calculus, we can add to this set of facts.

We can name a set of facts; I'll mainly use F (uppercase F) to indicate the set of facts that are known. Reasoning is taking a set of facts F, and a set of inference rules, and seeing what facts you can add using those rules. When a set of facts F allows you to conclude that some statement s is true, we say that F entails s, written F :- s.

Now, we can write reasoning rules. There are two kinds of rules: inference rules (written X -> Y), which say "if I know X, then I can conclude Y"; and equivalence rules (written X ≡ Y), which say "X means exactly the same thing as Y". An equivalence is the same as the two inference rules "X -> Y" and "Y -> X".

F :- A ⇒ B, F :- A -> F :- B


That's one of the implication rules from propositional logic: if A implies B, and we know A is true, then we can conclude that B is true. All of the inference rules of propositional logic work in predicate logic as well.
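
To show just how mechanical that rule is, here's a rough sketch using the tuple encoding from earlier. The function name, the fact-set-as-Python-set representation, and the "Socrates" example are all just illustrative choices:

    # A sketch of the rule "F :- A ⇒ B, F :- A -> F :- B": given a set of
    # facts, add the consequent of every implication whose antecedent is
    # also in the set. Statements are nested tuples, as in the earlier sketch.
    def apply_modus_ponens(facts):
        derived = set(facts)
        for statement in facts:
            if statement[0] == "implies":
                _, antecedent, consequent = statement
                if antecedent in facts:
                    derived.add(consequent)
        return derived

    # F = { IsPerson("Socrates") ⇒ IsMammal("Socrates"), IsPerson("Socrates") }
    a = ("pred", "IsPerson", (("const", "Socrates"),))
    b = ("pred", "IsMammal", (("const", "Socrates"),))
    facts = {("implies", a, b), a}
    assert b in apply_modus_ponens(facts)   # F entails IsMammal("Socrates")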

For FOPL, we need to add inference rules for the new things: quantifiers and variables. There are two terms that come up in the rules about variables: free and bound. A variable is free if the predicate which encloses it is not inside of a quantifier which defines the variable. A variable is bound if it is inside of a quantifier which defines it.
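
Free versus bound is something you can compute purely mechanically. Here's a small sketch over the tuple encoding from earlier; again, the function name and the Knows(x, y) example are just my own illustrative choices:

    # Collect the free variables of a statement: a variable occurrence is
    # bound if some enclosing quantifier introduces it, and free otherwise.
    def free_variables(statement, bound=frozenset()):
        tag = statement[0]
        if tag == "var":
            return set() if statement[1] in bound else {statement[1]}
        if tag == "const":
            return set()
        if tag == "pred":
            return set().union(*(free_variables(a, bound) for a in statement[2]))
        if tag in ("forall", "exists"):
            return free_variables(statement[2], bound | {statement[1]})
        if tag == "not":
            return free_variables(statement[1], bound)
        # Binary operators: and, or, implies, iff.
        return free_variables(statement[1], bound) | free_variables(statement[2], bound)

    # In "∀x: Knows(x, y)", x is bound and y is free.
    stmt = ("forall", "x", ("pred", "Knows", (("var", "x"), ("var", "y"))))
    assert free_variables(stmt) == {"y"}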

First, we'll go through the equivalences. These are the standard quantifier equivalences:

  • ¬(∀x:P(x)) ≡ ∃x:¬P(x)
  • ¬(∃x:P(x)) ≡ ∀x:¬P(x)
  • ∀x:(P(x) ∧ Q(x)) ≡ (∀x:P(x)) ∧ (∀x:Q(x))
  • ∃x:(P(x) ∨ Q(x)) ≡ (∃x:P(x)) ∨ (∃x:Q(x))

The equivalences basically do two things: they tell you how to switch between universal (forall) and existential (there exists) statements; and how to combine or separate compound clauses inside of a quantifier. Inference rules do pretty much the same kinds of things, but instead of being equivalences, where the two sides of the equivalence mean exactly the same thing, inference rules are implications: they say that given the left side, you can conclude the right. Given the right-hand side of an inference rule, you can't conclude anything about the left: inference is one-way.
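
For example, the first equivalence can be applied as a purely mechanical rewrite. Here's a sketch over the tuple encoding from earlier (a real rewriter would recurse into sub-statements; this only handles the outermost pattern):

    # Apply the equivalences ¬∀x:P ≡ ∃x:¬P and ¬∃x:P ≡ ∀x:¬P at the top level.
    def push_negation(statement):
        if statement[0] == "not" and statement[1][0] == "forall":
            _, (_, name, body) = statement
            return ("exists", name, ("not", body))
        if statement[0] == "not" and statement[1][0] == "exists":
            _, (_, name, body) = statement
            return ("forall", name, ("not", body))
        return statement

    # ¬(∀x: IsMammal(x))  rewrites to  ∃x: ¬IsMammal(x)
    negated = ("not", ("forall", "x", ("pred", "IsMammal", (("var", "x"),))))
    assert push_negation(negated) == (
        "exists", "x", ("not", ("pred", "IsMammal", (("var", "x"),))))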

And now, the implications. These are the standard quantifier inference rules, written in the entailment notation from above:

  • F :- ∀x:P(x) -> F :- P(a), for any constant a (eliminating a universal)
  • F :- P(a) -> F :- ∃x:P(x) (introducing an existential)
  • F :- ∀x:P(x) -> F :- ∃x:P(x)
  • F :- ∃x:P(x) -> F :- P(c), where c is a new constant that appears nowhere else in F (eliminating an existential)

Hopefully, they're pretty sensible. They mainly show how you can introduce or eliminate quantifiers while reasoning.
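
To show what "eliminating a quantifier" looks like mechanically, here's a sketch of the universal-elimination rule over the tuple encoding from earlier. The substitution function is simplified, and the names (including the constant "Fido") are just my own choices:

    # Universal elimination: from ∀x:P(x) we can conclude P(a) for any
    # constant a, by substituting the constant for the bound variable.
    def substitute(statement, name, constant):
        tag = statement[0]
        if tag == "var":
            return ("const", constant) if statement[1] == name else statement
        if tag == "const":
            return statement
        if tag == "pred":
            return ("pred", statement[1],
                    tuple(substitute(a, name, constant) for a in statement[2]))
        if tag in ("forall", "exists"):
            if statement[1] == name:        # an inner quantifier shadows the name
                return statement
            return (tag, statement[1], substitute(statement[2], name, constant))
        # not / and / or / implies / iff: substitute into each sub-statement.
        return (tag,) + tuple(substitute(s, name, constant) for s in statement[1:])

    def instantiate_forall(statement, constant):
        _, name, body = statement           # expects ("forall", name, body)
        return substitute(body, name, constant)

    # From ∀x: IsMammal(x), conclude IsMammal("Fido").
    stmt = ("forall", "x", ("pred", "IsMammal", (("var", "x"),)))
    assert instantiate_forall(stmt, "Fido") == ("pred", "IsMammal", (("const", "Fido"),))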

Semantics is for another day; but what we have here is enough to do full logical reasoning. There's a straightforward mechanical procedure which you can follow, using these rules, to search for a proof of a particular statement in first-order predicate logic, which is essentially why we can call this the first-order predicate calculus: without knowing the semantics, we can do purely mechanical symbolic manipulation to figure out whether a particular statement can be derived.

Now, this post is getting awfully long, so I think I'll take this as a break point; I'll show you the mechanics of using FOPC in another post.
