A recap of where we've been,
and why we traveled there.
Why didn't we begin with quantifiers all along?
We saw three stages of logics:
- Propositional logic, with formulas such as wet ∧ cold → miserable.
While the propositions are named suggestively, nothing in the logic enforces any relation among them; the formula is equivalent to p ∧ q → r.
- Predicate logic,
where variables (and constants) can express a connection between different parts of the formula, e.g. likes(x, y) → knows(x, y).
Predicate logic introduced the idea of variables,
and required domains and interpretations to determine truth. But it can't bind variables, and thus a formula like this requires an interpretation of x and y to evaluate.
- First-order logic, which included two quantifiers, ∀ ("for all") and ∃ ("there exists"), to bind variables: e.g. ∀x. ∃y. likes(x, y).
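The "domains and interpretations" idea can be made concrete in code. Here is a minimal Python sketch (the domain, the relation symbol R, and both interpretations are illustrative choices, not from the lecture) showing that the truth of a quantified sentence depends on which interpretation we pick:

```python
# Truth of the first-order sentence  ∀x. ∃y. R(y, x)  depends on both
# the domain and the interpretation chosen for the relation symbol R.
# (The domain and interpretations below are illustrative, not the lecture's.)

domain = {0, 1, 2}

def evaluate(dom, R):
    """Evaluate ∀x ∈ dom. ∃y ∈ dom. R(y, x) by exhaustive search."""
    return all(any(R(y, x) for y in dom) for x in dom)

# Interpretation 1: R means "strictly greater than".
print(evaluate(domain, lambda y, x: y > x))   # False: no y in {0,1,2} exceeds 2

# Interpretation 2: R means "differs from".
print(evaluate(domain, lambda y, x: y != x))  # True: every x has some y ≠ x
```

The same sentence comes out false under one interpretation and true under another, which is exactly why predicate and first-order formulas need a domain and an interpretation before they can be evaluated.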
So why, you might ask, didn't we just start out with first-order logic
in the first lecture? One reason, clearly, is to introduce concepts one at a time:
everything you needed to know about one level was needed in the next, and then some.
But there's more: by restricting our formalisms, we can't express all the concepts
of the bigger formalism, but we can have automated
ways of checking statements or finding proofs.
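For propositional logic, one such automated check is brute-force enumeration of truth assignments. A minimal Python sketch (encoding a formula as a function over an assignment dictionary is an illustrative choice, not the lecture's notation):

```python
from itertools import product

def is_tautology(formula, variables):
    """Check a propositional formula by trying every truth assignment.

    `formula` maps a dict of variable -> bool to the formula's truth value.
    """
    return all(
        formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# (p → q) ∨ (q → p) holds under all four assignments...
print(is_tautology(
    lambda v: (not v["p"] or v["q"]) or (not v["q"] or v["p"]),
    ["p", "q"]))                                               # True

# ...while p → q alone fails when p is true and q is false.
print(is_tautology(lambda v: not v["p"] or v["q"], ["p", "q"]))  # False
```

The search is exponential in the number of variables, but it always terminates; no analogous procedure exists for first-order validity, which is undecidable. That is precisely the expressibility trade-off described above.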
In general, this is a common theme in the theory of any subject:
determining when and where you can (or need to) trade off expressibility for predictive value.
For example, …
- Linguistics:
Having a set of precise rules for (say) Tagalog grammar allows you to determine what is and isn't a valid sentence;
details of the formal grammar can reveal relations to other languages which aren't otherwise so apparent.
On the other hand, a grammar for any natural language is unlikely to exactly capture all the things which native speakers say and understand. If working with a formal grammar, one needs to know what is being lost
and what is being gained.
- Dismissing a grammar as irrelevant because it doesn't entirely reflect usage is missing the point of the grammar.
- Conversely, condemning some real-life utterances as ungrammatical (and ignoring them) forgets that the grammar is a model which captures
many (if not all) important properties.
Of course, any reasonable debate on this topic respects
these two poles and is actually about where the best trade-off between them lies.
- Psychology:
Say, Piaget might propose four stages of learning in children. It may be worth trading off total accuracy for (say) clues of what to look
for in brain development.
- Physics:
Modern pedagogy must trade off quantum accuracy for Newtonian approximations.
Researchers exploring fields like particle physics must trade off exact simulations for statistical (stochastic) approximations.
Understanding the theoretical foundations of a field
is often critical for knowing how to apply various techniques in practice.
Logic and everyday reasoning
We've looked at the impreciseness and ambiguity of natural language
statements, but these are not the only problems hidden in natural language arguments.
The following illustrates a common form of hidden assumption: saying
"the tenth reindeer of Santa Claus is …"
implies the existence of some tenth reindeer.
More subtly, humans use much more information than what is spoken in a conversation. Even aside from body language, consider
a friend asking you
"Hey, are you hungry?"
As a formal statement this carries no information, but in real life
it strongly suggests that your friend is hungry.