Complexity measures A recursive program terminates iff its sub-argument relation is well founded, but usually the only way to calculate the number of iterations needed is to execute the program itself. This does not matter: as the first result shows, it is enough to exhibit some quantity that is strictly reduced at each iteration, and then the loop terminates. Such a quantity is called a loop measure.
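As an informal illustration (not from the text), Euclid's algorithm terminates because its second argument serves as a loop measure in the natural numbers:

```python
def gcd(a, b):
    """Greatest common divisor of two natural numbers."""
    # Loop measure: the second argument b.  Each iteration replaces
    # (a, b) by (b, a % b), and a % b < b, so the measure strictly
    # decreases in the well founded order on the naturals; hence the
    # loop terminates, even though the number of iterations is not
    # obvious in advance.
    while b != 0:
        a, b = b, a % b
    return a
```

For example, gcd(48, 18) runs through the measures 18, 12, 6, 0 and returns 6.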
DEFINITION 2.6.1 A function f: (X, <) → (Y, ≺) between sets with binary relations is said to be strictly monotone if
∀x, x'. x' < x ⇒ f(x') ≺ f(x).
PROPOSITION 2.6.2 If (Y, ≺) is well founded then so is (X, <).
PROOF:
(box proof omitted)
See Exercise 3.54 for an alternative proof. []
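On a finite relation, Proposition 2.6.2 can be applied mechanically: exhibit a candidate measure into the naturals and check descent along every related pair. A small sketch (the function names and the sample relation are illustrative, not from the text):

```python
def is_strictly_monotone(pairs, f):
    # pairs lists the related elements (x1, x) with x1 < x, and f is a
    # candidate measure into the naturals.  If every pair descends
    # strictly under f, then f is strictly monotone, and by
    # Proposition 2.6.2 the relation inherits well-foundedness
    # from (N, <).
    return all(f(x1) < f(x) for (x1, x) in pairs)

# Sub-argument relation of the factorial program on {0, ..., 5}:
# n - 1 is the sub-argument of n, and the identity measure witnesses
# well-foundedness.
factorial_pairs = [(n - 1, n) for n in range(1, 6)]
```

Here is_strictly_monotone(factorial_pairs, lambda n: n) succeeds, whereas a reflexive pair such as (2, 2) defeats every candidate measure.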
EXAMPLES 2.6.3 For the majority of applications, including the first two of Examples 2.5.2 (factorial and compiler), termination is very easy, because we can see directly that the sub-argument is a smaller number, a shorter string of symbols or a shallower tree. Gauss's proof is shown to terminate with a little more effort: the sub-arguments are polynomials of degree 2^k n_0, 2^{k-1} n_1, ..., 2 n_{k-1}, n_k, where the n_i are odd numbers.
EXAMPLE 2.6.4 Normalisation for the λ-calculus (Fact 2.3.3) is a qualitatively more difficult problem. For weak normalisation, we must define a reduction strategy, ie a way of choosing a redex of any term t, such that the relation u ≺ t is well founded, where u is the result of the chosen reduction. This is a tail recursion, because there is just one reduced term, which we regard as a sub-argument (in the sense of Definition 2.5.1(a)), and whose normal form is that of the original term. The entire normalisation process happens in the parsing phase of the recursive paradigm, with trivial evaluation.
The strictly monotone function used to show termination takes the term t to the set of types of its subterms, so the proof depends on subject reduction (Definition 1.2.12) and fails for the untyped λ-calculus.
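A minimal executable sketch of one such strategy, leftmost-outermost (normal-order) reduction, using de Bruijn-indexed terms; the representation and the helper names (shift, subst, step) are illustrative assumptions, not the book's. Consistently with the remark above, nothing stops normalise from diverging on an untyped term such as Ω = (λx.xx)(λx.xx):

```python
def shift(t, d, c=0):
    # Shift free variables of de Bruijn term t by d, above cutoff c.
    if t[0] == "var":
        return ("var", t[1] + d) if t[1] >= c else t
    if t[0] == "lam":
        return ("lam", shift(t[1], d, c + 1))
    return ("app", shift(t[1], d, c), shift(t[2], d, c))

def subst(t, j, s):
    # Substitute s for variable j in t.
    if t[0] == "var":
        return s if t[1] == j else t
    if t[0] == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def step(t):
    # One leftmost-outermost reduction step, or None if t is normal.
    if t[0] == "app":
        f, a = t[1], t[2]
        if f[0] == "lam":                     # beta-redex (lam b) a
            return shift(subst(f[1], 0, shift(a, 1)), -1)
        r = step(f)
        if r is not None:
            return ("app", r, a)
        r = step(a)
        return None if r is None else ("app", f, r)
    if t[0] == "lam":
        r = step(t[1])
        return None if r is None else ("lam", r)
    return None

def normalise(t):
    # Tail-recursive loop: reduce the chosen redex until none remains.
    # Terminates on typable terms (Fact 2.3.3), but may diverge on
    # untyped terms such as Omega.
    while True:
        r = step(t)
        if r is None:
            return t
        t = r
```

As a sanity check, normal order reduces K I Ω to I without ever contracting the Ω subterm, which an innermost strategy would loop on forever.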
Unions Well-foundedness is a local property.
DEFINITION 2.6.5 A subset U ⊂ X is called an initial segment if it is closed under predecessors:
∀x, x'. x ∈ U ∧ x' ≺ x ⇒ x' ∈ U.
PROPOSITION 2.6.6 Let (X, ≺) be a set with a binary relation. Suppose that for every x ∈ X there is a well founded initial segment X' ⊂ X with x ∈ X'. Then X is itself well founded.
PROOF: (Essentially Remark 2.5.13.) Let θ be a predicate on X for which the premise of the induction scheme holds, and let x ∈ X: we have to show θ[x]. Let X' ⊂ X be a well founded initial segment with x ∈ X'. The premise still holds when restricted to X', and by well-foundedness the conclusion, ∀x'. x' ∈ X' ⇒ θ[x'], follows. In particular, θ[x]. []
It follows, for example, that the disjoint union of sets with well founded relations is well founded.
Products There are three ways of putting a well founded relation on the product of two sets, depending on how strict we are about descending on the two sides.
PROPOSITION 2.6.7 The cartesian product of a well founded relation (Y, ≺) with an arbitrary relation (X, <) is well founded:
(x', y') ≺ (x, y) if y' ≺ y,
so that descent is required only on the well founded side.
PROOF: The projection X × Y → Y is strictly monotone. []
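A hypothetical loop descending in this product relation (my example, not the book's): the second coordinate strictly decreases in the naturals while the first changes arbitrarily, and the projection yields termination:

```python
def drain(x, y):
    # Pairs (x, y) descend in the product relation of Proposition
    # 2.6.7: arbitrary moves on the X side, strict descent on the
    # well founded Y side.  Projecting onto y is strictly monotone,
    # so the loop terminates after exactly the initial y iterations.
    steps = 0
    while y > 0:
        x = (3 * x + 1) % 97   # arbitrary, possibly increasing
        y -= 1                 # strict descent
        steps += 1
    return steps
```

The X coordinate may grow without bound between iterations; only the Y coordinate carries the termination argument.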
The next construction was popular in classical mathematics because it preserves trichotomy; in particular this is how to multiply ordinals. See Exercise 2.40 for a more general result.
PROPOSITION 2.6.8 The lexicographic product, defined by
(x', y') ≺ (x, y) if x' < x ∨ (x' = x ∧ y' ≺ y),
is well founded whenever (X, <) and (Y, ≺) both are.
PROOF: The induction hypothesis is of the form (α ∨ β) ⇒ θ; this is equivalent to (α ⇒ θ) ∧ (β ⇒ θ), which is more convenient.
(box proof omitted)
[]
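The standard illustration (not discussed in the text here) is the Ackermann function, whose recursion descends in the lexicographic product of Proposition 2.6.8:

```python
def ack(m, n):
    # Each recursive call descends lexicographically on (m, n):
    # either m decreases, or m stays fixed and n decreases.
    # Well-foundedness of the lexicographic product therefore
    # guarantees termination, although the running time is enormous.
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))
```

Note that the outer call ack(m - 1, ack(m, n - 1)) may have a far larger second argument than (m, n), so the interleaved product of the next proposition would not apply: the lexicographic order is genuinely needed here.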
PROPOSITION 2.6.9 The interleaved product relation is defined by
(x', y') ≺ (x, y) if (x' ≺ x ∧ y' ≺ y) ∨ (x' ≺ x ∧ y' = y) ∨ (x' = x ∧ y' ≺ y),
in which descent is needed on both sides, at least one of them being strict. This is well founded, assuming that both ≺_X and ≺_Y are.
PROOF: The interleaved relation is sparser than the lexicographic one, so the identity function is strictly monotone from it to the lexicographic product, and Proposition 2.6.2 applies. []
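To make the definition concrete, here is a small checker (the names and the sample run are illustrative), taking < on the naturals for both coordinates:

```python
def interleaved_less(p, q):
    # The relation of Proposition 2.6.9 with < on the naturals for
    # both coordinates: descend on both sides, at least one strictly.
    (x1, y1), (x, y) = p, q
    return ((x1 < x and y1 < y) or
            (x1 < x and y1 == y) or
            (x1 == x and y1 < y))

# Every consecutive pair of this run satisfies the relation, so any
# such run must reach (0, 0) in finitely many steps.
run = [(5, 5), (5, 4), (4, 4), (3, 2), (0, 2), (0, 0)]
```

By contrast, (2, 5) does not precede (3, 4): the first coordinate descends but the second ascends, which no disjunct permits.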
Although box proofs such as these are not difficult to find, they are nevertheless quite complicated. To express them in the vernacular would require an ad hoc lemma for each box (Remark 1.6.2), but this would only take us backwards conceptually. Intuitionism has forced us to devise auxiliary predicates (ψ), and it is by investigating their role that we make progress in Chapter VI and [Tay96b].
The predicate calculus has a better claim to being the ``machine code'' of mathematics than set theory or the Sheffer stroke does, but machine code is always rather clumsy in handling higher level idioms. Monotonicity,
if x:X ⊢ φ[x] ⇒ ψ[x] then ⊢ (∀x:X. φ[x]) ⇒ (∀x:X. ψ[x]),
which was elided from the end of the first proof, is the first of many concepts which need to be coded on top of the predicate calculus, but which have foundational status themselves. Monotonicity is the subject of the next chapter. In fact the modal operator (Definition 3.8.2)
[≻]θ[t] ≡ (∀u. u ≺ t ⇒ θ[u])   (necessarily θ)
is the key to reducing these proofs to easy calculations; Definition 2.6.1, for example, says that [≻][f]θ ⇒ [f][>]θ. Although well-foundedness of the transitive closure and the strict induction scheme can be studied using the methods which we have described, this will be much simpler with modal logic in Theorem 3.8.11 and Exercise 3.54. Arithmetic for the ordinals (Section 6.7) packages the techniques of this section.
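As a closing illustration (my construction, not the book's), the modal operator can be iterated on a finite relation: starting from the empty predicate, add each element t as soon as [≻]θ[t] holds. On a well founded relation this fixed point reaches every element, exactly as the induction scheme promises, while a cycle blocks it:

```python
def inductive_closure(elements, below):
    # Iterate the modal operator: add t to theta once theta[u] holds
    # for every u below t.  On a well founded finite relation the
    # result is all of elements (the induction scheme); the members
    # of a cycle are never added.
    elements = list(elements)
    theta = set()
    changed = True
    while changed:
        changed = False
        for t in elements:
            if t not in theta and all(u in theta for u in below(t)):
                theta.add(t)
                changed = True
    return theta

# A well founded example: u precedes t when u properly divides t.
proper_divisors = lambda t: [u for u in range(1, t) if t % u == 0]
```

On {1, ..., 12} with proper divisibility, the closure starts from 1 (which has no proper divisors) and eventually captures everything; on the cyclic relation 1 → 2 → 3 → 1 it captures nothing at all.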