Grokking recursion

September 25, 2020

If we want to use dependently typed languages as proof checkers, we better be sure they are consistent as a logic, so that we don’t accidentally prove ⊥ and, as a consequence, any proposition.

One huge source of inconsistency is non-terminating computations; hence languages like Idris or Agda go to great lengths to ensure that functions indeed do terminate. But, for deep reasons (essentially, the halting problem), a purely automated check with neither false positives nor false negatives just does not exist, so compromises must be made. Naturally, when talking about proofs, it’s better to be safe than sorry, so these languages strive to never label a function that doesn’t really terminate for all inputs as terminating. Consequently, this means there are terminating functions that the termination checker does not accept. Luckily, such functions can often be rewritten to make the checker happy, so that every recursive call is "smaller" in some sense.

This post emerged from me trying to persuade Agda that a bunch of mutually recursive functions are all terminating. I went through Agda’s standard library to figure out how to do this, taking notes on what the different abstractions I encountered mean and expand to. Then I figured that, if I poured some more words into my notes, they might turn out to be useful for somebody else, so, well, here it is.

Peeling the gcd

As an example that is complex enough for the termination checker to need our help, but simple enough to avoid cluttering our attention with too many unnecessary details, we’ll take the Euclidean algorithm for finding the greatest common divisor of two numbers. Luckily, the Agda standard library already has one implemented, so we’ll focus on the helper function that does all the heavy lifting:

gcd′ : ∀ m n → Acc _<_ m → n < m → ℕ
gcd′ m zero          _         _   = m
gcd′ m n@(suc n-1)   (acc rec) n<m = gcd′ n (m % n) (rec _ n<m) (m%n<n m n-1)

This function has a parameter of type Acc something something. Looks like this is the key to persuading the termination checker that gcd′, well, terminates, but what is that, exactly?

If we trace the imports chain, we’ll end up in the Induction.WellFounded module, staring at this definition:

data Acc {A : Set a} (_<_ : Rel A ℓ) (x : A) : Set (a ⊔ ℓ) where
  acc : (rs : WfRec _<_ (Acc _<_) x) → Acc _<_ x

This seems to be polymorphic on the universe levels with all those a’s and ℓ’s, but to build our intuition, we can just ignore that and assume we’re working in the universe of usual types. Or, at least, that’s what I prefer to do — just fewer things to keep in mind and care about.

Besides that, Acc is parametrized by the type A of the elements, a binary relation _<_ on that type, and a particular element x.

The type and relation parameters make sense — we know from logic that it’s actually relations that are well-founded, so it’s natural to expect one to pop up somewhere. But what’s the role of the element? To figure that out, let’s look at how x is used in Acc’s sole constructor.

What’s WfRec? It’s defined just above in the same module:

WfRec : Rel A r → ∀ {ℓ} → RecStruct A ℓ _
WfRec _<_ P x = ∀ y → y < x → P y

Well, we can already substitute the body of WfRec for its name in the acc constructor, but let’s figure out what exactly that Rel and some weird RecStruct mean.

We could probably derive what all that means from the context, but let’s not cheat and chase the definitions down instead.

Rel uses a more generic REL in its definition:

-- Heterogeneous binary relations

REL : Set a → Set b → (ℓ : Level) → Set (a ⊔ b ⊔ suc ℓ)
REL A B ℓ = A → B → Set ℓ

-- Homogeneous binary relations

Rel : Set a → (ℓ : Level) → Set (a ⊔ suc ℓ)
Rel A ℓ = REL A A ℓ

but it all boils down to a binary function A → A → Set.

RecStruct happens to be more involved:

-- A RecStruct describes the allowed structure of recursion. The
-- examples in Induction.Nat should explain what this is all about.

RecStruct : ∀ {a} → Set a → (ℓ₁ ℓ₂ : Level) → Set _
RecStruct A ℓ₁ ℓ₂ = Pred A ℓ₁ → Pred A ℓ₂

The comment refers us to Induction.Nat, but Induction.Nat is deprecated, and the Data.Nat.Induction module it refers to doesn’t have too many examples. Welp.

So RecStruct takes a type A and returns some other type (recall we ignore levels). That other type is a function from Pred A to itself, and Pred A is basically a function A → Set — that’s quite a common way to encode predicates on A in type theory after all.

All in all, RecStruct A is basically (A → Set) → (A → Set), and Rel A is A → A → Set. Given that, WfRec might just as well be written as

WfRec : (A → A → Set) → (A → Set) → A → Set
WfRec _<_ P x = ∀ y → y < x → P y

If we now expand WfRec in Acc, we’ll get

data Acc {A : Set a} (_<_ : Rel A ℓ) (x : A) : Set (a ⊔ ℓ) where
  acc : (rs : ∀ y → y < x → Acc _<_ y) → Acc _<_ x

This now makes a tad more sense. To construct Acc _<_ x, we need to provide a function that, for each y that is smaller than x, can construct Acc _<_ y.

But how on Earth is it related to termination? Why is this Acc sufficient to prove that something always terminates?

Accessibility and structural recursion

Turns out, it’s our good old structural recursion in disguise.

In structural recursion, everything’s OK if, to simplify a lot, we recur on a subterm of one of our arguments. Consider the typical Fibonacci example:

fib : ℕ → ℕ
fib 0 = 0
fib 1 = 1
fib (suc (suc n)) = fib n + fib (suc n)

The last line is fine because the recursive call to fib happens either with n or with suc n, and both are subterms of the argument suc (suc n). Our favourite dependently typed languages’ metatheories guarantee that all values of any data type are "finite", so this function will eventually terminate.

Although ℕ is one of the simplest types one could think of:

data ℕ : Set where
  zero : ℕ
  suc  : ℕ → ℕ

the same logic, of course, applies to more complex ones with more complex constructors. Like binary trees:

data Tree : Set where
  leaf : Tree
  node : Tree → Tree → Tree

Of course, if we recur on either subtree of a node, we’re still good as it’s still a subterm, so a function like this is total:

countBiased : Tree → ℕ
countBiased leaf = 1
countBiased (node l r) = 1 + 2 * countBiased l + countBiased r

Now here’s the trick. Note that the definition of Tree above is isomorphic to this one:

data Tree' : Set where
  leaf' : Tree'
  node' : (Bool → Tree') → Tree'

Indeed, our original node constructor carries along two subTrees. Our new node' also contains two subTree's, but this time in the form of a function that, say, returns the left subtree when passed true and the right one when passed false:

countBiased' : Tree' → ℕ
countBiased' leaf' = 1
countBiased' (node' f) = 1 + 2 * countBiased' (f true) + countBiased' (f false)

Of course, our ℕs can also be written this way, but with a function taking a singleton type:

data ℕ' : Set where
  zero' : ℕ'
  suc'  : (⊤ → ℕ') → ℕ'

Exercise to the reader: write the functions witnessing the isomorphism between Tree and Tree' and write a proof that they’re indeed an iso.
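
In case you want to compare notes, the two conversion functions might look roughly like this (hypothetical names; Bool, true and false are the same ones already used above). The iso proof itself is left out, and the Tree' → Tree → Tree' direction of it will want function extensionality:

toTree' : Tree → Tree'
toTree' leaf       = leaf'
toTree' (node l r) = node' (λ { true → toTree' l ; false → toTree' r })

toTree : Tree' → Tree
toTree leaf'     = leaf
toTree (node' f) = node (toTree (f true)) (toTree (f false))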

Now, if Tree and Tree' are isomorphic, one might hope that the termination checker is smart enough to see that countBiased' defined above is also total. And, turns out, it is!

The approach taken in Tree' and ℕ' might seem clumsy, but there are two really nice things about it. First, it generalizes nicely and is way more expressive. For instance, we might have a tree with countably infinitely many subtrees at each node:

data TreeN : Set where
  leafN : TreeN
  nodeN : (ℕ → TreeN) → TreeN

or we might have a tree with finitely-but-not-known-in-advance many subtrees:

data TreeF : Set where
  leafF : TreeF
  nodeF : (n : ℕ) → (Fin n → TreeF) → TreeF

Well, you get the idea.

Just as importantly, the results of the functions carried in constructors like these are still seen as structurally smaller, so we can recur on them while remaining total. And, of course, the metatheory guarantees that it’s still logically consistent! So, for example, the termination checker sees the following slightly nonsensical function as total:

stupidCount : TreeN → ℕ
stupidCount leafN = 1
stupidCount (nodeN f) = 1 + stupidCount (f 0) + stupidCount (f 100)

For those wondering about all this stuff and wanting to learn more, the notion of strict positivity is closely related. Unfortunately, I don’t know of any good sources on this.

Turning back to our GCD investigation, note that Acc follows the shape of all those funny types we defined above. Namely, the (sole) constructor for Acc _<_ x takes a function that, given some other element y and a witness of y < x (that is, y being smaller than x according to the relation _<_), produces another Acc, namely, a value of type Acc _<_ y. Now we know that the termination checker sees the newly constructed Acc _<_ y as something that’s structurally smaller than our original Acc _<_ x (even though the type is slightly different, having y instead of x). Thus we can pass that to the recursive call, and the checker will happily accept that even if none of the other arguments got structurally smaller.

In a sense, Acc serves to convert the notion of being-smaller-according-to-a-relation into the notion of being-structurally-smaller, and it’s up to us, the programmers, to provide the corresponding function.
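
Incidentally, the standard library has a name for exactly this kind of "corresponding function": a relation is well-founded when every element is accessible. Ignoring universe levels as before, Induction.WellFounded defines it roughly as

WellFounded : Rel A ℓ → Set _
WellFounded _<_ = ∀ x → Acc _<_ x

and functions of exactly this shape are what we’ll be writing below.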

Examples

But how do we do that?

Natural numbers

For a start, let’s write a proof that every x : ℕ is accessible according to the usual less-than relation on numbers (that is, _<′_ from Data.Nat.Base, which is equivalent to _<_ but slightly nicer to work with here). This proof is just an object of type Acc _<′_ x for any x:

<′-acc : (x : ℕ) → Acc _<′_ x

After some usual Agda moves, we might arrive at something like

<′-acc : (x : ℕ) → Acc _<′_ x
<′-acc x = acc (f x)
  where
    f : (x y : ℕ) → y <′ x → Acc _<′_ y
    f x y y<′x = {! !}

Let’s focus on the function f now. How do we fill the hole? When in doubt, do a case split! So let’s split on y<′x:

    f : (x y : ℕ) → y <′ x → Acc _<′_ y
    f .(suc y) y ≤′-refl = {! !}
    f .(suc _) y (≤′-step y<′x) = {! !}

How are y <′ x and ≤′-refl/≤′-step related?

Note that Agda’s stdlib defines m <′ n to be an alias for suc m ≤′ n:

_<′_ : Rel ℕ 0ℓ
m <′ n = suc m ≤′ n

So, ≤′-refl and ≤′-step are merely the constructors of _≤′_:

data _≤′_ (m : ℕ) : ℕ → Set where
  ≤′-refl :                         m ≤′ m
  ≤′-step : ∀ {n} (m≤′n : m ≤′ n) → m ≤′ suc n

The first hole expects us to produce some value of type Acc _<′_ y. But here’s the exciting part: our top-level <′-acc produces exactly that when given y! Even better, the termination checker will happily allow y as the argument: even in the worst case of f not calling itself recursively in the second hole (we’ll get to that), y is a subterm of suc y, that is, of the x that was passed to the original invocation of f and that is the formal parameter of <′-acc. So we complete the first hole as

    f .(suc y) y ≤′-refl = <′-acc y

Now let’s move on to the second one. Removing the dot pattern and giving a name to the first number gives us

    f (suc x) y (≤′-step y<′x) = {! !}

with the hole context

Goal : Acc _<′_ y
--------------------------------------------------------------------------------
x : ℕ
y : ℕ
y<′x : y <′ x

This is just asking for a recursive call on f, passing x, y, and y<′x. It’s easy to see the recursive call returns the right type, and it’s structurally fine too: y<′x is a subterm of ≤′-step y<′x, and x is a subterm of suc x. Agda agrees with us.

All in all, we get:

<′-acc : (x : ℕ) → Acc _<′_ x
<′-acc x = acc (f x)
  where
    f : (x y : ℕ) → y <′ x → Acc _<′_ y
    f .(suc y) y ≤′-refl = <′-acc y
    f (suc x) y (≤′-step y<′x) = f x y y<′x
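
Just to see the witness doing some work, here’s a deliberately silly toy function (hypothetical name; plain structural recursion on n would of course be accepted here as well). The point is the plumbing: n <′ suc n is literally ≤′-refl, so each step can ask rs for the Acc of the predecessor:

countDown : (n : ℕ) → Acc _<′_ n → ℕ
countDown zero    _        = zero
countDown (suc n) (acc rs) = suc (countDown n (rs n ≤′-refl))

A caller would then kick it off as countDown n (<′-acc n).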

Pairs

What about something more complicated, like a pair of natural numbers with the lexicographical ordering? All this accessibility stuff can handle this too!

For example, one way to define the ordering relation is

infix 1 _<,<_

data _<,<_ : (x y : ℕ × ℕ) → Set where
  <,<fst : ∀ {x₁ x₂ y₁ y₂} (x₁<′y₁ : x₁ <′ y₁) → x₁ , x₂ <,< y₁ , y₂
  <,<snd : ∀ {n x₂ y₂}     (x₂<′y₂ : x₂ <′ y₂) →  n , x₂ <,<  n , y₂

Proving that it’s well-founded is left as an exercise for the reader.

Solution.

Alright, I was just too lazy to describe the process of building the proof. Here it is in its final form:

<,<acc : (x₁ x₂ : ℕ) → Acc _<,<_ (x₁ , x₂)
<,<acc x₁ x₂ = acc (λ where
                      (y₁ , y₂) y<,<x → f x₁ x₂ y₁ y₂ y<,<x)
  where
    f : (x₁ x₂ y₁ y₂ : ℕ) → (y₁ , y₂) <,< (x₁ , x₂) → Acc _<,<_ (y₁ , y₂)
    f _  _ y₁ y₂ (<,<fst ≤′-refl)          = <,<acc y₁ y₂
    f _  _ _  _  (<,<fst (≤′-step x₁<′y₁)) = f _ _ _ _ (<,<fst x₁<′y₁)
    f x₁ _ _  y₂ (<,<snd ≤′-refl)          = <,<acc x₁ y₂
    f _  _ _  _  (<,<snd (≤′-step x₂<′y₂)) = f _ _ _ _ (<,<snd x₂<′y₂)

Its structure is very similar to the one for the scalars: we recur back into our top-level function in the ≤′-refl cases (because one of the components actually got smaller), and otherwise our helper f recurses into itself, peeling off ≤′-steps.
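
As a (hedged) usage sketch with hypothetical names: the Ackermann function recurses lexicographically on its pair of arguments, so we can thread such a witness through it, asking rs for a fresh Acc at every call. (Agda happens to accept the plain definition of ack anyway, so this is purely illustrative.)

ackAcc : (m n : ℕ) → Acc _<,<_ (m , n) → ℕ
ackAcc zero    n       _        = suc n
ackAcc (suc m) zero    (acc rs) = ackAcc m 1 (rs (m , 1) (<,<fst ≤′-refl))
ackAcc (suc m) (suc n) (acc rs) =
  let k = ackAcc (suc m) n (rs (suc m , n) (<,<snd ≤′-refl))
  in  ackAcc m k (rs (m , k) (<,<fst ≤′-refl))

We’d then call it as ackAcc m n (<,<acc m n).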

Counterexamples

Alright, we’ve seen a couple of examples of relations that we’ve proven to be well-founded. But what if we try to prove this for something that isn’t actually well-founded? Where exactly does it break? Proving that some things can’t be expressed can be challenging, but we can at least try to build some intuition. And I, for one, find these kinds of questions and examples to be as instructive as the "positive" ones, so let’s go find out!

Trivial type

Perhaps the simplest example is the unit type ⊤ with a relation that holds for any two elements of this type (trivially, because there’s just one element of this type):

data ⊤ : Set where
  tt : ⊤

data _!_ : (x y : ⊤) → Set where
  eq : ∀ x y → x ! y

What if we try to write the corresponding well-foundedness witness?

!-acc : (x : ⊤) → Acc _!_ x
!-acc x = {! !}

What’s the type of this hole?

Goal : Acc _!_ x
--------------------------------------------------------------------------------
x : ⊤

The only way to construct an Acc is via the acc constructor:

!-acc : (x : ⊤) → Acc _!_ x
!-acc x = acc (λ y y!x → {! !})

Here, y!x witnesses the relation between y and x. But this relation gives us no extra information, so we might just as well ignore it!

!-acc : (x : ⊤) → Acc _!_ x
!-acc x = acc (λ y _ → {! !})

What’s the type of the hole?

Goal : Acc _!_ y
--------------------------------------------------------------------------------
x : ⊤
y : ⊤

So we’re back at square one: we need to produce some Acc _!_ y for y that we know nothing about, and our only option is to invoke acc again, passing another lambda to it, ad infinitum.
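
In fact, we can turn this intuition into a proof (a small sketch with a hypothetical name; ⊥ comes from Data.Empty): assume we do have a witness and keep asking it for another one. The Acc argument gets structurally smaller at every step, so this recursion is accepted, which means Acc _!_ x can’t be inhabited in the first place:

no-!-acc : (x : ⊤) → Acc _!_ x → ⊥
no-!-acc x (acc rs) = no-!-acc tt (rs tt (eq tt x))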

Going circles

What about something less trivial? Let’s consider the rock-paper-scissors game, also known as Peano numbers without the axiom ∀x. suc(x) ≠ 0 on a three-element set:

data Obj : Set where
  rock : Obj
  paper : Obj
  scissors : Obj

data _<_ : (x y : Obj) → Set where
  r<p : rock < paper
  p<s : paper < scissors
  s<r : scissors < rock

Due to Obj’s finiteness, it seems that the right way to build the well-foundedness witness is by case-splitting on the argument. Let’s do that and consider any one of the three branches. I enjoy metal, so I’ll go with rock:

<acc : (x : Obj) → Acc _<_ x
<acc rock = acc (λ y y<x → {! !})
<acc paper = {! !}
<acc scissors = {! !}

If we now split on y<x, we’ll get:

<acc rock = acc (λ where scissors s<r → {! !})

with the goal Acc _<_ scissors. We cannot progress with <acc scissors (the termination checker will reject that), nor can we manually unfold it into the hole, as we’ll eventually get back to having to produce Acc _<_ rock. Square one again.
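
The same sort of ⊥-derivation works here too (hypothetical name): we go around the circle forever, with the Acc argument getting structurally smaller at every step, so Acc _<_ x must be uninhabited for all three elements:

no-<acc : (x : Obj) → Acc _<_ x → ⊥
no-<acc rock     (acc rs) = no-<acc scissors (rs scissors s<r)
no-<acc paper    (acc rs) = no-<acc rock     (rs rock r<p)
no-<acc scissors (acc rs) = no-<acc paper    (rs paper p<s)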

Infinite chains of unique elements

If we get back to the natural numbers, but now consider the relation _>_, we’ll also have trouble implementing the corresponding accessibility witness.

Indeed, intuitively, the problem is that, for any given number x for which we need to produce an accessibility witness, we might be given some other number y greater than x. That witness, in turn, might be given some z that’s greater than y, but there are still about as many zs greater than y as there were ys greater than x, so we haven’t reduced our "target space" at all.

Or, for a slightly different perspective, suppose we’ve somehow obtained the witness. We can feed it 0, then feed the result 1, then feed the result of the result 2, and so on. Since we effectively get structural recursion, we could write a non-terminating computation, but since Agda is consistent, we shouldn’t be able to!

Or, in code, nicely and concisely:

diverge : (x : ℕ) → Acc _>_ x → ⊥
diverge x (acc f) = diverge (suc x) (f (suc x) ≤-refl)

So, either ℕ is uninhabited (which is obviously wrong) or Acc _>_ x is uninhabited for every x.

Recursion, finally

Let’s now see how we can use accessibility recursion in practice and take another look at the definition of GCD in Agda’s standard library:

gcd′ : ∀ m n → Acc _<_ m → n < m → ℕ
gcd′ m zero          _         _   = m
gcd′ m n@(suc n-1)   (acc rec) n<m = gcd′ n (m % n) (rec _ n<m) (m%n<n m n-1)

In the second clause, we know that the second argument, n, is less than the first one, m. Some simple arithmetic shows that m % n is smaller than n (since n is non-zero), so we do the recursive call with the proper first two arguments. The last argument uses the m%n<n lemma, which captures exactly that arithmetic fact. As for the third argument, we use the accessibility witness returned by rec for the number n (that’s smaller than m). This passes the termination checker for the very same reason as the stupidCount function above.
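
For completeness, here’s a hedged sketch of how a top-level gcd might kick all of this off (the real definition in Data.Nat.GCD may differ in details; <-wellFounded is the proof from Data.Nat.Induction that _<_ is well-founded, playing the role of our <′-acc). We build the initial Acc for n and swap the arguments, so that the "second argument is smaller than the first" precondition is exactly m%n<n m n-1:

gcd : ℕ → ℕ → ℕ
gcd m zero        = m
gcd m n@(suc n-1) = gcd′ n (m % n) (<-wellFounded n) (m%n<n m n-1)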

After this little excursion, I managed to write something quite a bit more involved, recursing on a bunch of nested derivation trees of different types for a toy language of mine (as I’m working on some proofs related to the type system of that language). Hopefully, these notes will also help somebody else to express their non-trivial algorithms in a way the termination checker accepts.

Ah, and, by the way, Idris has similar primitives called accInd and accRec. Moreover, it also has sizeInd/sizeRec wrappers for the (arguably) most common case of some notion of size, expressed as a natural number, decreasing with each recursive call. And, given that we’ve seen how Acc is defined in Agda, it should be reasonably straightforward to express a similar primitive in any dependently typed language.
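
For reference, an accRec-style eliminator is only a few lines on top of what we’ve already seen. Here’s a sketch (hypothetical name, universe levels ignored, and the y argument of rs explicit, as in the stdlib version used throughout this post); Agda’s own Induction.WellFounded provides similar, more general machinery:

accRec : {A : Set} {_<_ : A → A → Set} (P : A → Set)
       → (∀ x → (∀ y → y < x → P y) → P x)
       → ∀ x → Acc _<_ x → P x
accRec P step x (acc rs) = step x (λ y y<x → accRec P step y (rs y y<x))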