# Preparing For Combinators

Before we can define the Y-Combinator, we first need to look at how recursion can be performed using only anonymous functions. And before we can define recursion using anonymous functions, we must first understand some principles that will help us design and then refactor our code.

These principles don't focus on what a function does; instead, they show us how functions can be manipulated, combined and restructured without altering their functionality. These concepts are:

- Higher Order Functions
- Fixed Points
- Functional Refactoring
  - 3.1 Tennent's Correspondence Principle
  - 3.2 Function Wrapping
  - 3.3 Inline Definition

## 1) Higher Order Functions

A higher order function is simply a function that either takes another function as a parameter, or gives back a function as a return value, or both.

In the Mindshift Part 4 and Mindshift Part 5 blogs, we saw some examples of partial functions - the `adder` and `multiplier` functions to be specific. In each case, when you pass only one parameter to the `adder` or `multiplier` functions, they give you back another function that firstly contains an incomplete definition of how to perform the operation, and secondly takes one parameter that supplies the missing value needed to complete the operation. Both `adder` and `multiplier` are examples of higher order functions.
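As a reminder, here is a minimal sketch of what such `adder` and `multiplier` functions might look like (the exact definitions in the Part 4 and Part 5 blogs may differ slightly in naming):

```javascript
// Each function takes its first parameter and returns a new function
// that waits for the second parameter: a partial function
const adder      = p => q => q + p;
const multiplier = p => q => q * p;

// Passing only one parameter gives back another function...
const add5   = adder(5);
const double = multiplier(2);

// ...which supplies the missing value when finally invoked
add5(3);    // 8
double(7);  // 14
```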

Another important example of a higher order function is the concept of function composition. An example will probably help here.
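For instance, a `compose` function takes two functions and returns a third function that applies them one after the other (this is the same shape as the `compose` that reappears at the end of this post):

```javascript
// compose takes two functions and returns a new function that
// applies fn1 first, then feeds its result into fn2
const compose = fn1 => fn2 => val => fn2(fn1(val));

const double = val => val * 2;
const add5   = val => val + 5;

const doubleAndAdd5 = compose(double)(add5);
doubleAndAdd5(3);  // (3 * 2) + 5 = 11
```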

If higher order functions and function composition are new concepts to you, then you should spend some extra time going over this coding in order to give your mind the time to become comfortable with these ideas.

## 2) Fixed Points

The simplest way to demonstrate the fixed point of a function is for you to have a rummage around in that box in the cupboard under the stairs (or the attic, or the basement, or wherever you keep all your college/university stuff) and dig out your scientific calculator.

In my case, this is a Casio *fx*-180P (yes, I know, such devices can now only be found either in museums of computing or on my desk...)

Set the calculator to work in radians instead of degrees, then type in any number - say 3 - and press the `cos` button. Now press `cos` again, and again, and again and again and again and again and again...

You get the idea.

As you keep pressing the `cos` button, you'll notice that the result starts to converge to an unchanging value. On the limited precision of my 1980s calculator, this convergence happens after about 60 presses of `cos`.

What's happened is that the cosine function has reached a point where the value going in is the same as the value coming out. In other words, the cosine function has reached its *"fixed point"*, which in the case of the cosine function is approximately `0.739085133` radians. In other words, `cos(0.739085133) = 0.739085133`.
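The calculator experiment is easy to reproduce in code; this little sketch simply iterates `Math.cos` enough times for the value to settle:

```javascript
// Simulate pressing the cos button over and over
const fixedPointOfCos = start => {
  let x = start;
  for (let i = 0; i < 200; i++) x = Math.cos(x);
  return x;  // ≈ 0.739085133, whatever the starting value
};

fixedPointOfCos(3);
```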

Other well-known functions also have fixed points: for instance, the sine function has one fixed point, `sin(0) = 0`, and the square root function has two fixed points, `sqrt(0) = 0` and `sqrt(1) = 1`.

There are also some functions that have no fixed points: for instance, the values coming out of the function `x => x + 1` will never converge, because whatever value of `x` is supplied can never equal itself plus one.

This is just a simple example to illustrate the concept that a function can have a static value known as a fixed point. The way we will be using fixed points however, is not with simple values such as decimal fractions, but with entire functions. The key point to understand here is that a *function itself* can be the fixed point of some other function.

Instead of thinking of `x` as some simple value such as a decimal fraction, we should think of it as a function that takes another function as input. This type of function is known as a **functional**.

We will call this functional `Y`, and its purpose is to take some function `f` as a parameter and, from it, generate that function's fixed point.

This does not change the concept of a fixed point at all, but it does restate the question.

To continue with the cosine example, instead of describing the fixed point of the `cos` function in terms that ask *"What value should I pass to cos in order to get the same value back?"*, we are now asking *"What functional Y is there that, when passed the cos function, will give us back the fixed point of cos?"*

Here's that example written out in code. What functional `Y` is there that, when passed the function `Math.cos` as a parameter, returns the value `0.739085133`?
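We'll derive the real answer in due course; but just to make the question concrete, here is a deliberately naive numeric `Y` that works only for functions whose repeated application happens to converge (this is an illustration of the question, not the Y-Combinator itself):

```javascript
// A naive numeric "fixed point finder": iterate f from a seed value.
// This only converges for well-behaved functions such as Math.cos;
// for a function like x => x + 1 it would simply run away.
const Y = f => {
  let x = 1;
  for (let i = 0; i < 1000; i++) x = f(x);
  return x;
};

Y(Math.cos);  // ≈ 0.739085133
```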

But we're jumping ahead of ourselves here...

## 3) Functional Refactoring

### 3.1) Tennent's Correspondence Principle

In his 1981 book *Principles of Programming Languages*, R.D. Tennent lays out a very simple correspondence: namely, that if an expression is wrapped in a function that takes no arguments and is then immediately invoked, this will have no effect on the value of the expression.

This is a true refactoring because we have altered the structure of the code without altering its functionality.
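In JavaScript, the principle looks like this: wrapping the body of a function in a zero-argument function that is immediately invoked changes nothing about its behaviour:

```javascript
const incr        = x => x + 1;
const incrWrapped = x => (() => x + 1)();  // same expression, wrapped in a
                                           // no-argument function and
                                           // immediately invoked

incr(6);         // 7
incrWrapped(6);  // 7
```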

Here we can first refactor the `multiplier` and `adder` functions, then we can refactor the `double` and `add5` functions (derived from `multiplier` and `adder`), and finally, we can refactor the `compose` function.
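Assuming the definitions used earlier in this series, the refactoring might look like this (each function body is simply wrapped in `(() => ...)()`):

```javascript
// Original definitions (assumed from the earlier blogs in this series)
const multiplier = p => q => q * p;
const adder      = p => q => q + p;
const compose    = fn1 => fn2 => val => fn2(fn1(val));

// The same functions after applying Tennent's Correspondence Principle
const multiplierT = p => q => (() => q * p)();
const adderT      = p => q => (() => q + p)();
const composeT    = fn1 => fn2 => val => (() => fn2(fn1(val)))();

// Behaviour is completely unchanged
composeT(multiplierT(2))(adderT(5))(3);  // still (3 * 2) + 5 = 11
```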

What would happen if we passed one of these self-executing functions a parameter?

Well, we would have to do two things:

- Invent a name for the parameter (say `z`)
- Supply a parameter value when the function is invoked
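Concretely, the wrapper gains a parameter `z`, and the outer function passes its own argument straight through to it:

```javascript
const incr     = x => x + 1;
const incrViaZ = x => (z => z + 1)(x);  // z receives x's value at invocation time

incrViaZ(6);  // 7, exactly as before
```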

*"More futility!"* I hear you cry. Well, just bear with me for a moment...

### 3.2) Function Wrapping

In much the same way that Tennent's Correspondence Principle is able to leave the value of an expression unchanged by wrapping it inside a self-executing function, we can perform a similar refactoring by wrapping a function inside another function that we then immediately execute.

For simplicity, we'll put all the functions back to their original definition, then we'll apply function wrapping.
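Assuming the same `multiplier` and `adder` definitions as before, function wrapping looks like this: each derived function is wrapped in an outer function of the same shape, so the inner expression is no longer evaluated up front.

```javascript
const multiplier = p => q => q * p;
const adder      = p => q => q + p;

// Original derived functions: multiplier(2) is evaluated immediately
const double = multiplier(2);
const add5   = adder(5);

// The same functions, wrapped: multiplier(2) and adder(5) are now
// only evaluated when the outer wrapper is actually invoked
const doubleWrapped = val => multiplier(2)(val);
const add5Wrapped   = val => adder(5)(val);

doubleWrapped(7);  // 14
add5Wrapped(7);    // 12
```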

Each time we apply a refactoring, we rearrange the code in such a way that its structure is altered without changing any functionality.

But before you run screaming from the room at the addition of all these supposedly futile layers of indirection, consider the effect function wrapping has on the function being wrapped.

Since we are wrapping a function within another function, that inner function will not be evaluated until the wrapper function is invoked. Therefore, we have implemented a delay tactic. The evaluation of the inner function is now delayed until the wrapper function is called.

For languages that employ lazy evaluation, this type of refactoring is superfluous, but in strict evaluation languages such as JavaScript or Ruby, this is a very important tool. It prevents the evaluation of functions that would otherwise cause the language interpreter to disappear down the rabbit hole of infinite recursion.
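Here's a tiny sketch of that delay tactic. The commented-out expression would never terminate if evaluated; wrapped inside a function, it is perfectly harmless until somebody actually calls it:

```javascript
// (x => x(x))(x => x(x)) applies a function to itself, forever;
// evaluating it directly would blow the stack in a strict language

// Wrapped in a zero-argument function, nothing is evaluated yet:
const delayedLoop = () => (x => x(x))(x => x(x));

// delayedLoop is just an inert function value; the infinite recursion
// would only start if we ever invoked delayedLoop()
typeof delayedLoop;  // "function"
```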

### 3.3) Inline Function Definition

Finally, let's look at the inline definition of functions. The point of this type of refactoring is more to demonstrate one of the underlying principles of Lambda Calculus, rather than to demonstrate any programming technique useful to humans. Nonetheless, it does produce some amusing (and certainly cryptic) results.^{1}

The principle of inline function definition means that wherever a function name is seen, you can replace that name with the function definition itself. In other words, we treat the function name as if it were a macro, and then substitute the macro's code every time we see the macro name.

Let's remove all the previous types of refactoring, and simply apply the concept of inline function definition.

Similarly, wherever the functions `double` and `add5` are used, we replace their names with their definitions.

Continue the substitution process on the `compose` function...

Heck, we've come this far, why not get rid of `doubleAndAdd5`?

Yeah, let's do it!
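Carrying the substitution all the way through, every name is replaced by its definition until only anonymous functions remain (the inlined expression below is the one discussed at the end of this post):

```javascript
// Named version:  compose(multiplier(2))(adder(5))
// Inlined:        every name replaced by the definition it stands for
const doubleAndAdd5 =
  (fn1 => fn2 => val => fn2(fn1(val)))  // compose
    ((p => q => q * p)(2))              // multiplier(2), i.e. double
    ((p => q => q + p)(5));             // adder(5), i.e. add5

doubleAndAdd5(3);  // (3 * 2) + 5 = 11
```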

Whilst the ongoing process of inline function substitution makes the coding less and less human-readable (and therefore, less useful to us), it illustrates the point that from nothing more than simple functions that take one parameter and return a single value, we can build up much more complex calculations.

### Summary of Inline Function Definition

Apart from generating seemingly unintelligible code, the remarkable thing about inline function definition is that the final result has been constructed from nothing more than the combination of three *very* simple concepts:

- Expression evaluation: `x + 1`
- Function definition: `x => x + 1`
- Function application: `(x => x + 1)(2)`

The fact that we might give a function such as `x => x + 1` the name `incr` is merely a convenience for us humans.

If you've ever used a language that supports macros, then inline function substitution is exactly what happens when, at compile time, a macro name is replaced by the code it represents.

## So Where Does This Leave Us?

Now that you have some practical experience with this low-level style of functional programming, you also have, whether you realise it or not, some working experience with Lambda Calculus; specifically, the final result above is an example of a *Lambda expression*.

Within the basic definition of Lambda Calculus, the only data type is the function. Furthermore, there are no variable assignments anywhere. The only binding of values to names happens when parameters are passed to a function.

We know from the derivations shown in this blog that although the above coding appears to be completely cryptic, it is actually composed from little functions that, taken individually, are in fact very simple. However, if you were to show this code to someone who did not have the background understanding contained in this blog, they might understandably state that the code is not *easy to understand*. (See my earlier blog on avoiding the popular confusion that can be created by thinking that "simple" and "easy" are synonyms)

I'm not suggesting we should start writing code like this - far from it - but the important thing to understand is that by combining (named or anonymous) functions in the way we have done, we have achieved the goal described by Tony Hoare of creating a software design so simple, that it obviously contains no errors - as opposed to creating a software design so complex that it contains no obvious errors. (See the Mindshift: Part 4 blog for details)

It is clear that in a real-world programming scenario, no one would develop a unit of code to "double a number then add 5" in the form of `(fn1=>fn2=>val=>fn2(fn1(val)))((p=>q=>q*p)(2))((p=>q=>q+p)(5))`; that would be ridiculous. Instead, you would follow the original approach of building up function calls using placeholder names such as `multiplier` or `add5` or `compose`.

But the final step of using inline function definitions is exactly what happens at compile time whenever macro expansion takes place.

So, after a considerable amount of preparatory leg-work, we now have the necessary tools to take a normal recursive function and refactor it to use only anonymous functions.

This will be the topic of my next blog.

Chris W

All source code from the Mindshift blog series can be found on GitHub

### Footnotes

1) As some of you may have noticed, up till now, I have stoically avoided any mention of mathematical concepts such as the Lambda Calculus. This is because 20 years of software training experience has taught me that students can learn a new concept far more effectively through practical, hands-on experience, than they can by seeing it only in a sterile and abstract form.

I dislike the style of teaching that starts with something abstract, and from it, tries to derive something concrete. To me, this teaching style does a great disservice to the majority of students because only about 20% of the population can grasp a new principle as a purely abstract concept. The remaining 80% need to handle it, work with it, and see it in action before the learning process becomes effective. This is why I always start with concrete examples and then from them, derive whatever abstract representations may be useful.

However, we will soon get to the point where we can look directly at the Lambda Calculus; and I trust that by the time we do, these preparatory blogs will have given you a good working understanding of how abstract functions can be used to define anything that is "effectively computable".

In other words, I'm teaching you the Lambda Calculus by showing it to you implemented in practice, rather than as a sterile (and therefore meaningless) abstraction.