If a function $f$ is differentiable at a point $a$, then it admits a good linear approximation at small scales. Precisely: for every $\epsilon>0$ there is $\delta>0$ and a linear function $\ell$ such that $|f(x)-\ell(x)|\le \epsilon\delta$ for all $|x-a|\le \delta$. Having $\epsilon$ multiplied by $\delta$ means that the deviation from linearity is small compared to the (already small) scale $\delta$ on which the function is considered.
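This is easy to check numerically. A small sketch (my own example, not from the post), using $f(x)=e^x$ at $a=0$: the maximal deviation from the tangent line on $[-\delta,\delta]$, divided by $\delta$, goes to zero.

```python
import numpy as np

# Numerical illustration (my own example, not from the post): for a function
# differentiable at 0, the deviation from the tangent line on [-delta, delta]
# shrinks faster than delta itself.
f = np.exp                      # f(x) = e^x, with f(0) = 1 and f'(0) = 1
for delta in [1e-1, 1e-2, 1e-3]:
    x = np.linspace(-delta, delta, 10001)
    tangent = 1 + x             # tangent line at 0
    ratio = np.max(np.abs(f(x) - tangent)) / delta
    print(f"delta={delta:g}  max|f - tangent|/delta = {ratio:.2e}")
```

The printed ratios decrease roughly like $\delta/2$, as expected from the second-order Taylor term of $e^x$.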

For example, this is a linear approximation to a function $f$ near a point $a$ at scale $\delta$.

As is done on this graph, we can always take $\ell$ to be the secant line to the graph of $f$ based on the endpoints of the interval of consideration. This is because if $g$ is another line for which $|f(x)-g(x)|\le \epsilon\delta$ holds on the interval, then $|\ell(x)-g(x)|\le \epsilon\delta$ at the endpoints (where $\ell=f$), and therefore on all of the interval (the function $|\ell-g|$ is convex). Hence the secant line approximates $f$ within $2\epsilon\delta$.
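A numeric sanity check of the secant-line choice (again my own sketch with $f(x)=e^x$, $a=0$, not from the post): the secant line's deviation is also $o(\delta)$.

```python
import numpy as np

# Sketch (my own example): the secant line over [a - delta, a + delta]
# also approximates f within o(delta).
f = np.exp
a = 0.0
for delta in [1e-1, 1e-2, 1e-3]:
    x0, x1 = a - delta, a + delta
    slope = (f(x1) - f(x0)) / (x1 - x0)
    x = np.linspace(x0, x1, 10001)
    secant = f(x0) + slope * (x - x0)          # line through the endpoints
    ratio = np.max(np.abs(f(x) - secant)) / delta
    print(f"delta={delta:g}  max|f - secant|/delta = {ratio:.2e}")
```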

Here is a non-differentiable function, $f(x)=x\sin(\ln|x|)$ (with $f(0)=0$), that obviously fails the linear approximation property at $x=0$.

(By the way, this post is mostly about me trying out SageMathCloud.) A nice thing about $f(x)=x\sin(\ln|x|)$ is **self-similarity**: $f(e^{2\pi}x)=e^{2\pi}f(x)$, with the similarity factor $e^{2\pi}$. This implies that no matter how far we zoom in on the graph at $x=0$, the graph will not get any closer to linear.
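Assuming, as reconstructed above, that the function is $f(x)=x\sin(\ln|x|)$ with similarity factor $e^{2\pi}$, the self-similarity can be checked numerically (my own verification script, not from the post):

```python
import numpy as np

# Numeric check of the self-similarity f(e^{2*pi} x) = e^{2*pi} f(x),
# assuming the reconstructed function f(x) = x*sin(ln|x|).
def f(x):
    return x * np.sin(np.log(np.abs(x)))

a = np.exp(2 * np.pi)               # similarity factor
x = np.linspace(0.001, 1.0, 1000)
print(np.allclose(f(a * x), a * f(x)))   # zooming by the factor a repeats the graph
```

The identity holds because $\sin(\ln|x| + 2\pi) = \sin(\ln|x|)$, so it is exact up to floating-point rounding.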

I like $x\sin(\ln|x|)$ more than its famous, but not self-similar, cousin $x\sin(1/x)$, pictured below.

Interestingly, the linear approximation property **does not** imply differentiability. The function $f(x)=x\sin(\ln\ln(1/|x|))$ (extended by $f(0)=0$) has this property at $x=0$, but it lacks a derivative there since $f(x)/x=\sin(\ln\ln(1/|x|))$ does not have a limit as $x\to 0$. Here is how it looks.

Let’s look at the graph at a fairly small scale

and compare it to the graph at a much smaller scale

Well, that was disappointing. Let’s use math instead. Fix $\epsilon\in(0,1)$ and consider the difference of slopes $h(x)=\sin(\ln\ln(1/|x|))-\sin(\ln\ln(1/\delta))$ for $0<|x|\le\delta$. Since $\sin$ is $1$-Lipschitz, $|h(x)|\le \ln\ln(1/|x|)-\ln\ln(1/\delta)$. Rewriting it as

$$|h(x)|\le \ln\frac{\ln(1/|x|)}{\ln(1/\delta)}$$

shows $|h(x)|\le\ln(1+\epsilon)\le\epsilon$ as long as $|x|\ge\delta^{1+\epsilon}$. Choose $\delta$ so that $2\delta^{\epsilon}\le\epsilon$ and define $g(x)=x\sin(\ln\ln(1/\delta))$. Then for $\delta^{1+\epsilon}\le|x|\le\delta$ we have $|f(x)-g(x)|=|x|\,|h(x)|\le\epsilon\delta$, and for $|x|\le\delta^{1+\epsilon}$ the trivial bound $|f(x)-g(x)|\le 2|x|\le 2\delta^{1+\epsilon}\le\epsilon\delta$ suffices.
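The estimate can be confirmed numerically. A verification sketch (mine, not from the post), taking the function to be $f(x)=x\sin(\ln\ln(1/|x|))$ and $g(x)=x\sin(\ln\ln(1/\delta))$ as reconstructed above:

```python
import numpy as np

# Verification sketch: g(x) = sin(ln ln(1/delta)) * x approximates
# f(x) = x*sin(ln ln(1/|x|)) within a small multiple of delta on (0, delta],
# even though the slope of g keeps oscillating as delta shrinks.
def f(x):
    return x * np.sin(np.log(np.log(1 / x)))

errs = []
for delta in [1e-5, 1e-10, 1e-20]:
    k = np.sin(np.log(np.log(1 / delta)))       # slope of g at scale delta
    x = np.geomspace(delta**2, delta, 200001)   # log-spaced sample of (0, delta]
    errs.append(np.max(np.abs(f(x) - k * x)) / delta)
    # below delta**2 the trivial bound |f(x) - k*x| <= 2|x| is negligible
    print(f"delta={delta:g}  slope={k:+.3f}  max|f - g|/delta={errs[-1]:.4f}")
```

The printed slopes swing back and forth while the normalized error steadily decreases, which is exactly the point: good linear approximation at every scale, but no single limiting slope.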

Thus, $f$ can be well approximated by linear functions near $0$; it’s just that the approximating linear function has to depend on the scale $\delta$ on which the approximation is made: its slope $\sin(\ln\ln(1/\delta))$ does not have a limit as $\delta\to 0$.

The linear approximation property does not become apparent until extremely small scales. Here is the graph at one such scale.