Down with sines!

Suppose you have a reasonable continuous function {f} on some interval, say {f(x)=e^x} on {[-1,1]}, and you want to approximate it by a trigonometric polynomial. A straightforward approach is to write

\displaystyle    f(x) \approx \frac12 A_0+\sum_{n=1}^N (A_n\cos n \pi x +B_n \sin n \pi x)

where {A_n} and {B_n} are the Fourier coefficients:

\displaystyle    A_n= \int_{-1}^1 e^x \cos \pi n x \,dx = (-1)^n \frac{e-e^{-1}}{1+ \pi^2 n^2 }

\displaystyle    B_n= \int_{-1}^1 e^x \sin \pi n x \,dx = (-1)^{n-1} \frac{\pi n (e-e^{-1})}{1+ \pi^2 n^2 }

(Integration can be done with the standard Calculus torture device.) With {N=4}, we get

\displaystyle    e^{x} \approx 1.175 -0.216\cos\pi x +0.679 \sin \pi x +0.058\cos 2\pi x -0.365 \sin 2\pi x -0.026\cos 3\pi x +0.247 \sin 3\pi x +0.015\cos 4\pi x   -0.186 \sin 4\pi x

which, frankly, is not a very good deal for the price.

Would not buy again
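If you would rather not crank the torture device by hand, the closed forms are easy to sanity-check numerically. Here is a minimal Scilab sketch (my addition, not from the original post) that recomputes {A_n} and {B_n} with the trapezoidal rule; the numerical and closed-form values should agree to several decimal places.

// Numerical check of the Fourier coefficients of exp(x) on [-1,1]
x = linspace(-1, 1, 20001);
f = exp(x);
for n = 1:4
    An = inttrap(x, f.*cos(%pi*n*x));                    // numerical A_n
    Bn = inttrap(x, f.*sin(%pi*n*x));                    // numerical B_n
    Ac = (-1)^n*(%e - 1/%e)/(1 + %pi^2*n^2);             // closed-form A_n
    Bc = (-1)^(n-1)*%pi*n*(%e - 1/%e)/(1 + %pi^2*n^2);   // closed-form B_n
    disp([n, An, Ac, Bn, Bc])
end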

Still using the standard Fourier expansion formulas, one can improve the approximation by shifting the function to {[0,2]} and expanding it into a cosine Fourier series:

\displaystyle    f(x-1) \approx \frac12 A_0+\sum_{n=1}^N A_n\cos \frac{n \pi x}{2}

where

\displaystyle    A_n= \int_{0}^2 e^{x-1} \cos \frac{\pi n x}{2} \,dx = \begin{cases} \dfrac{e-e^{-1}}{1+ \pi^2 k^2 }  \quad & n=2k \\ {} \\   -\dfrac{4(e+e^{-1})}{4+ \pi^2 (2k-1)^2 } \quad & n = 2k-1 \end{cases}

Then replace {x} with {x+1} to shift the interval back. With {N=4}, the partial sum is

\displaystyle    e^x \approx 1.175 -0.890\cos \frac{\pi(x+1)}{2} +0.216\cos \pi(x+1)-0.133\cos \frac{3\pi(x+1)}{2} +0.058\cos 2\pi (x+1)

which gives a much better approximation with fewer coefficients to calculate.

When the sines don't get in the way
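To put a number on "much better", here is a minimal Scilab sketch (my addition, using the closed-form coefficients above; the even and odd cases of the cosine coefficient are combined into the single formula {4((-1)^n e-e^{-1})/(4+\pi^2 n^2)}) that evaluates both {N=4} partial sums on {[-1,1]} and prints their maximum errors; the second number should come out several times smaller.

// Maximum errors of the two N=4 partial sums for exp(x) on [-1,1]
N = 4;
x = linspace(-1, 1, 2001);
S = 0.5*(%e - 1/%e)*ones(x);    // full Fourier partial sum
C = 0.5*(%e - 1/%e)*ones(x);    // cosine partial sum, in the shifted variable x+1
for n = 1:N
    S = S + (-1)^n*(%e - 1/%e)/(1 + %pi^2*n^2)*cos(%pi*n*x) ...
          + (-1)^(n-1)*%pi*n*(%e - 1/%e)/(1 + %pi^2*n^2)*sin(%pi*n*x);
    C = C + 4*((-1)^n*%e - 1/%e)/(4 + %pi^2*n^2)*cos(%pi*n*(x+1)/2);
end
disp([max(abs(exp(x) - S)), max(abs(exp(x) - C))])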

To see what is going on, one has to look beyond the interval on which {f} is defined. The first series actually approximates the periodic extension of {f}, which is discontinuous because the endpoint values are not equal:

Periodic extension of a continuous function may be discontinuous

Cosines, being even, approximate the symmetric periodic extension of {f}, which is continuous whenever {f} is.

Symmetric periodic extension preserves continuity

Discontinuities hurt the quality of a Fourier approximation much more than a mere lack of smoothness does: a jump makes the coefficients decay like {1/n} (compare the {B_n} above), while the corners of the even extension still allow {1/n^2} decay (compare the odd-indexed {A_n} of the cosine series).

Just for laughs I included the pure sine approximation, also with {N=4}.

Down with sines

Peanut allergy and polar interpolation

Draw a closed curve passing through the points {(\pm 3,0)} and {(0,\pm 1)}: more specifically, from (3,0) to (0,1) to (-3,0) to (0,-1) and back to (3,0).

How hard could it be?

I’ll wait.

… Done?

Okay, you may read further.

This is an interpolation problem. The geometry of the problem does not allow for curves of the form {y=f(x)}. But polar graphs {\rho=f(\theta)} should work. (Aside: in {(x,y)} the variable usually taken to be independent is listed first; in {(\rho,\theta)} it is listed second. Is consistent notation too much to ask for? Yes, I know the answer…) Since {f} must be {2\pi}-periodic, it is natural to interpolate {(3,0),(1,\pi/2),(3,\pi),(1,3\pi/2)} by a trigonometric polynomial. The polynomial {f(\theta)=2+\cos 2\theta} does the job. But does it do it well?
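For the record, here is one way to see where that polynomial comes from. With four equally spaced nodes it is natural to look for an interpolant of the form

\displaystyle    f(\theta)=a_0+a_1\cos\theta+b_1\sin\theta+a_2\cos 2\theta

(the term {\sin 2\theta} vanishes at all four nodes, so it is dropped). The conditions {f(0)=f(\pi)=3} and {f(\pi/2)=f(3\pi/2)=1} force {a_1=b_1=0}, {a_0+a_2=3}, and {a_0-a_2=1}, hence {a_0=2} and {a_2=1}.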

I did not order peanuts

This is not what I would draw. My picture would be more like the ellipse with the given points as vertices:

Ellipse

The unnecessary concavity of the peanut graph comes from the second derivative {f''} being too large at the points of minimum (and too small at the points of maximum). We need a function with flat minima and sharp maxima. Here is the comparison between {f(\theta)=2+\cos 2\theta} (red) and the function that describes the ellipse in polar coordinates (blue):

We want blue, not red.

I thought of pre-processing: apply some monotone function {\varphi\colon (0,\infty)\rightarrow \mathbb R} to the {\rho}-values, interpolate, and then compose the interpolant with {\varphi^{-1}}. The function {\varphi} should have a large derivative near {0}, so that its inverse has a small derivative there, flattening the minima.

The first two candidates {\varphi(r)=\log r} and {\varphi(r)=1/r} did not improve the picture enough. But when I tried {\varphi(r)=1/r^2}, the effect surprised me. Interpolation between {(1/9,0),(1,\pi/2),(1/9,\pi),(1,3\pi/2)} yields {(5-4\cos 2\theta)/9}, which after application of {\varphi^{-1}} produces {\rho = 3/\sqrt{5-4\cos 2\theta}}, exactly the ellipse pictured above.
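In hindsight, the exact match is no accident: for the ellipse with semi-axes {3} and {1},

\displaystyle    \frac{1}{\rho^2}=\frac{\cos^2\theta}{9}+\sin^2\theta=\frac{5-4\cos 2\theta}{9}

so {1/\rho^2} is itself a trigonometric polynomial of degree {2}, and interpolating it at the four nodes reproduces it exactly.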

Next, I tried a less symmetric example: a curve through {(5,0), (0,2), (-3,0), (0,-2)}.

Egg without preprocessing
Preprocessed egg

Again, the second approach produces a more natural-looking curve.

Finally, a curve with five-fold symmetry, with {\rho} ranging from {2} to {5}.

Star without preprocessing
Star with preprocessing

The interpolation and plotting were done in Scilab by invoking the functions given below with commands such as polar1([3,1,3,1]) or polar2([2,5,2,5,2,5,2,5,2,5]). Because I used equally spaced interpolation nodes, the coefficients of the interpolating polynomials are nothing but the (inverse) discrete Fourier transform of the given values. The first function interpolates the radius {\rho} itself.

function polar1(R)
 // Interpolate the radii R, given at equally spaced angles, by a
 // trigonometric polynomial and plot the resulting polar curve.
 clf()
 p = fft(R,1)                    // inverse DFT: coefficients of the interpolant
 t = 0:.01:2*%pi
 rho = zeros(t)
 for i = 1:length(p)
     rho = rho + real(p(i)*exp(-%i*(i-1)*t))   // evaluate the trigonometric polynomial
 end
 polarplot(t,rho,style=5)
 t = resize_matrix(linspace(0,2*%pi,length(R)+1),1,length(R))   // the interpolation nodes
 plot(R.*cos(t), R.*sin(t), 'o')               // mark the given points
endfunction
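As a quick check of the DFT remark above (my example, not from the post), the peanut data returns exactly the coefficients of the interpolant {2+\cos 2\theta} found by hand:

disp(real(fft([3,1,3,1], 1)))   // should print 2. 0. 1. 0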

The second function includes preprocessing: it interpolates {1/\rho^2} and then raises the interpolant to power {-1/2}.

function polar2(R)
 // Same as polar1, but interpolate 1/rho^2 instead of rho, then
 // take the power -1/2 to recover the radius.
 clf()
 p = fft(R.^(-2),1)              // inverse DFT of the preprocessed values
 t = 0:.01:2*%pi
 rho = zeros(t)
 for i = 1:length(p)
     rho = rho + real(p(i)*exp(-%i*(i-1)*t))   // interpolant of 1/rho^2
 end
 rho = max(rho,0.01*ones(rho))   // keep the interpolant positive before taking the root
 polarplot(t,rho.^(-1/2),style=5)
 t = resize_matrix(linspace(0,2*%pi,length(R)+1),1,length(R))   // the interpolation nodes
 plot(R.*cos(t), R.*sin(t), 'o')               // mark the given points
endfunction

Added: one more comparison, {\rho^{1}} vs {\rho^{-2}} vs {\rho^{-3}}; the last one loses convexity again.

Raising to -3 is going too far.

And then to {\rho^{-7}}:

Power -7
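For experimenting with other exponents, here is a sketch of a variant of polar2 with the power as a parameter (my addition, not part of the original code); for instance, polar_pow([3,1,3,1],-3) interpolates {1/\rho^3} for the peanut data.

function polar_pow(R, q)
 // Interpolate rho^q at equally spaced nodes, then take the q-th root back.
 // polar_pow(R,-2) is the same as polar2; polar_pow(R,1) essentially matches polar1.
 clf()
 p = fft(R.^q,1)
 t = 0:.01:2*%pi
 g = zeros(t)
 for i = 1:length(p)
     g = g + real(p(i)*exp(-%i*(i-1)*t))
 end
 g = max(g,0.01*ones(g))         // keep the interpolant positive, as in polar2
 polarplot(t,g.^(1/q),style=5)
 t = resize_matrix(linspace(0,2*%pi,length(R)+1),1,length(R))
 plot(R.*cos(t), R.*sin(t), 'o')
endfunction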