Down with sines!

Suppose you have a reasonable continuous function {f} on some interval, say {f(x)=e^x} on {[-1,1]}, and you want to approximate it by a trigonometric polynomial. A straightforward approach is to write

\displaystyle    f(x) \approx \frac12 A_0+\sum_{n=1}^N (A_n\cos n \pi x +B_n \sin n \pi x)

where {A_n} and {B_n} are the Fourier coefficients:

\displaystyle    A_n= \int_{-1}^1 e^x \cos \pi n x \,dx = (-1)^n \frac{e-e^{-1}}{1+ \pi^2 n^2 }

\displaystyle    B_n= \int_{-1}^1 e^x \sin \pi n x \,dx = (-1)^{n-1} \frac{\pi n (e-e^{-1})}{1+ \pi^2 n^2 }

(Integration can be done with the standard calculus torture device.) With {N=4}, we get

\displaystyle    e^{x} \approx 1.175 -0.216\cos\pi x +0.679 \sin \pi x +0.058\cos 2\pi x -0.365 \sin 2\pi x -0.026\cos 3\pi x +0.247 \sin 3\pi x +0.015\cos 4\pi x   -0.186 \sin 4\pi x

which, frankly, is not a very good deal for the price.

Would not buy again
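For readers who want to check the numbers, here is a minimal Python sketch of this partial sum, using the closed-form coefficients above (the function name and grid are my choices, not from the post):

```python
import numpy as np

def fourier_partial_sum(x, N):
    """Partial sum A_0/2 + sum (A_n cos(n pi x) + B_n sin(n pi x)) for e^x."""
    c = np.e - 1/np.e                       # e - e^{-1}; this also equals A_0
    s = np.full_like(x, c/2)
    for n in range(1, N + 1):
        A = (-1)**n * c / (1 + (np.pi*n)**2)
        B = (-1)**(n-1) * np.pi*n * c / (1 + (np.pi*n)**2)
        s += A*np.cos(n*np.pi*x) + B*np.sin(n*np.pi*x)
    return s

x = np.linspace(-1, 1, 2001)
print(np.abs(np.exp(x) - fourier_partial_sum(x, 4)).max())
# The largest errors sit at the endpoints x = +-1, where the periodic
# extension of e^x jumps.
```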

Still using the standard Fourier expansion formulas, one can improve the approximation by shifting the function to {[0,2]} and expanding it into a cosine Fourier series:

\displaystyle    f(x-1) \approx \frac12 A_0+\sum_{n=1}^N A_n\cos \frac{n \pi x}{2}

where

\displaystyle    A_n= \int_{0}^2 e^{x-1} \cos \frac{\pi n x}{2} \,dx = \begin{cases} \dfrac{e-e^{-1}}{1+ \pi^2 k^2 }  \quad & n=2k \\ {} \\   -\dfrac{4(e+e^{-1})}{4+ \pi^2 (2k-1)^2 } \quad & n = 2k-1 \end{cases}

Then replace {x} with {x+1} to shift the interval back. With {N=4}, the partial sum is

\displaystyle    e^x \approx 1.175 -0.890\cos \frac{\pi(x+1)}{2} +0.216\cos \pi(x+1)-0.133\cos \frac{3\pi(x+1)}{2} +0.058\cos 2\pi (x+1)

which gives a much better approximation with fewer coefficients to calculate.

When the sines don’t get in the way
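The same kind of numerical check for the cosine series, under the same caveats (the helper name is mine), confirms that the error shrinks with half as many coefficients:

```python
import numpy as np

def cosine_partial_sum(x, N):
    """N-term cosine series of e^x on [-1, 1], via the shift to [0, 2]."""
    t = x + 1                                  # the series lives on [0, 2]
    s = np.full_like(x, (np.e - 1/np.e)/2)     # A_0/2
    for n in range(1, N + 1):
        if n % 2 == 0:                         # n = 2k
            A = (np.e - 1/np.e) / (1 + (np.pi*(n//2))**2)
        else:                                  # n = 2k - 1
            A = -4*(np.e + 1/np.e) / (4 + (np.pi*n)**2)
        s += A*np.cos(n*np.pi*t/2)
    return s

x = np.linspace(-1, 1, 2001)
print(np.abs(np.exp(x) - cosine_partial_sum(x, 4)).max())
```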

To see what is going on, one has to look beyond the interval on which {f} is defined. The first series actually approximates the periodic extension of {f}, which is discontinuous because the endpoint values are not equal:

Periodic extension of a continuous function may be discontinuous

Cosines, being even, approximate the symmetric periodic extension of {f}, which is continuous whenever {f} is.

Symmetric periodic extension preserves continuity

Discontinuities hurt the quality of Fourier approximation more than the lack of smoothness does.

Just for laughs I included the pure sine approximation, also with {N=4}.

Down with sines

Integrate by parts twice and solve for the integral

The dreaded calculus torture device that works for exactly two integrals, {\int e^{ax}\sin bx\,dx} and {\int e^{ax}\cos bx\,dx}.

Actually, no. A version of it (with one integration by parts) works for {\int x^n\,dx}:

\displaystyle    \int x^n\,dx = x^n x - \int x\, d(x^n) = x^{n+1} - n \int x^n\,dx

hence (assuming {n\ne -1})

\displaystyle  \int x^n\,dx = \frac{x^{n+1}}{n+1} +C

Yes, this is more of a calculus joke. A more serious example comes from Fourier series.

The functions {\sin nx}, {n=1,2,\dots}, are orthogonal on {[0,\pi]}, in the sense that

\displaystyle    \int_0^\pi \sin nx \sin mx \,dx =0 , \quad m\ne n

This is usually proved using a trigonometric identity that converts the product to a sum. But double integration by parts gives a nicer proof, because no obscure identities are needed. No boundary terms will appear because the sines vanish at both endpoints:

\displaystyle    \int_0^\pi \sin nx \sin mx \,dx = \frac{n}{m} \int_0^\pi \cos nx \cos mx \,dx = \frac{n^2}{m^2} \int_0^\pi \sin nx \sin mx \,dx

All integrals here must vanish because {n^2/m^2\ne 1}. As a bonus, we get the orthogonality of cosines, {\int_0^\pi \cos nx \cos mx \,dx=0}, with no additional effort.
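Not that the proof needs it, but a quick quadrature check is easy to run (a throwaway sketch using scipy; the pairs tested are my choices):

```python
import numpy as np
from scipy.integrate import quad

for m, n in [(1, 2), (2, 5), (3, 4)]:
    s, _ = quad(lambda x: np.sin(m*x)*np.sin(n*x), 0, np.pi)
    c, _ = quad(lambda x: np.cos(m*x)*np.cos(n*x), 0, np.pi)
    print(f"m={m}, n={n}: sin-sin = {s:.1e}, cos-cos = {c:.1e}")
# every printed value vanishes up to quadrature error
```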

The double integration by parts is also a more conceptual proof, because it gets to the heart of the matter: eigenvectors of a symmetric matrix (operator) that correspond to different eigenvalues are orthogonal. The trigonometric form is incidental, the eigenfunction property is essential. Let’s try this one more time, for the mixed boundary value problem {f(a)=0}, {f'(b)=0}. Suppose that {f} and {g} satisfy the boundary conditions, {f''=\lambda f}, and {g''=\mu g}. Since {fg'} and {f'g} vanish at both endpoints, we can pass the primes easily:

\displaystyle    \int_a^b fg= \frac{1}{\mu}\int_a^b fg'' = -\frac{1}{\mu}\int_a^b f'g' = \frac{1}{\mu}\int_a^b f''g = \frac{\lambda}{\mu} \int_a^b fg

If {\lambda\ne \mu}, all integrals must vanish.
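As a concrete instance, take {[a,b]=[0,1]}: the functions {\sin((k+1/2)\pi x)} satisfy both boundary conditions and {f''=\lambda f} with distinct eigenvalues (this choice of eigenfunctions is mine, a standard computation not spelled out above). A numerical check of their orthogonality:

```python
import numpy as np
from scipy.integrate import quad

def eigenfunction(k):
    # satisfies f(0) = 0 and f'(1) = 0, with eigenvalue -((k + 1/2) pi)^2
    return lambda x: np.sin((k + 0.5)*np.pi*x)

for j, k in [(0, 1), (0, 2), (1, 3)]:
    val, _ = quad(lambda x: eigenfunction(j)(x)*eigenfunction(k)(x), 0, 1)
    print(f"<f_{j}, f_{k}> = {val:.1e}")   # zero up to quadrature error
```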

Dirichlet vs Fejér

Convolution of a continuous function {f} on the circle {\mathbb T=\mathbb R/(2\pi \mathbb Z)} with the Fejér kernel {\displaystyle F_N(x)=\frac{1-\cos (N+1)x}{(N+1)(1-\cos x)}} is guaranteed to produce trigonometric polynomials that converge to {f} uniformly as {N\rightarrow\infty}. For the Dirichlet kernel {\displaystyle D_N(x)=\frac{\sin((N+1/2)x)}{\sin(x/2)}} this is not the case: the sequence may fail to converge to {f} even pointwise. The underlying reason is that {\int_{\mathbb T} |D_N|\rightarrow \infty }, while the Fejér kernel, being positive, has constant {L^1} norm. Does this mean that Fejér’s kernel is to be preferred for approximation purposes?

Let’s compare the performance of both kernels on the function {f(x)=2\pi^2 x^2-x^4}, which is reasonably nice: {f\in C^2(\mathbb T)}. Convolution with {D_2} yields {\displaystyle \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)D_2(x-t)\,dt = \frac{7\pi^4}{15} -48 \cos x +3 \cos 2x }. The trigonometric polynomial is in blue, the original function in red:

Convolution with D_2

I’d say this is a very good approximation.
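For those who want to reproduce these plots, here is a minimal sketch of both kernels, with the convolution computed by midpoint-rule quadrature (grid sizes and function names are my choices, not the code behind the post):

```python
import numpy as np

def dirichlet(N, u):
    return np.sin((N + 0.5)*u) / np.sin(u/2)

def fejer(N, u):
    return (1 - np.cos((N + 1)*u)) / ((N + 1)*(1 - np.cos(u)))

def convolve(f, kernel, N, x, M=4096):
    """(1/2pi) * integral of f(t) K_N(x - t) over [-pi, pi], midpoint rule."""
    t = -np.pi + (np.arange(M) + 0.5)*2*np.pi/M   # midpoints avoid u = 0 exactly
    return np.array([np.mean(f(t)*kernel(N, xi - t)) for xi in x])

f = lambda x: 2*np.pi**2*x**2 - x**4
x = np.linspace(-np.pi, np.pi, 201)
fD2 = convolve(f, dirichlet, 2, x)   # matches 7pi^4/15 - 48 cos x + 3 cos 2x
fF2 = convolve(f, fejer, 2, x)       # matches 7pi^4/15 - 32 cos x + cos 2x
```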

Now try the Fejér kernel, also with {N=2}. The polynomial is {\displaystyle \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)F_2(x-t)\,dt = \frac{7\pi^4}{15} - 32 \cos x + \cos 2x }

Convolution with F_2

This is not good at all.

And even with {N=20} the Fejér approximation is not as good as the Dirichlet one with merely {N=2}.

Convolution with F_20

The performance of {F_{50}} is comparable to that of {D_2}. Of course, a {50}-term approximation is not what one normally wants to use. And it still has visible deviation near the origin, where the function {f} is {C^\infty} smooth:

Convolution with F_50

In contrast, the Dirichlet kernel with {N=4} gives a low-degree polynomial
{\displaystyle \frac{7\pi^4}{15} -48 \cos x +3 \cos 2x -\frac{16}{27}\cos 3x+\frac{3}{16}\cos 4x} that approximates {f} to within the resolution of the plot:

Convolution with D_4

What we have here is the trigonometric version of Biased and unbiased mollification. Convolution with {D_N} amounts to truncation of the Fourier series at index {N}. Therefore, it reproduces the trigonometric polynomials of low degrees precisely. But {F_N} performs soft thresholding: it multiplies the {n}th Fourier coefficient of {f} by {(1-|n|/(N+1))^+}. In particular, it transforms {\cos x} into {(N/(N+1))\cos x}, introducing an error of order {1/N} — a pretty big one. Since this error is built into the kernel, it limits the rate of convergence no matter how smooth the function {f} is. Such is the price that must be paid for positivity.
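In coefficient terms, both approximations are instances of one scheme: multiply the {n}th Fourier coefficient by a weight {w_n} and sum. A sketch of the two weightings applied to {f} (the cosine coefficients {48(-1)^n/n^4} come from the standard series for {x^2} and {x^4}, and are consistent with the polynomials displayed above; names are mine):

```python
import numpy as np

# cosine coefficients of f(x) = 2 pi^2 x^2 - x^4 on [-pi, pi]:
# a_0 = 7 pi^4/15 and a_n = 48 (-1)^n / n^4 for n >= 1
a = lambda n: 7*np.pi**4/15 if n == 0 else 48.0*(-1)**n/n**4

def weighted_sum(N, x, taper):
    """sum of w_n a_n cos(nx): w_n = 1 gives f*D_N, the Fejer taper gives f*F_N."""
    w = lambda n: 1.0 if not taper else max(0.0, 1 - n/(N + 1))
    return sum(w(n)*a(n)*np.cos(n*x) for n in range(N + 1))

x = np.linspace(-np.pi, np.pi, 1001)
f = 2*np.pi**2*x**2 - x**4
for N in (2, 4, 20):
    errD = np.abs(f - weighted_sum(N, x, taper=False)).max()
    errF = np.abs(f - weighted_sum(N, x, taper=True)).max()
    print(f"N={N:2d}: Dirichlet error {errD:.3f}, Fejer error {errF:.3f}")
```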

This reminds me of a parenthetical remark by G. B. Folland in Real Analysis (2nd ed., page 264):

if one wants to approximate a function {f\in C(\mathbb T)} uniformly by trigonometric polynomials, one should not count on partial sums {S_mf} to do the job; the Cesàro means work much better in general.

Right, for ugly “generic” elements of {C(\mathbb T)} the Fejér kernel is a safer option. But for decently behaved functions the Dirichlet kernel wins by a landslide. The function above was {C^2}-smooth; as a final example I take {f(x)=x^2}, which is merely Lipschitz on {\mathbb T}. The original function is in red, {f*D_4} is in blue, and {f*F_4} is in green.

Dirichlet wins again

Added: the Jackson kernel {J_{2N}} is the square of {F_{N}}, normalized. I use {2N} as the index because squaring doubles the degree. Here is how it approximates {f(x)=2\pi^2 x^2-x^4}:

Convolution with J_2
Convolution with J_4
Convolution with J_20

The Jackson kernel performs somewhat better than {F_N}, because the coefficient of {\cos x} is off by {O(1/N^2)}. Still not nearly as good as the non-positive Dirichlet kernel.
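To put the {O(1/N^2)} claim in numbers: squaring a kernel convolves its coefficient sequence with itself, so the Jackson multipliers are the autocorrelation of the Fejér weights, normalized to equal {1} at {n=0}. A sketch (my computation, not from the post):

```python
import numpy as np

def jackson_multipliers(N):
    """Multipliers that J_{2N} = (normalized) F_N^2 applies to Fourier coefficients."""
    w = 1 - np.abs(np.arange(-N, N + 1))/(N + 1)   # coefficients of F_N
    m = np.convolve(w, w)                          # coefficients of F_N^2
    return m / m[2*N]                              # normalize the n = 0 entry

for N in (1, 5, 10):
    gap_J = 1 - jackson_multipliers(N)[2*N + 1]    # error in the cos x coefficient
    gap_F = 1/(2*N + 1)                            # same gap for F_{2N}, equal degree
    print(f"N={N:2d}: Jackson gap {gap_J:.4f}, Fejer gap {gap_F:.4f}")
```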