# Re: “How many sides does a circle have?”

This post is inspired by the following story told by JDH at Math.SE.

My third-grade son came home a few weeks ago with similar homework questions:

How many faces, edges and vertices do the following
have?

• cube
• cylinder
• cone
• sphere

Like most mathematicians, my first reaction was that for the latter objects the question would need a precise definition of face, edge and vertex, and isn’t really sensible without such definitions.

But after talking about the problem with numerous people, conducting a kind of social/mathematical experiment, I observed something intriguing. What I observed was that none of my non-mathematical friends and acquaintances had any problem with using an intuitive geometric concept here, and they all agreed completely that the answers should be

• cube: 6 faces, 12 edges, 8 vertices
• cylinder: 3 faces, 2 edges, 0 vertices
• cone: 2 faces, 1 edge, 1 vertex
• sphere: 1 face, 0 edges, 0 vertices

Indeed, these were also the answers desired by my son’s teacher (who is a truly outstanding teacher). Meanwhile, all of my mathematical colleagues hemmed and hawed about how we can’t really answer, and what does “face” mean in this context anyway, and so on; most of them wanted ultimately to say that a sphere has infinitely many faces and infinitely many vertices and so on. For the homework, my son wrote an explanation giving the answers above, but also explaining that there was a sense in which some of the answers were infinite, depending on what was meant.

At a party this past weekend full of mathematicians and philosophers, it was a fun game to first ask a mathematician the question, who invariably made various objections and refusals and said it made no sense and so on, and then the non-mathematical spouse would forthrightly give a completely clear account. There were many friendly disputes about it that evening.

Let’s track down this intuitive geometric concept that non-mathematicians possess. We are given a set ${E\subset \mathbb R^n}$ and a point ${p\in E}$, and try to figure out whether ${p}$ is a vertex, a part of an edge, or a part of a face. The answer should depend only on the shape of the set near ${p}$.

It is natural to say that a vector ${v}$ is tangent to ${E}$ at ${p}$ if going along ${v}$ we stay close to the set. Formally, the condition is ${\lim_{t\to 0+} t^{-1}\,\mathrm{dist}\,(p+tv,E)=0}$. Notice that the limit is one-sided: if ${v}$ is tangent, ${-v}$ may or may not be tangent.
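The one-sided limit in this definition is easy to probe numerically. Here is a small sketch (my own illustration, not part of the original argument) that checks two directions at the north pole of the unit sphere, where ${\mathrm{dist}(q,E)=\big||q|-1\big|}$:

```python
import math

def dist_to_sphere(q):
    # distance from a point q in R^3 to the unit sphere: | |q| - 1 |
    return abs(math.sqrt(sum(c*c for c in q)) - 1.0)

def is_tangent(p, v, t=1e-6):
    # approximate the one-sided limit t^{-1} dist(p + t v, E) as t -> 0+
    q = [pc + t*vc for pc, vc in zip(p, v)]
    return dist_to_sphere(q) / t < 1e-3

p = (0.0, 0.0, 1.0)              # north pole
print(is_tangent(p, (1, 0, 0)))  # horizontal direction: True (tangent)
print(is_tangent(p, (0, 0, 1)))  # radial direction: False (not tangent)
```

As expected, every horizontal direction passes the test while the radial one fails, so the tangent cone at the north pole contains a plane and the rank there is 2.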

The set of all tangent vectors to ${E}$ at ${p}$ is denoted by ${T_pE}$ and is called the tangent cone. It is indeed a cone in the sense of being invariant under scaling. This set contains the zero vector, but need not be a linear space. Let’s say that the rank of point ${p}$ is ${k}$ if ${T_pE}$ contains a linear space of dimension ${k}$ but no linear space of dimension ${k+1}$.

Finally, define a rank ${k}$ stratum of ${E}$ as a connected component of the set of all points of rank ${k}$.

If ${E}$ is the surface of a polyhedron, we get the familiar concepts of vertices (rank 0 strata), edges (rank 1) and faces (rank 2). For each of the homework solids the answer agrees with the opinion of the non-mathematical crowd. Take the cone as an example:

Cone

At the vertex the tangent cone to the cone is… a cone. It contains no nontrivial linear space, hence the rank is 0. This is indeed a vertex.

Along the edge of the base the tangent cone is the union of two halfplanes:

Tangent cone at an edge point

Here the rank is 1: the tangent cone contains a line, but no planes.

Finally, at every point of smoothness the tangent cone is the tangent plane, so the rank is 2. The set of such points has two connected components, separated by the circular edge.

So much for the cone. As for the circle mentioned in the title, I regrettably find myself in agreement with Travis.

More seriously: the surface of a convex body is a classical example of an Alexandrov space (metric space of curvature bounded below in the triangle comparison sense). Perelman proved that any Alexandrov space can be stratified into topological manifolds. Lacking an ambient vector space, one obtains tangent cones by taking the Gromov-Hausdorff limit of blown-up neighborhoods of ${p}$. The tangent cone has no linear structure either — it is also a metric space — but it may be isometric to the product of ${\mathbb R^k}$ with another metric space. The maximal ${k}$ for which the tangent cone splits off ${\mathbb R^k}$ becomes the rank of ${p}$.

Recently, Colding and Naber showed that the above approach breaks down for spaces which have only Ricci curvature bounds instead of triangle-comparison curvature. More precisely, their examples are metric spaces that arise as a noncollapsed limit of manifolds with a uniform lower Ricci bound. In this setting tangent cones are no longer uniquely determined by ${p}$, and they show that different cones at the same point may have different ranks.


# Angel dust of matrices

A matrix ${A}$ with real entries has positive characteristic polynomial if ${\det(tI-A)\ge 0}$ for all real ${t}$. For example,

$\displaystyle A=\begin{pmatrix} 1 & -5 \\ 3 & -2 \end{pmatrix}$

has this property: ${\det (tI-A)=(t-1)(t+2)+15=t^2+t+13}$. For brevity, let’s say that ${A}$ is a PCP matrix.

Clearly, any PCP matrix must be of even size. Incidentally, this implies that the algebraists’ characteristic polynomial ${\det (tI-A)}$ and the analysts’ characteristic polynomial ${\det (A-tI)}$ coincide.

In general, there is no reason for the PCP property to be preserved under either addition or multiplication of matrices. But there are some natural rings of PCP matrices, such as

• complex numbers ${\begin{pmatrix} a & b \\ -b & a \end{pmatrix} }$

• quaternions ${\begin{pmatrix} a & b & c & d \\ -b & a & -d & c \\ -c & d & a & -b \\ -d & -c & b & a \end{pmatrix}}$

A ${2\times 2}$ matrix is PCP if and only if its eigenvalues are either complex or repeated. This is equivalent to ${(\mathrm{tr}\, A)^2 \le 4\det A}$. In general, a matrix of even size is PCP if and only if it has no real eigenvalues of odd algebraic multiplicity.
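To make this concrete, here is a quick sanity check (numpy assumed; the brute-force grid test is my own illustration, not a serious algorithm):

```python
import numpy as np

def is_pcp(A, ts=np.linspace(-100, 100, 2001)):
    # brute-force test of det(tI - A) >= 0 on a grid of real t;
    # for large |t| the even-degree leading term t^n dominates anyway
    n = A.shape[0]
    return all(np.linalg.det(t*np.eye(n) - A) >= -1e-9 for t in ts)

A = np.array([[1., -5.], [3., -2.]])
print(is_pcp(A))                              # True: det(tI - A) = t^2 + t + 13
print(np.trace(A)**2 <= 4*np.linalg.det(A))   # True: the 2x2 criterion
```

The diagonal matrix with entries ${1,-1}$ fails both tests, as it should: its characteristic polynomial ${(t-1)(t+1)}$ is negative on ${(-1,1)}$.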

Is this post merely an excuse for a 1995 flashback: Ангельская Пыль (“Angel Dust”) by Ария (Aria)?

# Diana the Huntress and curve-fitting

Diana the Huntress was created by Anna Hyatt Huntington in 1934.

Diana

The sculpture was moved recently, and the guardrail was added very recently (this week, I think). Walking around it, one can clearly see that the guardrail is not round. What shape does it have? Direct measurements are difficult because the sculpture gets in the way.

It is natural to conjecture that the shape is an ellipse. A wonderful property of the ellipse is that it remains an ellipse in any perspective. This is despite the fact that the rectangle bounding the ellipse can be projected into an arbitrary convex quadrilateral.

Thus, the conjecture can be tested directly on the photograph: if the curve in the photo is an ellipse, then its original form is also an ellipse. And conversely. Since the rail has nonzero thickness, I worked with its upper outer edge. The ratio of the minor to the major axis was measured at ${b/a\approx 0.454}$, which corresponds to the eccentricity ${e=\sqrt{1-(b/a)^2}\approx 0.89}$. Also, the angle between the major axis and the horizontal was measured at ${\approx 0.0956}$ radian.
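The quoted eccentricity is easy to reproduce from the measured axis ratio; a two-line check:

```python
import math

ratio = 0.454                  # measured axis ratio b/a from the photograph
e = math.sqrt(1 - ratio**2)    # eccentricity e = sqrt(1 - (b/a)^2)
print(round(e, 2))             # 0.89
```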

I created such an ellipse in fooplot using the polar form ${\displaystyle r=\frac{a(1-e^2)}{1+e\cos (\theta-\theta_0)}}$.

Ellipse

After adjusting the pixel size, it fit the outer edge very well:

Does it fit?

And this is how math solves Real World Problems.

# Google Scholar metrics 2013

Earlier this month Google released the 2013 edition of Google Scholar metrics. Here are the top 20 journals (and non-journals) in the Mathematical Analysis category:

1. Nonlinear Analysis: Theory, Methods & Applications
2. Journal of Mathematical Analysis and Applications
3. arXiv Analysis of PDEs (math.AP)
4. Journal of Functional Analysis
5. Fixed Point Theory and Applications
6. arXiv Functional Analysis (math.FA)
7. SIAM Journal on Mathematical Analysis
8. Journal of Differential Equations
9. Abstract and Applied Analysis
10. Journal of Inequalities and Applications
11. Annales de l’Institut Henri Poincare (C) Non Linear Analysis
12. arXiv Classical Analysis and ODEs (math.CA)
13. Calculus of Variations and Partial Differential Equations
14. Discrete and Continuous Dynamical Systems
15. arXiv Operator Algebras (math.OA)
16. Indiana University Mathematics Journal
17. Journal de Mathématiques Pures et Appliquées
18. Communications in Partial Differential Equations
19. arXiv Complex Variables (math.CV)
20. ESAIM: Control, Optimisation and Calculus of Variations

That’s one weird (and Elsevier-infested) list.

The ratings are based on the total number of citations, not citations per article. The more bloated a journal is, the better it looks. Maintaining an industry of Ctrl-C Ctrl-V generalizations helps too. JMAA scores high on both counts.

# Covering points with caps

Sometimes easy problems should be solved in a hard way.

Problem 1. Given a finite subset of a circle, find a shortest arc containing the set.

Easy way. Sort the points by polar angle; find a maximal gap between points; return its complement as an answer.

Shortest arc
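The easy way fits in a few lines. A sketch (angles assumed given in radians) returning the length of the shortest containing arc:

```python
import math

def shortest_arc(angles):
    """Length of the shortest arc of a circle containing the given angles.

    Sort the polar angles, find the largest gap between consecutive points
    (including the wrap-around gap), and return its complement.
    """
    a = sorted(angles)
    gaps = [hi - lo for lo, hi in zip(a, a[1:])]
    gaps.append(2*math.pi - (a[-1] - a[0]))   # wrap-around gap
    return 2*math.pi - max(gaps)

print(shortest_arc([0.0, 1.0, 2.0]))  # 2.0: the arc from angle 0 to angle 2
```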

Hard way. Not only is this approach harder, it also requires an additional assumption: the set is contained in some open semi-circle.

Draw the tangent line at every point ${x_k}$ of the set. It divides the plane into two half-planes. Focus on the closed half-plane ${P_k}$ that does not contain the circle. Find the point ${x^*\in \bigcap P_k}$ that is closest to the center of the circle. The line ${L}$ from the center through ${x^*}$ is the line of symmetry of the required arc, whose size is determined by the maximal distance of the given points to ${L}$.

Intersection of halfplanes determined by tangent lines

Finding ${x^*}$ is a quadratic minimization problem ${\|x\|^2\rightarrow \min}$ with linear constraints, which is not so bad. Still, this is obviously the harder way.

Problem 2. Given a finite subset of a sphere, find the smallest spherical cap containing the set.

A moment’s reflection is enough to see that the easy way is no longer available. What was previously the hard way now becomes a reasonable solution which works in every dimension (provided that the set is contained in an open hemisphere). Assuming the sphere is a unit sphere, all you do is minimize ${\|x\|^2}$ subject to linear constraints ${\langle x,x_k\rangle\ge 1 }$ for every ${k}$.

Why this works: ${\{y:\langle x,y\rangle\ge 1 \}}$ is a half-space that intersects the sphere along a spherical cap. The smaller ${\|x\|}$ is, the greater the distance from this half-space to the origin, and the smaller the spherical cap it cuts off. The constraints ensure that the cap contains the given points.
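A minimal sketch of this minimization, assuming scipy is available; the SLSQP solver handles the linear inequality constraints. The two-point example on the unit circle has the obvious answer: cap centered at the midpoint direction with angular radius ${\pi/4}$.

```python
import numpy as np
from scipy.optimize import minimize

def smallest_cap(points):
    # minimize |x|^2 subject to <x, x_k> >= 1 for all given unit vectors x_k
    # (assumes the points lie in an open hemisphere)
    pts = np.asarray(points, dtype=float)
    cons = [{'type': 'ineq', 'fun': (lambda x, p=p: x @ p - 1)} for p in pts]
    res = minimize(lambda x: x @ x, x0=pts.mean(axis=0) * 2,
                   constraints=cons, method='SLSQP')
    x = res.x
    # cap center is x/|x|; the angular radius psi satisfies |x| cos(psi) = 1
    return x / np.linalg.norm(x), np.arccos(1 / np.linalg.norm(x))

center, psi = smallest_cap([(1, 0), (0, 1)])   # two points on the unit circle
print(center, psi)   # center along (1,1)/sqrt(2), psi = pi/4
```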

One might expect the “flat” version of Problem 2, with plane instead of sphere, to be easier. Yet, I don’t see a way to reduce the flat problem to quadratic minimization with linear constraints.

Problem 3. Given a finite subset of the plane, find the smallest disk containing the set.

This is an instance of the problem of finding the Chebyshev center of a set.

# Improving the Wallis product

The Wallis product for ${\pi}$, as seen on Wikipedia, is

${\displaystyle 2\prod_{k=1}^\infty \frac{4k^2}{4k^2-1} = \pi \qquad \qquad (1)}$

Historical significance of this formula notwithstanding, one has to admit that this is not a good way to approximate ${\pi}$. For example, the product up to ${k=10}$ is

${\displaystyle 2\,\frac{2\cdot 2\cdot 4\cdot 4\cdot 6\cdot 6\cdot 8\cdot 8 \cdot 10 \cdot 10\cdot 12\cdot 12\cdot 14\cdot 14\cdot 16\cdot 16\cdot 18 \cdot 18\cdot 20\cdot 20}{1\cdot 3\cdot 3\cdot 5 \cdot 5\cdot 7\cdot 7\cdot 9\cdot 9\cdot 11\cdot 11\cdot 13\cdot 13\cdot 15\cdot 15\cdot 17\cdot 17\cdot 19\cdot 19\cdot 21} =\frac{137438953472}{44801898141} }$

And all we get for this effort is the lousy approximation ${\pi\approx \mathbf{3.0677}}$.

But it turns out that (1) can be dramatically improved with a little tweak. First, let us rewrite partial products in (1) in terms of double factorials. This can be done in two ways: either

${\displaystyle 2\prod_{k=1}^n \frac{4k^2}{4k^2-1} = (4n+2) \left(\frac{(2n)!!}{(2n+1)!!}\right)^2 \qquad \qquad (2)}$

or

${\displaystyle 2\prod_{k=1}^n \frac{4k^2}{4k^2-1} = \frac{2}{2n+1} \left(\frac{(2n)!!}{(2n-1)!!}\right)^2 \qquad \qquad (3)}$

Seeing how badly (2) underestimates ${\pi}$, it is natural to bump it up: replace ${4n+2}$ with ${4n+3}$:

${\displaystyle \pi \approx b_n= (4n+3) \left(\frac{(2n)!!}{(2n+1)!!}\right)^2 \qquad \qquad (4)}$

Now with ${n=10}$ we get ${\mathbf{3.1407}}$ instead of ${\mathbf{3.0677}}$. The error is down by two orders of magnitude, and all we had to do was to replace the factor of ${4n+2=42}$ with ${4n+3=43}$. In particular, the size of numerator and denominator hardly changed:

${\displaystyle b_{10}=43\, \frac{2\cdot 2\cdot 4\cdot 4\cdot 6\cdot 6\cdot 8\cdot 8 \cdot 10 \cdot 10\cdot 12\cdot 12\cdot 14\cdot 14\cdot 16\cdot 16\cdot 18 \cdot 18\cdot 20\cdot 20}{3\cdot 3\cdot 5 \cdot 5\cdot 7\cdot 7\cdot 9\cdot 9\cdot 11\cdot 11\cdot 13\cdot 13\cdot 15\cdot 15\cdot 17\cdot 17\cdot 19\cdot 19\cdot 21\cdot 21} }$

Approximation (4) differs from (2) by the additional term ${\left(\frac{(2n)!!}{(2n+1)!!}\right)^2}$, which decreases to zero. Therefore, it is not obvious whether the sequence ${b_n}$ is increasing. To prove that it is, observe that the ratio ${b_{n+1}/b_n}$ is

${\displaystyle \frac{4n+7}{4n+3}\left(\frac{2n+2}{2n+3}\right)^2}$

which is greater than 1 because

${\displaystyle (4n+7)(2n+2)^2 - (4n+3)(2n+3)^2 = 1 >0 }$

Sweet cancellation here. Incidentally, it shows that if we used ${4n+3+\epsilon}$ instead of ${4n+3}$, the sequence would overshoot ${\pi}$ and no longer be increasing.

The formula (3) can be similarly improved. The fraction ${2/(2n+1)}$ is secretly ${4/(4n+2)}$, which should be replaced with ${4/(4n+1)}$. The resulting approximation for ${\pi}$

${\displaystyle c_n = \frac{4}{4n+1} \left(\frac{(2n)!!}{(2n-1)!!}\right)^2 \qquad \qquad (5)}$

is about as good as ${b_n}$, but it approaches ${\pi}$ from above. For example, ${c_{10}\approx \mathbf{3.1425}}$.
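The quoted values are quick to verify. This sketch computes ${b_{10}}$ and ${c_{10}}$ from (4) and (5) in exact rational arithmetic:

```python
from fractions import Fraction
from math import pi

def double_fact(n):
    # n!! = n(n-2)(n-4)...; empty product for n <= 1
    r = 1
    while n > 1:
        r *= n
        n -= 2
    return r

def b(n):  # approximation (4), from below
    return (4*n + 3) * Fraction(double_fact(2*n), double_fact(2*n + 1))**2

def c(n):  # approximation (5), from above
    return Fraction(4, 4*n + 1) * Fraction(double_fact(2*n), double_fact(2*n - 1))**2

mid = (b(10) + c(10)) / 2
print(float(b(10)), float(c(10)), float(mid))  # approx 3.1407, 3.1425, 3.1416
```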

The proof that ${c_n}$ is decreasing is familiar: the ratio ${c_{n+1}/c_n}$ is

${\displaystyle \frac{4n+1}{4n+5}\left(\frac{2n+2}{2n+1}\right)^2}$

which is less than 1 because

${\displaystyle (4n+1)(2n+2)^2 - (4n+5)(2n+1)^2 = -1 <0 }$

Sweet cancellation once again.

Thus, ${b_n<\pi<c_n}$ for all ${n}$. The midpoint of this containing interval provides an even better approximation: for example, ${(b_{10}+c_{10})/2 \approx \mathbf{3.1416}}$. The plot below displays the quality of approximation as logarithm of the absolute error:

• yellow dots show the error of Wallis partial products (2)-(3)
• blue is the error of ${b_n}$
• red is for ${c_n}$
• black is for ${(b_n+c_n)/2}$

And all we had to do was to replace ${4n+2}$ with ${4n+3}$ or ${4n+1}$ in the right places.

# Real zeros of sine Taylor polynomials

The more terms of Taylor series ${\displaystyle \sin x = x-\frac{x^3}{3!}+ \frac{x^5}{5!}- \cdots }$ we use, the more resemblance we see between the Taylor polynomial and the sine function itself. The first-degree polynomial matches one zero of the sine, and gets the slope right. The third-degree polynomial has three zeros in about the right places.

Third degree, three zeros

The fifth-degree polynomial will of course have … wait a moment.

Fifth degree, only one zero

Since all four critical points are in the window, there are no real zeros outside of our view. Adding the fifth-degree term not only fails to increase the number of zeros to five, it even drops it back to the level of ${T_1(x)=x}$. How odd.

Since the sine Taylor series converges uniformly on bounded intervals, for every ${ A }$ there exists ${ n }$ such that ${\max_{[-A,A]} |\sin x-T_n(x)|<1 }$. Then ${ T_n }$ will have the same sign as ${ \sin x }$ at the maxima and minima of the latter. Consequently, it will have about ${ 2A/\pi }$ zeros on the interval ${[-A,A] }$. Indeed, the intermediate value theorem guarantees that many; and the fact that ${T_n'(x) \approx \cos x }$ on ${ [-A,A]}$ will not allow for extraneous zeros within this interval.

Using the Taylor remainder estimate and Stirling's approximation, we find ${A\approx (n!)^{1/n} \approx n/e }$. Therefore, ${ T_n }$ will have about ${ 2n/(\pi e) }$ real zeros at about the right places. What happens when ${|x| }$ is too large for the Taylor remainder estimate to be effective, we can't tell.

Let's just count the zeros, then. Sage online makes it very easy:

sineroots = [[2*n-1,len(sin(x).taylor(x,0,2*n-1).roots(ring=RR))] for n in range(1,51)]
scatter_plot(sineroots) 
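Without Sage, the same count can be sketched in plain numpy; counting real roots by a tolerance on the imaginary part is my shortcut and may misbehave at very high degrees:

```python
import math
import numpy as np

def real_zero_count(deg, tol=1e-8):
    # coefficients of the sine Taylor polynomial of odd degree `deg`,
    # highest power first, as np.roots expects
    coeffs = [(-1)**((k - 1)//2) / math.factorial(k) if k % 2 else 0.0
              for k in range(deg, -1, -1)]
    roots = np.roots(coeffs)
    return int(np.sum(np.abs(roots.imag) < tol))

print([real_zero_count(d) for d in (1, 3, 5)])  # [1, 3, 1]
```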

Roots of sine Taylor polynomials

The up-and-down pattern in the number of zeros makes for a neat scatter plot. How close is this data to the predicted number ${ 2n/(\pi e) }$? Pretty close.

scatter_plot(sineroots,facecolor='#eeee66') + plot(2*n/(pi*e),(n,1,100))

Compared to 2n/(pi e)

The slope of the blue line is ${ 2/(\pi e) \approx 0.2342 }$; the (ir)rationality of this number is unknown. Thus, just under a quarter of the zeros of ${ T_n }$ are expected to be real when ${ n }$ is large.

The actual number of real zeros tends to exceed the prediction (by only a few) because some Taylor polynomials have real zeros in the region where they no longer follow the function. For example, ${ T_{11} }$ does this:

Spurious zero around x=7

Richard S. Varga and Amos J. Carpenter wrote a series of papers titled Zeros of the partial sums of ${ \cos z }$ and ${\sin z }$ in which they classify real zeros into Hurwitz (which follow the corresponding trigonometric function) and spurious. They give the precise count of the Hurwitz zeros: ${1+2\lfloor n/(\pi e)\rfloor }$ for the sine and ${2\lfloor n/(\pi e)+1/2\rfloor }$ for the cosine. The total number of real roots does not appear to admit such an explicit formula. It is the sequence A012264 in the OEIS.

# Condition number and maximal rotation angle

The condition number ${K(A)}$ of an invertible matrix ${A}$ is the product of norms ${\|A\|\,\|A^{-1}\|}$, or, in deciphered form, the maximum of the ratio ${|Au|/|Av|}$ taken over all unit vectors ${u,v}$. It comes up a lot in numerical linear algebra and in optimization problems. One way to think of the condition number is in terms of the image of the unit ball under ${A}$, which is an ellipsoid. The ratio of longest and shortest axes of the ellipsoid is ${K(A)}$.

But for positive definite matrices ${A}$ the condition number can also be understood in terms of rotation. First, note that the positivity of ${\langle Av,v\rangle }$ says precisely that the angle between ${v}$ and ${Av}$ is always less than ${\pi/2}$. Let ${\gamma}$ be the maximum of such angles taken over all ${v\ne 0}$ (or over all unit vectors, the same thing). Then

$\displaystyle K(A)=\frac{1+\sin\gamma }{1-\sin\gamma} \ \ \ \ \ \ \ \ \ (1)$

One can also make (1) a little less geometric by introducing ${\delta=\delta(A)}$ as the largest number such that ${\langle Av,v\rangle \ge \delta|Av|\,|v|}$ holds for all vectors ${v}$. Then ${\delta=\cos \gamma}$, and (1) takes the form

$\displaystyle K=\frac{1+\sqrt{1-\delta^2}}{1-\sqrt{1-\delta^2}} = \left(\frac{1+\sqrt{1-\delta^2}}{\delta}\right)^2 \ \ \ \ \ \ \ \ \ \ (2)$
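Formula (2) is easy to test numerically. A sketch for a diagonal positive definite matrix, estimating ${\delta}$ by sampling unit vectors (for ${\mathrm{diag}(4,1)}$ the exact values are ${K=4}$ and ${\delta=4/5}$):

```python
import numpy as np

A = np.diag([4.0, 1.0])                      # SPD with condition number 4
K = np.linalg.cond(A)                        # ratio of extreme singular values

# delta = min over unit v of <Av, v>/(|Av| |v|), estimated by dense sampling
t = np.linspace(0, 2*np.pi, 100000)
V = np.stack([np.cos(t), np.sin(t)])         # unit vectors as columns
AV = A @ V
delta = np.min(np.sum(AV * V, axis=0) / np.linalg.norm(AV, axis=0))

rhs = ((1 + np.sqrt(1 - delta**2)) / delta)**2   # right-hand side of (2)
print(K, rhs)                                # both close to 4
```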

Could this be of any use? The inequality

$\displaystyle \langle Av,v\rangle \ge \delta\,|Av|\,|v| \ \ \ \ \ \ \ \ \ \ (3)$

is obviously preserved under addition of matrices. Therefore, it is preserved by integration. In particular, if the Hessian of a twice differentiable convex function ${u}$ satisfies (3) at every point, then integration along a line segment from ${x}$ to ${y}$ yields

$\displaystyle \langle \nabla u(x)- \nabla u(y),x-y\rangle \ge \delta\,|\nabla u(x)-\nabla u(y)|\,|x-y| \ \ \ \ \ \ \ \ \ \ (4)$

Conversely, if ${u}$ is a twice differentiable convex function such that (4) holds, then (by differentiation) its Hessian satisfies (3), and therefore admits a uniform bound on the condition number by virtue of (2). Thus, for such functions inequality (4) is equivalent to uniform boundedness of the condition number of the Hessian.

But the Hessian itself does not appear in (4). Condition (4) expresses “uniform boundedness of the condition number of the Hessian” without requiring ${u}$ to be twice differentiable. As a simple example, take ${u(x_1,x_2)=|x|^{4/3}}$. The Hessian matrix is

$\displaystyle \frac{4}{9}|x|^{-8/3} \begin{pmatrix} x_1^2+3x_2^2 & -2x_1x_2 \\ -2x_1x_2 & 3x_1^2+x_2^2 \end{pmatrix}$

The eigenvalues are ${\frac{4}{3}|x|^{-2/3}}$ and ${\frac{4}{9}|x|^{-2/3}}$. Thus, even though the eigenvalues blow up at the origin and decay at infinity, the condition number of the Hessian remains equal to ${3}$. Well, except that the second derivatives do not exist at the origin. But if we use the form (4) instead, with ${\delta = \sqrt{3}/2}$, then non-differentiability becomes a non-issue.
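A quick numerical confirmation: the matrix factor of the Hessian has eigenvalue ratio 3 at every point, independently of ${x}$ (the positive scalar in front does not affect the ratio).

```python
import numpy as np

def hessian_ratio(x1, x2):
    # Hessian of |x|^{4/3} equals a positive scalar times this matrix,
    # so the condition number is the eigenvalue ratio of M alone
    M = np.array([[x1**2 + 3*x2**2, -2*x1*x2],
                  [-2*x1*x2, 3*x1**2 + x2**2]])
    lam = np.linalg.eigvalsh(M)    # ascending order
    return lam[1] / lam[0]

print(hessian_ratio(1.0, 2.0))     # 3.0
print(hessian_ratio(-0.3, 7.0))    # 3.0
```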

Let’s prove (2). It suffices to work in two dimensions, because both ${K(A)}$ and ${\delta(A)}$ are determined by the restrictions of ${A}$ to two-dimensional subspaces. In two dimensions we can represent the linear map ${A}$ as ${z\mapsto \alpha z+\beta \bar z}$ for some complex numbers ${\alpha,\beta}$. Actually, ${\alpha}$ is real and positive because ${A}$ is symmetric positive definite. As ${z}$ runs through unimodular complex numbers, the maximum of ${|\alpha z+\beta \bar z|}$ is ${\alpha+|\beta|}$ and the minimum is ${\alpha-|\beta|}$. Therefore, ${K(A)=\frac{1+|\beta|/\alpha}{1-|\beta|/\alpha}}$.

When ${|z|=1}$, the angle ${\gamma}$ that the vector ${\alpha z+\beta \bar z}$ forms with ${z}$ is equal to the argument of ${\bar z (\alpha z+\beta \bar z)=\alpha+\beta \bar z^2}$. The latter is maximized when ${0, \alpha, \alpha+\beta \bar z^2}$ form a right triangle with hypotenuse ${\alpha}$.

Proof by picture

Hence, ${\sin\gamma = |\beta|/\alpha}$. This proves ${K(A)=\frac{1+|\beta|/\alpha}{1-|\beta|/\alpha}}$, and (2) follows.

There are similar observations in the literature on quasiconformality of monotone maps, including an inequality similar to (2) (for general matrices), but I have not seen either (1) or (2) stated as an identity for positive definite matrices.

# Convexity of polar curves

Everybody knows the second derivative test for the convexity of Cartesian curves ${y=y(x)}$. What is the convexity test for polar curves ${r=r(\theta)}$? Google search brought up Robert Israel’s answer on Math.SE: the relevant inequality is

$\displaystyle \mathcal{C}[r]:=r^2+2(r')^2-r\,r''\ge 0 \ \ \ \ \ (1)$

But when using it, one should be aware of the singularity at the origin. For example, ${r=1+\cos \theta}$ satisfies $\mathcal{C}[r] = 3(1+\cos \theta)\ge 0$ but the curve is not convex: it’s the cardioid.
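The cardioid computation is a one-liner with sympy (assumed available):

```python
import sympy as sp

theta = sp.symbols('theta')
r = 1 + sp.cos(theta)                                 # the cardioid
C = r**2 + 2*sp.diff(r, theta)**2 - r*sp.diff(r, theta, 2)
print(sp.simplify(C))    # simplifies to 3(1 + cos theta), which is >= 0
```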

FooPlot graphics today: fooplot.com

The formula (1) was derived for ${r> 0}$; the points with ${r=0}$ must be investigated directly. Actually, it is easy to see that when ${r }$ has a strict local minimum with value ${0}$, the polar curve has an inward cusp and therefore is not convex.

As usual, theoretical material is followed by an exercise.

Exercise: find all real numbers ${p}$ such that the polar curve ${r=(1+\cos 2\theta)^p}$ is convex.

All values ${p>0}$ are ruled out by the cusp formed at ${\theta=\pi/2}$. For ${p=0}$ we get a circle, obviously convex. When ${p<0}$, some calculations are in order:

$\displaystyle \mathcal{C}[r] = (1+4p+4p^2+(1-4p^2)\cos 2\theta)(1+\cos 2\theta)^{2p-1}$

For this to be nonnegative for all ${\theta}$, we need ${1+4p+4p^2\ge |1-4p^2|}$, which amounts to two inequalities: ${p(4+8p) \ge 0}$ and ${2+4p\ge 0}$. Since ${p<0}$, the first inequality forces ${p\le -1/2}$, while the second forces ${p\ge -1/2}$.

Answer: ${p=0}$ and ${p=-1/2}$.
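A numeric scan over the closed form of ${\mathcal{C}[r]}$ above confirms the answer; the positive factor ${(1+\cos 2\theta)^{2p-1}}$ is dropped, since only the sign of the first factor matters:

```python
import math

def min_C_factor(p, samples=10000):
    # minimum over theta of 1 + 4p + 4p^2 + (1 - 4p^2) cos 2(theta);
    # sampling the cosine argument over a full period
    return min(1 + 4*p + 4*p**2 + (1 - 4*p**2)*math.cos(2*math.pi*k/samples)
               for k in range(samples))

for p in (-1.0, -0.5, -0.25):
    print(p, min_C_factor(p) >= -1e-9)
# only p = -0.5 passes: (-1.0, False), (-0.5, True), (-0.25, False)
```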

This exercise is relevant to the problem from the previous post, finding the “right” interpolation method in polar coordinates. Given a set of ${(r,\theta)}$ values, I interpolated ${(r^{1/p},\theta)}$ with a trigonometric polynomial, and then raised that polynomial to power ${p}$.

If the given points had Cartesian coordinates ${(\pm a,0), (0,\pm b)}$ then this interpolation yields ${r=(\alpha+\beta \cos \theta)^p}$ where ${\alpha,\beta}$ depend on ${a,b}$ and satisfy ${\alpha>|\beta|}$. Using the exercise above, one can deduce that ${p=-1/2}$ is the only nonzero power for which the interpolated curve is convex for any given points of the form ${(\pm a,0), (0,\pm b)}$.

In general, curves of the form ${r=P(\theta)^{-1/2}}$, with ${P}$ a trigonometric polynomial, need not be convex. But even then they look more natural than their counterparts with powers ${p\ne -1/2}$. Perhaps this is not surprising: the equation ${r^2P(\theta)=1}$ has a decent chance of being algebraic when the degree of ${P}$ is low.

Random example at the end: ${r=(3-\sin \theta +2\cos \theta+\cos 2\theta-2\sin 2\theta)^{-1/2}}$

Celebration of power -1/2