Archive for the ‘Uncategorized’ Category

Long versus Short Proofs

October 17, 2013

Proofs are very similar to computer programs. And just like computer programs, there are often many different ways of writing a proof.  Sometimes the difference between proofs is great; one might be based on geometry while another on analysis. The purpose of this short note is to emphasise that different proofs are possible at the “simple” level too.  While this may appear mundane, it is actually an important part of learning mathematics: once you have proved a result, spend a few minutes trying to simplify the proof. It will make the next proof you do easier. The two key messages are as follows.

  • There is no substitute for experience when it comes to deriving proofs.
  • Practising on simple examples to find the “best” proof is training for being able to find any proof at all of a harder result.

For lack of a better on-the-spot example, consider the following problem. Let f\colon\mathbb{C} \rightarrow \mathbb{C} be an analytic function having a double zero at the origin and whose first derivative has at most linear growth: | f'(x) | \leq \alpha |x| for some \alpha \in \mathbb{R}.  What is the most general form of f?

First Approach

A double zero at the origin means f(x) = x^2 h(x) for some analytic function h. Therefore f'(x) = 2xh(x) + x^2h'(x) and \frac{f'(x)}{x} = 2h(x) + xh'(x). Since 2h(x) + xh'(x) is analytic and, by the growth condition, bounded in modulus by \alpha, it must be constant, by Liouville’s theorem; call the constant c. (Here, all analytic functions are entire because they are defined on the whole of the complex plane.) Solving the differential equation 2h(x) + xh'(x) = c by substituting in the power series expansion h(x) = \sum a_i x^i shows that h(x) = \frac{c}{2}. [The general solution of 2h(x) + xh'(x) = 0 is h(x) = \beta x^{-2}, where \beta is an arbitrary scalar; the only such solution that is analytic is h(x)=0.] The conclusion is that the most general form of f is f(x) = ax^2 for some complex scalar a.
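To spell out the power series step: substituting h(x) = \sum_{i \geq 0} a_i x^i into 2h(x) + xh'(x) = c gives \sum_{i \geq 0} (2+i) a_i x^i = c, and comparing coefficients forces a_0 = \frac{c}{2} and a_i = 0 for all i \geq 1; that is, h(x) = \frac{c}{2}.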

Second Approach

The first approach is unattractive and unnecessarily complicated. Instead of starting with the double zero, start with the linear bound on the first derivative. (A standard generalisation of Liouville’s theorem is that an entire function with a linear bound must itself be linear. We will pretend here we do not know this fact.) If f(x) has a double zero at the origin then f'(x) has a zero at the origin, therefore f'(x) = x g(x) for some analytic g(x). The linear bound gives |g(x)| \leq \alpha, so by Liouville’s theorem g(x) is constant; that is, f'(x) = 2 a x for some scalar a. Therefore f(x) must equal ax^2 if it is to satisfy both f'(x) = 2 a x and f(0)=0.
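Explicitly, integrating along the straight line from 0 to x gives f(x) = f(0) + \int_0^x 2at\,dt = ax^2, using f(0) = 0.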

What was the Difference?

The first approach expressed f as f(x) = x^2 h(x) while the second approach expressed f' as f'(x) = x g(x). Both approaches resulted in a differential equation, but the second approach resulted in the simpler differential equation f'(x) = 2ax. Underlying this example is the fact that a “change of coordinates” can simplify a differential equation. Although both approaches could be made to work in this simple example, there are situations where some approaches are too difficult to follow through to completion.
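For readers who like to check such computations mechanically, here is a small sanity check of the two differential equations. This sketch is mine, not the post’s, and assumes SymPy is available:

    # Solve the two ODEs from the two approaches symbolically (SymPy assumed).
    import sympy as sp

    x, c, a = sp.symbols('x c a')
    h = sp.Function('h')
    f = sp.Function('f')

    # First approach: 2 h(x) + x h'(x) = c.  The general solution is
    # h(x) = c/2 + C1/x**2; analyticity at the origin forces C1 = 0.
    print(sp.dsolve(sp.Eq(2*h(x) + x*h(x).diff(x), c), h(x)))

    # Second approach: f'(x) = 2 a x with f(0) = 0 gives f(x) = a x**2 directly.
    print(sp.dsolve(sp.Eq(f(x).diff(x), 2*a*x), f(x), ics={f(0): 0}))

The first route still needs the analyticity argument to discard the C1/x^2 term; the second yields f(x) = ax^2 immediately, which is the whole point.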

One could argue that because the linear bound constraint is “harder” than the double-zero constraint, one should start with the linear bound constraint and not the double-zero constraint, and therefore be led to the simpler differential equation. Yet the real messages are as stated at the start:

  • There is no substitute for experience when it comes to deriving proofs.
  • Practising on simple examples to find the “best” proof is training for being able to find any proof at all of a harder result.

Advances in Mathematics

June 25, 2013

Advances in mathematics occur in one of two ways.
The first occurs by the solution of some outstanding problem, such as the Bieberbach conjecture or Fermat’s conjecture. Such solutions are justly acclaimed by the mathematical community. The solution of every famous mathematical problem is the result of joint effort of a great many mathematicians. It always comes as an unexpected application of theories that were previously developed without a specific purpose, theories whose effectiveness was at first thought to be highly questionable.
Mathematicians realized long ago that it is hopeless to get the lay public to understand the miracle of unexpected effectiveness of theory. The public, misled by two hundred years of Romantic fantasies, clamors for some “genius” whose brain power cracks open the secrets of nature. It is therefore a common public relations gimmick to give the entire credit for the solution of famous problems to the one mathematician who is responsible for the last step.
It would probably be counterproductive to let it be known that behind every “genius” there lurks a beehive of research mathematicians who gradually built up to the “final” step in seemingly pointless research papers. And it would be fatal to let it be known that the showcase problems of mathematics are of little or no interest for the progress of mathematics. We all know that they are dead ends, curiosities, good only as confirmation of the effectiveness of theory. What mathematicians privately celebrate when one of their showcase problems is solved is Polya’s adage: “no problem is ever solved directly.”
There is a second way by which mathematics advances, one that mathematicians are also reluctant to publicize. It happens whenever some commonsense notion that had heretofore been taken for granted is discovered to be wanting, to need clarification or definition. Such foundational advances produce substantial dividends, but not right away. The usual accusation that is leveled against mathematicians who dare propose overhauls of the obvious is that of being “too abstract.” As if one piece of mathematics could be “more abstract” than another, except in the eyes of the beholder (it is time to raise a cry of alarm against the misuse of the word “abstract,” which has become as meaningless as the word “Platonism.”)
An amusing case history of an advance of the second kind is uniform convergence, which first made headway in the latter quarter of the nineteenth century. The late Herbert Busemann told me that while he was a student, his analysis teachers admitted their inability to visualize uniform convergence, and viewed it as the outermost limit of abstraction. It took a few more generations to get uniform convergence taught in undergraduate classes.
The hostility against groups, when groups were first “abstracted” from the earlier “group of permutations,” is another case in point. Hadamard admitted to being unable to visualize groups except as groups of permutations. In the thirties, when groups made their first inroad into physics via quantum mechanics, a staunch sect of reactionary physicists repeatedly cried “Victory!” after convincing themselves of having finally rid physics of the “Gruppenpest.” Later, they tried to have this episode erased from the history of physics.
In our time, we have witnessed at least two displays of hostility against new mathematical ideas. The first was directed against lattice theory, and its virulence all but succeeded in wiping lattice theory off the mathematical map. The second, still going on, is directed against the theory of categories. Grothendieck did much to show the simplifying power of categories in mathematics. Categories have broadened our view all the way to the solution of the Weil conjectures. Today, after the advent of braided categories and quantum groups, categories are beginning to look downright concrete, and the last remaining anticategorical reactionaries are beginning to look downright pathetic.
There is a common pattern to advances in mathematics of the second kind. They inevitably begin when someone points out that items that were formerly thought to be “the same” are not really “the same,” while the opposition claims that “it does not matter,” or “these are piddling distinctions.” Take the notion of species that is the subject of this book. The distinction between “labeled graphs” and “unlabeled graphs” has long been familiar. Everyone agrees on the definition of an unlabeled graph, but until a while ago the notion of labeled graph was taken as obvious and not in need of clarification. If you objected that a graph whose vertices are labeled by cyclic permutations – nowadays called a “fat graph” – is not the same thing as a graph whose vertices are labeled by integers, you were given a strange look and you would not be invited to the next combinatorics meeting.

Excerpt from the Foreword by Gian-Carlo Rota (1997) to the book “Combinatorial Species and Tree-like Structures” by F. Bergeron et al.


Background Information for Continuous-time Filtering and Estimation on Manifolds

February 5, 2013

The preprint A Primer on Stochastic Differential Geometry in Signal Processing discusses, among other things, the following in simple but rigorous terms:

  • How Brownian motion can be generated on Riemannian manifolds;
  • How “coloured” (technically, left-invariant) Brownian motion can be generated on Lie groups (a simulation sketch follows this list);
  • Ito and Stratonovich integrals, and the transfer principle, which makes Stratonovich integrals convenient to use for stochastic differential equations on manifolds;
  • The special orthogonal groups SO(n);
  • How a “Gaussian random variable” can be generated on a Riemannian manifold;
  • How state-space models extend to manifolds;
  • How stochastic development provides a convenient framework for understanding stochastic processes on manifolds;
  • Whether or not stochastic integrals are “pathwise” computable.
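To illustrate the second bullet point, the following is a minimal simulation sketch, not taken from the preprint, of left-invariant Brownian motion on the Lie group SO(n): each step multiplies the current rotation on the right by the matrix exponential of a random antisymmetric matrix (an element of the Lie algebra \mathfrak{so}(n)) scaled by \sqrt{dt}. The function name and step rule are illustrative choices, not the paper’s notation.

    # Sketch: left-invariant Brownian motion on SO(n) as a geodesic random walk.
    # Assumes NumPy and SciPy; the isotropic case is shown -- a non-identity
    # covariance on the Lie algebra would give "coloured" Brownian motion.
    import numpy as np
    from scipy.linalg import expm

    def brownian_motion_on_SO_n(n=3, steps=1000, dt=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        X = np.eye(n)                        # start at the identity rotation
        path = [X]
        for _ in range(steps):
            A = rng.standard_normal((n, n))
            xi = (A - A.T) / np.sqrt(2.0)    # random antisymmetric matrix
            X = X @ expm(np.sqrt(dt) * xi)   # left-invariant increment
            path.append(X)
        return path

    # Each sample stays on the manifold: orthogonal with determinant one.
    X = brownian_motion_on_SO_n()[-1]
    assert np.allclose(X.T @ X, np.eye(3), atol=1e-8)
    assert np.isclose(np.linalg.det(X), 1.0)

Because each increment is the exponential of a Lie algebra element, every step stays exactly on SO(n) (up to floating-point error), which is the main attraction of this construction over naive extrinsic simulation.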

The last section of the paper includes the following:

Several concepts normally taken for granted, such as unbiasedness of an estimator, are not geometric concepts and hence raise the question of their correct generalisations to manifolds. The answer is that the difficulty lies not with manifolds, but with the absence of meaning in asking for an estimate of a parameter. The author believes firmly that asking for an estimate of a parameter is, a priori, a meaningless question. It has been given meaning by force of habit. An estimate only becomes useful once it is used to make a decision, serving as a proxy for the unknown true parameter value. Decisions include: the action taken by a pilot in response to estimates from the flight computer; an automated control action in response to feedback; and what someone decides they hear over a mobile phone (with the pertinent question being whether the estimate produced by the phone of the transmitted message is intelligible). Without knowing the decision to be made, whether an estimator is good or bad is unanswerable. One could hope for an estimator that works well for a large class of decisions, and the author sees this as the context of estimation theory.