Archive for June, 2013

Measure-theoretic Formulation of the Likelihood Function

June 26, 2013

Let P_\theta be a family of probability measures indexed by \theta \in \Theta. For notational convenience, assume 0 \in \Theta, so that P_0 is one of the probability measures in the family. This short note sketches why L(\theta) = E_0\left[ \frac{dP_\theta}{dP_0} \mid \mathcal X \right] is the likelihood function, where the \sigma-algebra \mathcal X describes the possible observations and E_0 denotes expectation with respect to the measure P_0.

First, consider the special case where the probability measures can be described by probability density functions (pdfs) p(x,y;\theta). Here, x is a real-valued random variable that we have observed, y is a real-valued unobserved random variable, and \theta indexes the family of joint pdfs. The likelihood function when there is a “hidden variable” y is usually defined as \theta \mapsto p(x;\theta) where p(x;\theta) is the marginalised pdf obtained by integrating out the unobserved variable y, that is, p(x;\theta) = \int_{-\infty}^{\infty} p(x,y;\theta)\,dy. Does this likelihood function equal L(\theta) when \mathcal X is the \sigma-algebra generated by the random variable x?

The correspondence between the measure and the pdf is: P_\theta(A) = \int_A p(x,y;\theta)\,dx\,dy for any (measurable) set A \subset \mathbb{R}^2; this is the probability that (x,y) lies in A. In this case, the Radon-Nikodym derivative \frac{dP_\theta}{dP_0} is simply the ratio \frac{p(x,y;\theta)}{p(x,y;0)}. Conditioning on x under the distribution p(x,y;0) means integrating against the conditional density p(y \mid x; 0) = p(x,y;0)/p(x;0), hence E_0\left[ \frac{p(x,y;\theta)}{p(x,y;0)} \mid x \right] = \int_{-\infty}^{\infty} \frac{p(x,y;\theta)}{p(x,y;0)}\,\frac{p(x,y;0)}{p(x;0)}\,dy = \frac{p(x;\theta)}{p(x;0)}. Since the denominator p(x;0) does not depend on \theta, this verifies in this special case that L(\theta) is indeed the likelihood function, up to a multiplicative constant which is irrelevant when comparing values of \theta.
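As a sanity check, the following discrete analogue of this calculation (with hypothetical joint distributions chosen purely for illustration) computes the conditional expectation of the density ratio given x and confirms that it agrees with p(x;\theta)/p(x;0):

# Discrete sanity check that E_0[ p(x,y;theta)/p(x,y;0) | x ] = p(x;theta)/p(x;0).
# The joint pmfs below are hypothetical and serve only as an illustration.

def marginal_x(joint):
    # Marginal probability of each x value, obtained by summing the joint over y.
    px = {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
    return px

p0 = {(0, 0): 0.10, (0, 1): 0.30, (1, 0): 0.20, (1, 1): 0.40}   # p(x,y;0)
pt = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.35, (1, 1): 0.15}   # p(x,y;theta)

p0_x, pt_x = marginal_x(p0), marginal_x(pt)

for x_obs in (0, 1):
    # E_0[ ratio | x = x_obs ]: sum over y of ratio(x_obs, y) * p(y | x_obs; 0).
    cond_exp = sum((pt[(x_obs, y)] / p0[(x_obs, y)]) * (p0[(x_obs, y)] / p0_x[x_obs])
                   for y in (0, 1))
    print(x_obs, cond_exp, pt_x[x_obs] / p0_x[x_obs])   # the last two numbers agree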

The above verification does not make L(\theta) = E_0\left[ \frac{dP_\theta}{dP_0} \mid \mathcal X \right] any less mysterious. Instead, it can be understood directly as follows. From the definition of conditional expectation, it is straightforward to verify that L(\theta) = \left. \frac{dP_\theta}{dP_0}\right|_{\mathcal X} meaning that for any \mathcal X-measurable set A, P_\theta(A) = \int_A \left. \frac{dP_\theta}{dP_0}\right|_{\mathcal X}\,dP_0. The likelihood function is basically asking for the “probability” that we observed what we did, or precisely, we want to take the set A to be our actual observation and see how P_\theta(A) varies with \theta. This would work if P_\theta(A) > 0 but otherwise it is necessary to look at how P_\theta(A) varies when A is an arbitrarily small but non-negligible set centred on the true observation. (If you like, it is impossible to make a perfect observation correct to infinitely many significant figures; instead, an observation of x usually means we know, for example, that 1.0 \leq x \leq 1.1, hence A can be chosen to be the event that 1.0 \leq x \leq 1.1 instead of the negligible event x = 1.05.) It follows from the integral representation P_\theta(A) = \int_A \left. \frac{dP_\theta}{dP_0}\right|_{\mathcal X}\,dP_0 that \left. \frac{dP_\theta}{dP_0}\right|_{\mathcal X} describes the behaviour of P_\theta(A) as A shrinks down from a range of outcomes to a single outcome. Importantly, the subscript \mathcal X means L(\theta) = \left. \frac{dP_\theta}{dP_0}\right|_{\mathcal X} is \mathcal X-measurable, therefore, L(\theta) depends only on what is observed and not on any other hidden variables.
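For the record, the verification alluded to above takes one line. For any \mathcal X-measurable set A,

P_\theta(A) = \int_A \frac{dP_\theta}{dP_0}\,dP_0 = \int_A E_0\left[ \frac{dP_\theta}{dP_0} \mid \mathcal X \right]\,dP_0,

where the first equality is the definition of the Radon-Nikodym derivative and the second is the defining property of conditional expectation (the indicator function of A is \mathcal X-measurable). Since E_0\left[ \frac{dP_\theta}{dP_0} \mid \mathcal X \right] is \mathcal X-measurable and reproduces P_\theta(A) for every A \in \mathcal X, it is (a version of) \left. \frac{dP_\theta}{dP_0}\right|_{\mathcal X}.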

While the above is not a careful exposition, it will hopefully point the interested reader in a sensible direction.

Advances in Mathematics

June 25, 2013

Advances in mathematics occur in one of two ways.
The first occurs by the solution of some outstanding problem, such as the Bieberbach conjecture or Fermat’s conjecture. Such solutions are justly acclaimed by the mathematical community. The solution of every famous mathematical problem is the result of joint effort of a great many mathematicians. It always comes as an unexpected application of theories that were previously developed without a specific purpose, theories whose effectiveness was at first thought to be highly questionable.
Mathematicians realized long ago that it is hopeless to get the lay public to understand the miracle of unexpected effectiveness of theory. The public, misled by two hundred years of Romantic fantasies, clamors for some “genius” whose brain power cracks open the secrets of nature. It is therefore a common public relations gimmick to give the entire credit for the solution of famous problems to the one mathematician who is responsible for the last step.
It would probably be counterproductive to let it be known that behind every “genius” there lurks a beehive of research mathematicians who gradually built up to the “final” step in seemingly pointless research papers. And it would be fatal to let it be known that the showcase problems of mathematics are of little or no interest for the progress of mathematics. We all know that they are dead ends, curiosities, good only as confirmation of the effectiveness of theory. What mathematicians privately celebrate when one of their showcase problems is solved is Polya’s adage: “no problem is ever solved directly.”
There is a second way by which mathematics advances, one that mathematicians are also reluctant to publicize. It happens whenever some commonsense notion that had heretofore been taken for granted is discovered to be wanting, to need clarification or definition. Such foundational advances produce substantial dividends, but not right away. The usual accusation that is leveled against mathematicians who dare propose overhauls of the obvious is that of being “too abstract.” As if one piece of mathematics could be “more abstract” than another, except in the eyes of the beholder (it is time to raise a cry of alarm against the misuse of the word “abstract,” which has become as meaningless as the word “Platonism.”)
An amusing case history of an advance of the second kind is uniform convergence, which first made headway in the latter quarter of the nineteenth century. The late Herbert Busemann told me that while he was a student, his analysis teachers admitted their inability to visualize uniform convergence, and viewed it as the outermost limit of abstraction. It took a few more generations to get uniform convergence taught in undergraduate classes.
The hostility against groups, when groups were first “abstracted” from the earlier “group of permutations” is another case in point. Hadamard admitted to being unable to visualize groups except as groups of permutations. In the thirties, when groups made their first inroad into physics via quantum mechanics, a staunch sect of reactionary physicists repeatedly cried “Victory!” after convincing themselves of having finally rid physics of the “Gruppenpest.” Later, they tried to have this episode erased from the history of physics.
In our time, we have witnessed at least two displays of hostility against new mathematical ideas. The first was directed against lattice theory, and its virulence all but succeeded in wiping lattice theory off the mathematical map. The second, still going on, is directed against the theory of categories. Grothendieck did much to show the simplifying power of categories in mathematics. Categories have broadened our view all the way to the solution of the Weil conjectures. Today, after the advent of braided categories and quantum groups, categories are beginning to look downright concrete, and the last remaining anticategorical reactionaries are beginning to look downright pathetic.
There is a common pattern to advances in mathematics of the second kind. They inevitably begin when someone points out that items that were formerly thought to be “the same” are not really “the same,” while the opposition claims that “it does not matter,” or “these are piddling distinctions.” Take the notion of species that is the subject of this book. The distinction between “labeled graphs” and “unlabeled graphs” has long been familiar. Everyone agrees on the definition of an unlabeled graph, but until a while ago the notion of labeled graph was taken as obvious and not in need of clarification. If you objected that a graph whose vertices are labeled by cyclic permutations – nowadays called a “fat graph” – is not the same thing as a graph whose vertices are labeled by integers, you were given a strange look and you would not be invited to the next combinatorics meeting.

Excerpt from the Foreword by Gian-Carlo Rota (1997) to the book “Combinatorial Species and Tree-like Structures” by F. Bergeron et al.


“Most Likely” is an All or Nothing Proposition

June 8, 2013

The principle of maximum likelihood estimation is generally not explained well; readers are made to believe that it should be obvious to them that choosing the “most likely outcome” is the most sensible thing to do.  It isn’t obvious, and it need not be the most sensible thing to do.

First, recall the statement I made in an earlier paper:

The author believes firmly that asking for an estimate of a parameter is, a priori, a meaningless question. It has been given meaning by force of habit. An estimate only becomes useful once it is used to make a decision, serving as a proxy for the unknown true parameter value. Decisions include: the action taken by a pilot in response to estimates from the flight computer; an automated control action in response to feedback; and, what someone decides they hear over a mobile phone (with the pertinent question being whether the estimate produced by the phone of the transmitted message is intelligible). Without knowing the decision to be made, whether an estimator is good or bad is unanswerable. One could hope for an estimator that works well for a large class of decisions, and the author sees this as the context of estimation theory.

Consider the following problem.  Assume two coins are tossed, but somehow the outcome of the first coin influences the outcome of the second coin.  Specifically, the possible outcomes (H = heads, T = tails) and their probabilities are: HH 0.35; HT 0.05; TH 0.3; TT 0.3.  Given these probabilities, what is our best guess as to the outcome?  We have been conditioned to respond by saying that the most likely outcome is the one with the highest probability, namely, HH.  What is our best guess as to the outcome of the first coin only?  Well, there is 0.35 + 0.05 = 0.4 chance it will be H and 0.3 + 0.3 = 0.6 chance it will be T, so the most likely outcome is T.  How can it be that the most likely outcome of the first coin is T but the most likely outcome of both coins is HH?

The (only) way to understand this sensibly is to think in terms of how the estimate will be used.  What “most likely” really means is that it is the best strategy to use when placing an all-or-nothing bet.  If I must bet on the outcome of the two coins, and I win $1 if I guess correctly and win nothing otherwise, my best strategy is to bet on HH.  If I must bet on the outcome of the first coin, the best strategy is to bet on T.  This is not a contradiction because betting on the first coin being T is the same as betting on the two coins being either TH or TT.  I can now win in two cases, not just one; it is a different gamble.
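To make the arithmetic concrete, here is a short sketch (purely illustrative) that enumerates the four outcomes, finds the most likely joint outcome and the most likely outcome of the first coin on its own, and notes the payoff interpretation of each bet:

# Probabilities of the four joint outcomes from the example above.
probs = {'HH': 0.35, 'HT': 0.05, 'TH': 0.30, 'TT': 0.30}

# Most likely joint outcome: the best all-or-nothing bet on both coins.
best_joint = max(probs, key=probs.get)

# Marginal probabilities of the first coin, and its most likely outcome.
first = {'H': probs['HH'] + probs['HT'], 'T': probs['TH'] + probs['TT']}
best_first = max(first, key=first.get)

print(best_joint, probs[best_joint])   # HH 0.35: the $1 joint bet is won 35% of the time
print(best_first, first[best_first])   # T 0.6: the $1 first-coin bet is won 60% of the time

# Betting on the first coin being T is the same as betting on the joint outcome
# being TH or TT; the two bets are different gambles, so there is no contradiction.
print(probs['TH'] + probs['TT'])       # 0.6 again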

The above is not an idle example.  In communications, the receiver must estimate what symbols were sent.  A typical mathematical formulation of the problem is estimating the state of a hidden Markov chain.  One can choose to estimate the most likely sequence of states or the most likely state at a particular time instant.  The above example explains the difference and helps determine which is the more appropriate estimate to use.
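As a small illustration of this difference in the hidden Markov chain setting, the sketch below uses a made-up two-state chain with made-up transition and emission probabilities, and brute-forces all state sequences to find both the most likely sequence of states and the most likely state at each individual time.  In practice these would be computed with the Viterbi and forward-backward algorithms respectively; brute force merely keeps the example transparent.

from itertools import product

# A small, made-up hidden Markov chain: two hidden states, binary observations.
states = (0, 1)
init = {0: 0.6, 1: 0.4}                                    # initial state distribution
trans = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.4, 1: 0.6}}         # transition probabilities
emit = {0: {'a': 0.9, 'b': 0.1}, 1: {'a': 0.2, 'b': 0.8}}  # emission probabilities

obs = ['a', 'b', 'a']   # an (invented) observed sequence

def joint_prob(seq, obs):
    # Probability of a hidden state sequence together with the observations.
    p = init[seq[0]] * emit[seq[0]][obs[0]]
    for t in range(1, len(obs)):
        p *= trans[seq[t - 1]][seq[t]] * emit[seq[t]][obs[t]]
    return p

all_seqs = list(product(states, repeat=len(obs)))

# (a) Most likely state sequence given the observations (what Viterbi would return).
best_seq = max(all_seqs, key=lambda s: joint_prob(s, obs))

# (b) Most likely state at each time individually (the forward-backward answer),
#     obtained by summing the joint probability over all sequences passing through
#     each state at that time.
marginal_best = tuple(
    max(states, key=lambda s: sum(joint_prob(seq, obs) for seq in all_seqs if seq[t] == s))
    for t in range(len(obs)))

print(best_seq)        # jointly most likely sequence
print(marginal_best)   # sequence of individually most likely states (need not coincide)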

Finally, it is noted that an all-or-nothing bet is not necessarily the most appropriate way of measuring the performance of an estimator.  For instance, partial credit might be given for being close to the answer, so if I guess two coins correctly I win $2, if I guess one coin correctly I win $1, otherwise I win nothing.  This can be interpreted as “regularising” the maximum likelihood estimate.  Nevertheless, at the end of the day, the only way to understand an estimator is in the broader context of the types of decisions that can be made well by using that estimator.
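Under the partial-credit payoff just described, the best guess is the one that maximises the expected winnings rather than the probability of being entirely correct.  A minimal sketch, reusing the coin probabilities from the earlier example:

# Expected winnings of each guess under the partial-credit payoff:
# $1 per coin guessed correctly, using the probabilities from the example above.
probs = {'HH': 0.35, 'HT': 0.05, 'TH': 0.30, 'TT': 0.30}

def expected_winnings(guess):
    # Expected number of dollars won, i.e. the expected number of correct coins.
    return sum(p * sum(g == o for g, o in zip(guess, outcome))
               for outcome, p in probs.items())

for guess in sorted(probs):
    print(guess, expected_winnings(guess))

# The expected winnings decompose coin by coin, so the best guess under this payoff
# is obtained by picking each coin's marginally most likely outcome separately (TH,
# with expected winnings $1.25), which differs from the all-or-nothing choice HH ($1.05).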