Let $\{P_\theta : \theta \in \Theta\}$ be a family of probability measures indexed by $\theta$. For notational convenience, assume $P = P_{\theta_0}$ for some $\theta_0 \in \Theta$, so that $P$ is one of the probability measures in the family. This short note sketches why $E_P\!\left[\frac{dP_\theta}{dP} \,\middle|\, \mathcal{A}\right]$ is the likelihood function, where the $\sigma$-algebra $\mathcal{A}$ describes the possible observations and $E_P$ denotes expectation with respect to the measure $P$.
First, consider the special case where each probability measure $P_\theta$ can be described by a probability density function (pdf) $p_\theta(x,z)$. Here, $x$ is a real-valued random variable that we have observed, $z$ is a real-valued unobserved random variable, and $\theta$ indexes the family of joint pdfs. The likelihood function when there is a “hidden variable” is usually defined as $L(\theta) = p_\theta(x)$, where $p_\theta(x)$ is the marginalised pdf obtained by integrating out the unknown variable $z$, that is, $p_\theta(x) = \int p_\theta(x,z)\,dz$. Does this likelihood function equal $E_P\!\left[\frac{dP_\theta}{dP} \,\middle|\, \mathcal{A}\right]$ when $\mathcal{A}$ is the $\sigma$-algebra generated by the random variable $x$?
The correspondence between the measure $P_\theta$ and the pdf $p_\theta$ is $P_\theta(B) = \int_B p_\theta(x,z)\,dx\,dz$ for any (measurable) set $B$; this is the probability that the pair $(x,z)$ lies in $B$. In this case, the Radon-Nikodym derivative $\frac{dP_\theta}{dP}$ is simply the ratio $\frac{p_\theta(x,z)}{p(x,z)}$, where $p = p_{\theta_0}$ is the pdf corresponding to $P$. The conditional expectation with respect to $\mathcal{A} = \sigma(x)$ under the distribution $P$ is
$$E_P\!\left[\frac{p_\theta(x,z)}{p(x,z)} \,\middle|\, x\right] = \int \frac{p_\theta(x,z)}{p(x,z)}\, p(z \mid x)\,dz = \int \frac{p_\theta(x,z)}{p(x,z)} \cdot \frac{p(x,z)}{p(x)}\,dz = \frac{p_\theta(x)}{p(x)},$$
verifying in this special case that $E_P\!\left[\frac{dP_\theta}{dP} \,\middle|\, \mathcal{A}\right]$ is indeed the likelihood function, up to the factor $\frac{1}{p(x)}$, which does not depend on $\theta$ and hence is harmless.
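As a sanity check (not from the note), the identity above can be tested by Monte Carlo for an assumed toy Gaussian model; the model, the parameter value `theta`, and the names `x0`, `eps` are all illustrative choices of mine. The conditioning on $x$ is approximated by keeping samples whose $x$ lands in a small window around the observation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (my own choice, not from the note):
#   under P:        z ~ N(0, 1),      x = z + noise,  noise ~ N(0, 1)
#   under P_theta:  z ~ N(theta, 1),  x = z + noise
# Only the law of the hidden variable z changes, so the Radon-Nikodym
# derivative dP_theta/dP at (x, z) reduces to the ratio of z-densities.
theta = 1.0
x0, eps = 1.0, 0.05          # observed value and a small window around it

n = 400_000
z = rng.normal(0.0, 1.0, n)
x = z + rng.normal(0.0, 1.0, n)

ratio = np.exp(theta * z - theta**2 / 2)   # dP_theta/dP evaluated at (x, z)

# Monte Carlo estimate of E_P[dP_theta/dP | x] near x0: average the ratio
# over samples whose x lands in the small window around the observation.
mc = ratio[np.abs(x - x0) < eps].mean()

# Closed-form likelihood ratio p_theta(x0)/p(x0): the marginals of x are
# N(theta, 2) and N(0, 2) respectively.
exact = np.exp(-(x0 - theta)**2 / 4 + x0**2 / 4)

print(mc, exact)   # both approximately exp(1/4) ~ 1.284
```

The estimate agrees with the ratio of marginal densities, in line with the calculation above.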
The above verification does not make $E_P\!\left[\frac{dP_\theta}{dP} \,\middle|\, \mathcal{A}\right]$ any less mysterious. Instead, it can be understood directly as follows. From the definition of conditional expectation, it is straightforward to verify that for any $\mathcal{A}$-measurable set $A$, $P_\theta(A) = \int_A E_P\!\left[\frac{dP_\theta}{dP} \,\middle|\, \mathcal{A}\right] dP$. The likelihood function is basically asking for the “probability” that we observed what we did, or precisely, we want to take the set $A$ to be our actual observation and see how $P_\theta(A)$ varies with $\theta$. This would work if $P(A) > 0$ but otherwise it is necessary to look at how $P_\theta(A)$ varies when $A$ is an arbitrarily small but non-negligible set centred on the true observation. (If you like, it is impossible to make a perfect observation correct to infinitely many significant figures; instead, an observation of $x = x_0$ usually means we know, for example, that $|x - x_0| < \epsilon$ for some small $\epsilon > 0$, hence $A$ can be chosen to be the non-negligible event $\{|x - x_0| < \epsilon\}$ instead of the negligible event $\{x = x_0\}$.) It follows from the integral representation above that $E_P\!\left[\frac{dP_\theta}{dP} \,\middle|\, \mathcal{A}\right]$ describes the behaviour of $P_\theta(A)$ as $A$ shrinks down from a range of outcomes to a single outcome. Importantly, the conditioning on $\mathcal{A}$ means $E_P\!\left[\frac{dP_\theta}{dP} \,\middle|\, \mathcal{A}\right]$ is $\mathcal{A}$-measurable; therefore, it depends only on what is observed and not on any other hidden variables.
While the above is not a careful exposition, it will hopefully point the interested reader in a sensible direction.
This is a sequel to the introductory article on measure-theoretic probability and accords with my belief that learning should not be one-pass, by which I mean loosely that it is more efficient to learn the basics first at a rough level and then come back to fill in the details soon afterwards. It endeavours to address the questions:
- Why a probability triple $(\Omega, \mathcal{F}, P)$ at all?
- What if $\mathcal{F}$ is not a $\sigma$-algebra?
- Why is it important that $P$ is countably additive?
In addressing these questions, it also addresses the question:
- Why can’t a uniform probability be defined on the natural numbers $\mathbb{N}$?
Consider a real-life process, such as the population $X_n$ of a family of rabbits at each generation $n = 1, 2, 3, \dots$. This gives us a countable family of random variables $X_1, X_2, X_3, \dots$ (Recall that countable means countably infinite; with only a finite number of random variables, matters would be simpler.) We can safely assume that if $X_n = 0$ for some $n$ then the population has died out, that is, $X_{n+1} = X_{n+2} = \cdots = 0$.
What is the probability that the population dies out?
The key questions here are the implicit questions of how to actually define and then subsequently calculate this probability of extinction. Intuitively, we want the probability that there exists an $n$ such that $X_n = 0$. When trying to formulate this mathematically, we may think to split this up into bits such as “does $X_1 = 0$?”, “does $X_2 = 0$?” and so forth. Because these events are not disjoint (if we know $X_1 = 0$ then we are guaranteed that $X_2 = 0$) we realise that we need some way to account for this “connection” between the random variables. Is there any better way of accounting for this “connection” other than by declaring the “full” outcome to be some $\omega$ in a sample space $\Omega$ and interpreting each $X_n$ as a function $X_n(\omega)$ of $\omega$? (Only by endeavouring to think of an alternative will the full merit of having an $\Omega$ become clear.)
There are (at least) two paths we could take to define the probability of the population dying out. The first was hinted at already; segment $\Omega$ into disjoint sets then add up the probabilities of each of the relevant sets. Precisely, the sets $\{X_1 = 0\}$, $\{X_1 \neq 0, X_2 = 0\}$, $\{X_1 \neq 0, X_2 \neq 0, X_3 = 0\}$ and so forth are disjoint, and we are tempted to sum the probabilities of each one occurring to arrive at the probability of extinction. This is an infinite summation though, so unless we believe that probability is countably additive (recall that this means $P\left(\bigcup_{i=1}^\infty A_i\right) = \sum_{i=1}^\infty P(A_i)$ for disjoint sets $A_1, A_2, \dots$) then this avenue is not available.
Another path is to recognise that the sets $A_n = \{X_n = 0\}$ are like Russian dolls, one inside the other, namely $A_1 \subset A_2 \subset A_3 \subset \cdots$. This means that their probabilities $P(A_n)$ form a non-decreasing sequence, and moreover, we are tempted to believe that $\lim_{n \to \infty} P(A_n)$ should equal the probability of extinction. (The limit exists because the $P(A_n)$ form a bounded and monotonic sequence.)
In fact, these paths are equivalent; if $P$ is countably additive and the $A_n$ are nested as above then $\lim_{n \to \infty} P(A_n) = P\left(\bigcup_{n=1}^\infty A_n\right)$, and the converse is true too; if for any sequence of nested sets $A_1 \subset A_2 \subset \cdots$ the probability and the limit operations can be interchanged (which is how the statement $\lim_{n \to \infty} P(A_n) = P\left(\lim_{n \to \infty} A_n\right)$ should be interpreted) then $P$ is countably additive.
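The two paths can be compared numerically on an assumed toy model of the rabbit population: a Galton-Watson branching process with Poisson offspring. The model and all parameter values here are illustrative choices of mine, not from the note.

```python
import numpy as np

rng = np.random.default_rng(1)

# Galton-Watson branching process: each individual independently leaves a
# Poisson(1.5) number of offspring.  All parameter values are illustrative.
runs, gens, mean_offspring = 20_000, 30, 1.5
pop = np.ones(runs, dtype=np.int64)   # every run starts with one individual
extinct_at = np.full(runs, -1)        # first generation n with X_n = 0 (-1: alive)

for n in range(1, gens + 1):
    # The sum of pop iid Poisson(1.5) offspring counts is Poisson(1.5 * pop).
    pop = rng.poisson(mean_offspring * pop)
    extinct_at[(pop == 0) & (extinct_at == -1)] = n

# Path 1: sum the probabilities of the disjoint events
# {population first hits zero at generation i}, i = 1, ..., gens.
path1 = sum((extinct_at == i).mean() for i in range(1, gens + 1))

# Path 2: the nested events A_n = {X_n = 0} satisfy A_1 ⊂ A_2 ⊂ ...;
# estimate the limit of P(A_n) by the largest available term, P(A_gens).
path2 = (extinct_at != -1).mean()

print(path1, path2)   # both estimate the extinction probability (about 0.417)
```

As expected, the sum over the disjoint first-extinction events and the limit of the nested events give the same answer, which for this offspring distribution is the solution of $q = e^{1.5(q-1)}$, approximately $0.417$.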
Essentially, we have arrived at the conclusion that the only sensible way we can define the probability of extinction is to agree that probability is countably additive and then carry out the calculations above. Without countable additivity, there does not seem to be any way of defining the probability of extinction in general.
The above argument in itself is intended to complete the motivation for having a probability triple $(\Omega, \mathcal{F}, P)$; the $\Omega$ is required to “link” random variables together and countable additivity is required in order to model real-world problems of interest. The following section goes further though by giving an example of when countable additivity does not hold.
A Uniform Distribution on the Natural Numbers
For argument’s sake, let’s try to define a “probability triple” $(\mathbb{N}, \mathcal{F}, P)$ corresponding to a uniform distribution on the natural numbers $\mathbb{N}$. The probability of drawing an even number should be one half, the probability of drawing an integer multiple of 3 should be one third, and so forth. Generalising this principle, it seems entirely reasonable to define $P(A)$ to be the limit, as $N \to \infty$, of the number of elements of $A$ less than $N$ divided by $N$ itself. Since this limit does not necessarily exist, we get around this by declaring $\mathcal{F}$ to be the set of all $A \subset \mathbb{N}$ for which this limit exists.
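A finite-$N$ approximation of this limit (the helper name `density_upto` is my own) recovers the intended values on simple sets:

```python
import math

def density_upto(pred, N):
    """Fraction of {1, ..., N} whose elements satisfy pred: a finite-N
    approximation of P(A) = lim_{N -> oo} |A ∩ {1, ..., N}| / N."""
    return sum(1 for n in range(1, N + 1) if pred(n)) / N

N = 100_000
print(density_upto(lambda n: n % 2 == 0, N))             # 0.5     (even numbers)
print(density_upto(lambda n: n % 3 == 0, N))             # ~ 1/3   (multiples of 3)
print(density_upto(lambda n: math.isqrt(n)**2 == n, N))  # ~ 0     (perfect squares)
```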
It can be shown directly that $\mathcal{F}$ is not a $\sigma$-algebra. In fact, it is not even an algebra, because it is relatively straightforward to construct two subsets of $\mathbb{N}$, call them $A$ and $B$, which belong to $\mathcal{F}$ but whose intersection does not; that is, there exist $A, B \in \mathcal{F}$ for which $A \cap B \notin \mathcal{F}$.
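One standard construction (the note does not specify the sets; this choice and the helper names are mine) takes $A$ to be the even numbers and $B$ to be the set that, within each dyadic block $[2^m, 2^{m+1})$, keeps the numbers whose parity matches $m$. Both $A$ and $B$ have density $\tfrac{1}{2}$, but the partial densities of $A \cap B$ oscillate forever:

```python
def in_B(n):
    # Within the dyadic block [2^m, 2^(m+1)), B keeps the numbers whose
    # parity matches that of the block index m.  Each block is half full,
    # so the partial densities of B converge to 1/2.
    m = n.bit_length() - 1
    return n % 2 == m % 2

def density_upto(pred, N):
    # Finite-N approximation of P(A) = lim |A ∩ {1, ..., N}| / N.
    return sum(1 for n in range(1, N + 1) if pred(n)) / N

A = lambda n: n % 2 == 0          # the even numbers: density 1/2
AB = lambda n: A(n) and in_B(n)   # the intersection A ∩ B

print(density_upto(A, 2**16 - 1))     # ~ 1/2
print(density_upto(in_B, 2**16 - 1))  # ~ 1/2
# The partial densities of A ∩ B oscillate between ~1/3 (just before an
# even-indexed block starts) and ~1/6 (just after it is diluted), so the
# defining limit does not exist: A, B ∈ F but A ∩ B ∉ F.
print(density_upto(AB, 2**15 - 1))    # ~ 1/3
print(density_upto(AB, 2**16 - 1))    # ~ 1/6
```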
Does $P$ behave nicely? Let $A_n = \{1, \dots, n\}$ and observe that $A_1 \subset A_2 \subset \cdots$ and $\bigcup_{n=1}^\infty A_n = \mathbb{N}$. We know from the earlier discussion about extinction that it is very natural to expect that $\lim_{n \to \infty} P(A_n) = P\left(\bigcup_{n=1}^\infty A_n\right)$. However, this is not the case here; since each of the $A_n$ contains only a finite number of elements, it follows that $P(A_n) = 0$ for every $n$. Therefore, the limit on the left-hand side is zero whereas the right-hand side is equal to one.
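The failure is easy to see concretely under the finite-$N$ approximation of the density (the helper name `density_upto` is again my own):

```python
def density_upto(pred, N):
    # Finite-N approximation of P(A) = lim |A ∩ {1, ..., N}| / N.
    return sum(1 for n in range(1, N + 1) if pred(n)) / N

N = 1_000_000
# For each fixed k, A_k = {1, ..., k} has partial density k/N -> 0 ...
for k in (10, 100, 1000):
    print(density_upto(lambda n: n <= k, N))   # ~ 0
# ... yet the union of all the A_k is N itself, which has density 1.
print(density_upto(lambda n: True, N))         # 1.0
```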
- Countable additivity enables us to give meaning to probabilities of real-world events of interest to us (such as probability of extinction).
- Without countable additivity, even very basic results such as $\lim_{n \to \infty} P(A_n) = P\left(\bigcup_{n=1}^\infty A_n\right)$ for nested sets $A_1 \subset A_2 \subset \cdots$ need not hold. In other words, there are not enough constraints on $P$ for a comprehensive theory to be developed if we drop the requirement of $P$ being countably additive over a $\sigma$-algebra $\mathcal{F}$.