Measure-theoretic Formulation of the Likelihood Function

June 26, 2013

Let P_\theta be a family of probability measures indexed by \theta \in \Theta. For notational convenience, assume 0 \in \Theta, so that P_0 is one of the probability measures in the family. This short note sketches why L(\theta) = E_0\left[ \frac{dP_\theta}{dP_0} \mid \mathcal X \right] is the likelihood function, where the \sigma-algebra \mathcal X describes the possible observations and E_0 denotes expectation with respect to the measure P_0.

First, consider the special case where the probability measure can be described by a probability density function (pdf) p(x,y;\theta). Here, x is a real-valued random variable that we have observed, y is a real-valued unobserved random variable, and \theta indexes the family of joint pdfs. The likelihood function when there is a “hidden variable” y is usually defined as \theta \mapsto p(x;\theta) where p(x;\theta) is the marginalised pdf obtained by integrating out the unknown variable y, that is, p(x;\theta) = \int_{-\infty}^{\infty} p(x,y;\theta)\,dy. Does this likelihood function equal L(\theta) when \mathcal X is the \sigma-algebra generated by the random variable x?

The correspondence between the measure and the pdf is: P_\theta(A) = \int_A p(x,y;\theta)\,dx\,dy for any (measurable) set A \subset \mathbb{R}^2; this is the probability that (x,y) lies in A. In this case, the Radon-Nikodym derivative \frac{dP_\theta}{dP_0} is simply the ratio \frac{p(x,y;\theta)}{p(x,y;0)}. Conditioning on x under the distribution p(x,y;0) means integrating against the conditional density p(y \mid x; 0) = p(x,y;0)/p(x;0), hence E_0\left[ \frac{p(x,y;\theta)}{p(x,y;0)} \mid x \right] = \int_{-\infty}^{\infty} \frac{p(x,y;\theta)}{p(x,y;0)} \, \frac{p(x,y;0)}{p(x;0)}\, dy = \frac{p(x;\theta)}{p(x;0)}. The denominator p(x;0) does not depend on \theta, and a likelihood function is only ever defined up to such \theta-independent factors, so this verifies in this special case that L(\theta) is indeed the likelihood function.
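As a sanity check, the calculation can be replayed numerically on a toy discrete model. In the sketch below the two joint pmfs are arbitrary illustrative choices (not from the note); it confirms that the conditional expectation of the density ratio given x equals p(x;\theta)/p(x;0):

```python
# Toy check: E_0[ p(x,y;theta)/p(x,y;0) | x ] = p(x;theta)/p(x;0)
# for a discrete joint distribution with x, y in {0, 1}.
# The two joint pmfs below are arbitrary illustrative choices.

p0 = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.4, (1, 1): 0.2}  # p(x,y;0)
pt = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}  # p(x,y;theta)

def marginal(p, x):
    # p(x) = sum over y of p(x,y)
    return sum(p[(x, y)] for y in (0, 1))

def cond_exp_ratio(x):
    # E_0[ pt/p0 | x ] = sum over y of (pt(x,y)/p0(x,y)) * p0(y|x)
    return sum((pt[(x, y)] / p0[(x, y)]) * (p0[(x, y)] / marginal(p0, x))
               for y in (0, 1))

for x in (0, 1):
    lhs = cond_exp_ratio(x)
    rhs = marginal(pt, x) / marginal(p0, x)  # p(x;theta) / p(x;0)
    assert abs(lhs - rhs) < 1e-12
```

The same computation goes through for any choice of strictly positive joint pmfs, since the p(x,y;0) factors cancel inside the sum.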

The above verification does not make L(\theta) = E_0\left[ \frac{dP_\theta}{dP_0} \mid \mathcal X \right] any less mysterious. Instead, it can be understood directly as follows. From the definition of conditional expectation, it is straightforward to verify that L(\theta) = \left. \frac{dP_\theta}{dP_0}\right|_{\mathcal X}, meaning that for any \mathcal X-measurable set A, P_\theta(A) = \int_A \left. \frac{dP_\theta}{dP_0}\right|_{\mathcal X}\,dP_0.

The likelihood function is essentially asking for the “probability” of observing what we actually observed; precisely, we want to take the set A to be our actual observation and see how P_\theta(A) varies with \theta. This works directly if P_\theta(A) > 0, but otherwise it is necessary to look at how P_\theta(A) varies when A is an arbitrarily small but non-negligible set centred on the true observation. (If you like, it is impossible to make a perfect observation correct to infinitely many significant figures; instead, an observation of x usually means we know, for example, that 1.0 \leq x \leq 1.1, hence A can be chosen to be the event 1.0 \leq x \leq 1.1 rather than the negligible event x = 1.05.) It follows from the integral representation P_\theta(A) = \int_A \left. \frac{dP_\theta}{dP_0}\right|_{\mathcal X}\,dP_0 that \left. \frac{dP_\theta}{dP_0}\right|_{\mathcal X} describes the behaviour of P_\theta(A) as A shrinks down from a range of outcomes to a single outcome. Importantly, the subscript \mathcal X means L(\theta) = \left. \frac{dP_\theta}{dP_0}\right|_{\mathcal X} is \mathcal X-measurable; therefore, L(\theta) depends only on what is observed and not on any hidden variables.
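The shrinking-set picture can also be illustrated numerically. Taking P_\theta to be the normal distribution N(\theta, 1) (my own illustrative choice, not from the note) and A a small interval around the observation, the ratio P_\theta(A)/P_0(A) approaches the ratio of the densities at the observation, i.e. the likelihood ratio:

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def interval_prob(theta, lo, hi):
    """P_theta([lo, hi]) when P_theta is the N(theta, 1) distribution."""
    return Phi(hi - theta) - Phi(lo - theta)

def pdf(theta, x):
    """Density of N(theta, 1) at x."""
    return math.exp(-0.5 * (x - theta) ** 2) / math.sqrt(2.0 * math.pi)

x_obs, theta = 0.3, 1.0          # observation and parameter (illustrative values)
for eps in (0.1, 0.01, 0.001):   # shrinking interval A centred on the observation
    ratio = (interval_prob(theta, x_obs - eps, x_obs + eps)
             / interval_prob(0.0, x_obs - eps, x_obs + eps))
    print(eps, ratio)

# As eps -> 0 the ratio tends to pdf(theta, x_obs) / pdf(0, x_obs),
# the likelihood ratio at the observation.
assert abs(ratio - pdf(theta, x_obs) / pdf(0.0, x_obs)) < 1e-4
```

The event x = x_obs is negligible under both measures, yet the ratio of the interval probabilities has a well-defined limit, which is exactly what \left. \frac{dP_\theta}{dP_0}\right|_{\mathcal X} records.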

While the above is not a careful exposition, it will hopefully point the interested reader in a sensible direction.

Measure-theoretic Probability: Still not convinced?

June 22, 2010

This is a sequel to the introductory article on measure-theoretic probability and accords with my belief that learning should not be one-pass, by which I mean loosely that it is more efficient to learn the basics first at a rough level and then come back to fill in the details soon afterwards. It endeavours to address the questions:

  • Why a probability triple (\Omega,\mathfrak{F},\mathbb{P}) at all?
  • What if \mathfrak F is not a \sigma-algebra?
  • Why is it important that \mathbb P is countably additive?

In addressing these questions, it also addresses the question:

  • Why can’t a uniform probability be defined on the natural numbers \{0,1,2,\cdots\}?

Consider a real-life process, such as the population X_k of a family of rabbits at each generation k. This gives us a countable family of random variables \{X_1,X_2,\cdots\}. (Recall that countable means countably infinite; with only a finite number of random variables, matters would be simpler.) We can safely assume that if X_k = 0 for some k then the population has died out, that is, X_{k+1} = X_{k+2} = \cdots = 0.

What is the probability that the population dies out?

The key question here is really two implicit questions: how to define this probability of extinction, and how to subsequently calculate it. Intuitively, we want the probability that there exists an m such that X_m = 0. When trying to formulate this mathematically, we may think to split it up into pieces such as “does X_1 = 0?”, “does X_2 = 0?” and so forth. Because these events are not disjoint (if X_1 = 0 then we are guaranteed that X_2 = 0), we realise that we need some way to account for this “connection” between the random variables. Is there any better way of accounting for it than declaring the “full” outcome to be \omega \in \Omega and interpreting each X_k as a function of \omega? (Only by endeavouring to think of an alternative will the full merit of having an \Omega become clear.)

There are (at least) two paths we could take to define the probability of the population dying out. The first was hinted at already; segment \Omega into disjoint sets then add up the probabilities of each of the relevant sets. Precisely, the sets F_1 = \{\omega \in \Omega \mid X_1(\omega) = 0\}, F_2 = \{\omega \in \Omega \mid X_1(\omega) \neq 0, X_2(\omega) = 0\}, F_3 = \{\omega \in \Omega \mid X_2(\omega) \neq 0, X_3(\omega) = 0\} and so forth are disjoint, and we are tempted to sum the probabilities of each one occurring to arrive at the probability of extinction. This is an infinite summation though, so unless we believe that probability is countably additive (recall that this means \mathbb{P}(\cup_{i=1}^\infty F_i) = \sum_{i=1}^\infty \mathbb{P}(F_i) for disjoint sets F_k) then this avenue is not available.

Another path is to recognise that the sets B_k = \{\omega \in \Omega \mid X_k(\omega) = 0\} are like Russian dolls, one inside the other, namely B_1 \subset B_2 \subset B_3 \subset \cdots. This means that their probabilities, \mathbb{P}(B_k), form a non-decreasing sequence, and moreover, we are tempted to believe that \lim_{k \rightarrow \infty} \mathbb{P}(B_k) should equal the probability of extinction. (The limit exists because the \mathbb{P}(B_k) form a bounded and monotonic sequence.)

In fact, these two paths are equivalent. If \mathbb P is countably additive and the B_k are nested as above, then \mathbb{P}(\cup_{k=1}^\infty B_k) = \lim_{k \rightarrow \infty} \mathbb{P}(B_k). The converse is true too: if, for any sequence of nested sets, the probability and the limit operations can be interchanged (which is how the statement \mathbb{P}(\cup_{k=1}^\infty B_k) = \lim_{k \rightarrow \infty} \mathbb{P}(B_k) should be interpreted), then \mathbb P is countably additive.
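These calculations can be tried out on a simple branching-process model of the rabbit population. In the Monte Carlo sketch below, the offspring distribution (each individual independently leaves 0, 1 or 2 descendants with probabilities 1/4, 1/4 and 1/2) is my own illustrative choice, picked because the extinction probability is then known exactly: it is the smaller root of q = \frac{1}{4} + \frac{1}{4} q + \frac{1}{2} q^2, namely q = 1/2. The estimates of \mathbb{P}(B_k) are non-decreasing in k, as the nesting demands, and approach this limit:

```python
import random

random.seed(0)

def offspring():
    # 0, 1 or 2 children with probabilities 1/4, 1/4, 1/2 (illustrative choice).
    u = random.random()
    return 0 if u < 0.25 else (1 if u < 0.5 else 2)

def extinction_generation(max_gen=30, cap=100):
    """First generation k with X_k = 0, or None if not extinct by max_gen.

    Once the population reaches `cap` individuals, extinction within the
    horizon is overwhelmingly unlikely, so the trial is treated as surviving.
    """
    pop = 1
    for k in range(1, max_gen + 1):
        pop = sum(offspring() for _ in range(pop))
        if pop == 0:
            return k
        if pop >= cap:
            return None
    return None

trials = 4000
gens = [extinction_generation() for _ in range(trials)]

# P(B_k) = P(X_k = 0), estimated as the fraction of trials extinct by generation k.
p_Bk = [sum(1 for g in gens if g is not None and g <= k) / trials
        for k in range(1, 31)]

assert all(a <= b for a, b in zip(p_Bk, p_Bk[1:]))  # nested B_k => monotone estimates
assert abs(p_Bk[-1] - 0.5) < 0.05                   # limit is the extinction prob 1/2
```

Both paths appear in the code: p_Bk[-1] is \lim_k \mathbb{P}(B_k), while the same number is obtained by summing the fractions of trials that first die out at each generation, which is the disjoint-sets computation.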

Essentially, we have arrived at the conclusion that the only sensible way we can define the probability of extinction is to agree that probability is countably additive and then carry out the calculations above. Without countable additivity, there does not seem to be any way of defining the probability of extinction in general.

The above argument in itself is intended to complete the motivation for having a probability triple; the \Omega is required to “link” random variables together and countable additivity is required in order to model real-world problems of interest. The following section goes further though by giving an example of when countable additivity does not hold.

A Uniform Distribution on the Natural Numbers

For argument’s sake, let’s try to define a “probability triple” (\Omega,\mathfrak{F},\mathbb{P}) corresponding to a uniform distribution on the natural numbers \Omega = \{0,1,2,\cdots\}. The probability of drawing an even number should be one half, the probability of drawing an integer multiple of 3 should be one third, and so forth. Generalising this principle, it seems entirely reasonable to define \mathbb{P}(F) to be the limit, as N \rightarrow \infty, of the number of elements of F less than N divided by N itself. Since this limit does not necessarily exist, we sidestep the difficulty by declaring \mathfrak F to be the set of all F \subset \Omega for which the limit exists.
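This limiting-frequency definition is easy to sketch numerically (the helper density below is my own name for the computation, not standard terminology): the partial densities of the even numbers and of the multiples of 3 settle down to 1/2 and 1/3, while any finite set has limiting density 0.

```python
def density(pred, N):
    """Fraction of n in {0, ..., N-1} satisfying pred: the partial density at N."""
    return sum(1 for n in range(N) if pred(n)) / N

N = 100_000
# Evens have density 1/2 and multiples of 3 have density 1/3, as required.
assert abs(density(lambda n: n % 2 == 0, N) - 1 / 2) < 1e-4
assert abs(density(lambda n: n % 3 == 0, N) - 1 / 3) < 1e-4
# Any finite set, e.g. {0, ..., 99}, has partial density 100/N, which tends to 0.
assert density(lambda n: n < 100, N) == 100 / N
```

The last line already foreshadows the trouble below: every finite set gets probability zero under this definition.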

It can be shown directly that \mathfrak F is not a \sigma-algebra.  In fact, it is not even an algebra because it is relatively straightforward to construct two subsets of \Omega, call them A and B, which belong to \mathfrak F but whose intersection does not, that is, there exist A, B \in \mathfrak F for which A \cap B \not\in \mathfrak F.
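One such construction (my own choice of example, not necessarily the one the author had in mind): take A to be the even numbers, and B the set containing the evens on dyadic blocks [2^k, 2^{k+1}) with k even and the odds on blocks with k odd. Both A and B have density 1/2, but A \cap B consists of the evens on every second block and is empty on the rest, so its partial densities oscillate (roughly between 1/6 and 1/3) and never converge:

```python
def in_A(n):
    # A: the even numbers.
    return n % 2 == 0

def in_B(n):
    # B: on the dyadic block [2^k, 2^(k+1)), keep numbers whose parity matches k
    # (0 is lumped into block 0 for convenience).
    k = n.bit_length() - 1 if n > 0 else 0
    return n % 2 == k % 2

def partial_density(pred, N):
    return sum(1 for n in range(N) if pred(n)) / N

# A and B both have limiting density 1/2 ...
assert abs(partial_density(in_A, 2 ** 16) - 0.5) < 1e-3
assert abs(partial_density(in_B, 2 ** 16) - 0.5) < 1e-3

# ... but the partial densities of the intersection oscillate and have no limit:
# close to 1/3 at the end of an even-indexed block, close to 1/6 one block later.
d_hi = partial_density(lambda n: in_A(n) and in_B(n), 2 ** 15)
d_lo = partial_density(lambda n: in_A(n) and in_B(n), 2 ** 16)
assert d_hi - d_lo > 0.1
```

During an even-indexed block the intersection picks up half the integers, and during the following odd-indexed block it picks up none, so the running fraction is forever pushed up and then diluted.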

Does \mathbb{P} behave nicely? Let B_k = \{0,\cdots,k\} and observe that B_1 \subset B_2 \subset \cdots and \Omega = \cup_{k=1}^{\infty} B_k. We know from the earlier discussion about extinction that it is very natural to expect \lim_{k \rightarrow \infty} \mathbb{P}(B_k) = \mathbb{P}(\Omega). However, this fails here: since each B_k contains only a finite number of elements, \mathbb{P}(B_k) = 0, so the limit on the left-hand side is zero whereas the right-hand side is equal to one.

In summary:

  • Countable additivity enables us to give meaning to probabilities of real-world events of interest to us (such as probability of extinction).
  • Without countable additivity, even very basic results such as \mathbb{P}(\cup_{k=1}^\infty B_k) = \lim_{k \rightarrow \infty} \mathbb{P}(B_k) for nested B_k need not hold. In other words, if we drop the requirement that \mathbb{P} be countably additive over a \sigma-algebra \mathfrak F, there are not enough constraints on \mathbb P for a comprehensive theory to be developed.