## Some Comments on the Situation of a Random Variable being Measurable with respect to the Sigma-Algebra Generated by Another Random Variable

If $Y$ is a $\sigma(X)$-measurable random variable then there exists a Borel-measurable function $g \colon \mathbb{R} \to \mathbb{R}$ such that $Y = g(X)$. The standard proof of this fact leaves several questions unanswered. This note explains what goes wrong when attempting a “direct” proof. It also explains how the standard proof overcomes this difficulty.

First some background. It is a standard result that $\sigma(X) = \{X^{-1}(B) : B \in \mathcal{B}\}$, where $\mathcal{B}$ is the set of all Borel subsets of the real line $\mathbb{R}$. Thus, if $F \in \sigma(X)$ then there exists a $B \in \mathcal{B}$ such that $F = X^{-1}(B)$. Indeed, one inclusion follows from the fact that since $X$ is $\sigma(X)$-measurable, the inverse image of any Borel set must lie in $\sigma(X)$; for the other, one checks that $\{X^{-1}(B) : B \in \mathcal{B}\}$ is itself a $\sigma$-algebra with respect to which $X$ is measurable.

A “direct” proof would endeavour to construct $g$ pointwise. The basic intuition (and not difficult to prove) is that $Y$ must be constant on sets of the form $X^{-1}(\{x\})$ for $x \in \mathbb{R}$. This suggests defining $g$ by $g(x) = \sup \{Y(\omega) : \omega \in X^{-1}(\{x\})\}$. Here, the supremum is used to go from the set $\{Y(\omega) : \omega \in X^{-1}(\{x\})\}$ to what we believe to be its only element, or to $-\infty$ if $X^{-1}(\{x\})$ is empty. Unfortunately, this intuitive approach fails because the range of $X$ need not be Borel. This causes problems because $\{x : g(x) > -\infty\}$ is the range of $X$ and must be Borel if $g$ is to be Borel-measurable.

Technically, we need a way of **extending** the definition of $g$ from the range of $X$ to a Borel set containing the range of $X$, and moreover, the extension must result in a measurable function.

Constructing an appropriate extension requires knowing more about $Y$ than simply $Y(\omega)$ for each individual $\omega$. That is, we need a canonical representation of $Y$. Before we get to this though, let us look at two special cases.

Consider first the case when $Y = \chi_F$ for some measurable set $F \in \sigma(X)$, where $\chi_F$ is the indicator function of $F$. If $Y$ is $\sigma(X)$-measurable then $F$ must lie in $\sigma(X)$ and hence there exists a Borel set $B$ such that $F = X^{-1}(B)$. Let $g = \chi_B$. (It is Borel-measurable because $B$ is Borel.) To show $Y = g(X)$, let $\omega \in \Omega$ be arbitrary. (Recall, random variables are actually functions, conventionally indexed by $\omega$.) If $\omega \in F$ then $Y(\omega) = 1$ and $X(\omega) \in B$, while $g(X(\omega)) = 1$ because $X(\omega) \in B$. Otherwise, if $\omega \notin F$ then analogous reasoning shows both $Y(\omega)$ and $g(X(\omega))$ equal zero.
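
The indicator case can be played out on a toy finite sample space. The particular outcomes, values of $X$, and choice of $B$ below are all my own invented example, not anything from the proof itself:

```python
# Toy illustration of the indicator case on a finite sample space.
Omega = ["w1", "w2", "w3", "w4"]
X = {"w1": 0.5, "w2": 0.5, "w3": 2.0, "w4": 3.0}   # X as a function on Omega

# F is a set in sigma(X): the inverse image under X of the Borel set B.
B = {0.5, 2.0}
F = {w for w in Omega if X[w] in B}                 # F = X^{-1}(B)

Y = {w: 1 if w in F else 0 for w in Omega}          # Y = indicator of F

def g(x):
    return 1 if x in B else 0                       # g = indicator of B

# Check Y = g(X) pointwise.
assert all(Y[w] == g(X[w]) for w in Omega)

# B is not unique: adding a point outside the range of X changes nothing.
B2 = B | {7.0}
assert all(Y[w] == (1 if X[w] in B2 else 0) for w in Omega)
```

The last check foreshadows the next paragraph: what $g$ does off the range of $X$ is entirely unconstrained by the requirement $Y = g(X)$.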

How did the above construction avoid the problem of the range of $X$ not necessarily being Borel? The subtlety is that the choice of $B$ need not be unique, and in particular, $B$ may contain values which lie outside the range of $X$. Whereas a choice such as $g(x) = \sup \{Y(\omega) : \omega \in X^{-1}(\{x\})\}$ assigns a single value (in this case, $-\infty$) to values of $x$ not lying in the range of $X$, the choice $g = \chi_B$ can assign either $0$ or $1$ to values of $x$ not in the range of $X$, and by doing so, it can make $g$ Borel-measurable.

Consider next the case when $Y = \sum_{i=1}^n a_i \chi_{F_i}$ with the $F_i \in \sigma(X)$ disjoint. As above, we can find Borel sets $B_i$ such that $F_i = X^{-1}(B_i)$ for $i = 1, \dots, n$, and moreover, $g = \sum_{i=1}^n a_i \chi_{B_i}$ gives a suitable $g$. Here, it can be readily shown that if $x$ is in the range of $X$ then $x$ can lie in at most one $B_i$. Thus, regardless of how the $B_i$ are chosen, $g(x)$ will take on the correct value whenever $x$ lies in the range of $X$. Different choices of the $B_i$ can result in different extensions of $g$, but each such choice is Borel-measurable, as required.

The above depends crucially on having only *finitely* many indicator functions. A frequently used principle is that an arbitrary measurable function can be approximated by a sequence of bounded functions, with each function being a sum of a finite number of indicator functions (i.e., a simple function). Therefore, the general case can be handled by using a sequence of simple random variables $Y_n$ converging pointwise to $Y$. Each $Y_n$ results in a $g_n$ obtained by replacing the $\chi_{F_i}$ by $\chi_{B_i}$, as was done in the paragraph above. For $x$ in the range of $X$, it turns out as one would expect: the $g_n(x)$ converge, and $\lim_{n \to \infty} g_n(x)$ gives the correct value for $g$ at $x$. For $x$ not in the range of $X$, there is no reason to expect the $g_n(x)$ to converge: the choices of the $B_i$ at the $n$th and $(n+1)$th steps are not coordinated in any way when it comes to which values to include from the complement of the range of $X$. Intuitively though, we could hope that each $B_i$ includes a “minimal” extension that is required to make $g_n$ measurable and that convergence takes place on this “minimal” extension. Thus, by choosing $g(x)$ to be zero whenever $g_n(x)$ does not converge, and choosing $g(x)$ to be the limit of $g_n(x)$ otherwise, we may hope that we have constructed a suitable $g$ despite how the $B_i$ at each step were chosen. Whether or not this intuition is correct, it can be shown mathematically that $g$ so defined is indeed the desired function. (See for example the short proof of Theorem 20.1 in the second edition of Billingsley’s book Probability and Measure.)
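
In symbols, the limiting construction sketched above reads as follows (a sketch only; the double indexing of the simple functions is my notation):

```latex
Y_n = \sum_i a_{n,i}\,\chi_{F_{n,i}}, \quad F_{n,i} = X^{-1}(B_{n,i}), \quad
g_n = \sum_i a_{n,i}\,\chi_{B_{n,i}}, \quad\text{so}\quad g_n(X(\omega)) = Y_n(\omega);
\qquad
g(x) =
\begin{cases}
  \lim_{n\to\infty} g_n(x), & \text{if the limit exists}, \\
  0, & \text{otherwise}.
\end{cases}
```

Since $\chi_{B}(X(\omega)) = \chi_{X^{-1}(B)}(\omega)$, we get $g_n(X(\omega)) = Y_n(\omega) \to Y(\omega)$, so the limit defining $g$ exists at every point of the range of $X$, and there $g(X) = Y$ as required.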

Finally, it is remarked that sometimes the monotone class theorem is used in the proof. Essentially, the idea is exactly the same: approximate $Y$ by a suitable sequence $Y_n$. The subtlety is that the monotone class theorem only requires us to work with indicator functions $\chi_F$ where $F$ is particularly nice (i.e., lies in a $\pi$-system generating the $\sigma$-algebra of interest). The price of this nicety is that $Y$ must be bounded. For the above problem, as Williams points out in his book Probability with Martingales, we can simply replace $Y$ by a bounded function of it, such as $\arctan Y$, to obtain a bounded random variable. On the other hand, there is nothing to be gained here by working with particularly nice $F$, hence Williams’ admission that there is no real need to use the monotone class theorem (see A3.2 of his book).

## Measure-theoretic Formulation of the Likelihood Function

Let $\{P_\theta : \theta \in \Theta\}$ be a family of probability measures indexed by $\theta \in \Theta$. For notational convenience, assume $P = P_{\theta_0}$ for some $\theta_0 \in \Theta$, so that $P$ is one of the probability measures in the family. This short note sketches why $\theta \mapsto \mathrm{E}\!\left[\frac{dP_\theta}{dP} \,\middle|\, \mathcal{G}\right]$ is the likelihood function, where the $\sigma$-algebra $\mathcal{G}$ describes the possible observations and $\mathrm{E}$ denotes expectation with respect to the measure $P$.

First, consider the special case where the probability measure can be described by a probability density function (pdf) $p_\theta(x, y)$. Here, $X$ is a real-valued random variable that we have observed, $Y$ is a real-valued unobserved random variable, and $\theta$ indexes the family of joint pdfs. The likelihood function when there is a “hidden variable” is usually defined as $L(\theta) = p_\theta(x)$, where $p_\theta(x)$ is the marginalised pdf obtained by integrating out the unknown variable $y$, that is, $p_\theta(x) = \int p_\theta(x, y)\, dy$. Does this likelihood function equal $\mathrm{E}\!\left[\frac{dP_\theta}{dP} \,\middle|\, \mathcal{G}\right]$ when $\mathcal{G} = \sigma(X)$ is the $\sigma$-algebra generated by the random variable $X$?

The correspondence between the measure $P_\theta$ and the pdf $p_\theta$ is: $P_\theta(A) = \int_A p_\theta(x, y)\, dx\, dy$ for any (measurable) set $A \subset \mathbb{R}^2$; this is the probability that the pair $(X, Y)$ lies in $A$. In this case, the Radon-Nikodym derivative $\frac{dP_\theta}{dP}$ is simply the ratio $\frac{p_\theta(x, y)}{p_{\theta_0}(x, y)}$. The conditional expectation with respect to $\sigma(X)$ under the distribution $P = P_{\theta_0}$ is
$$\mathrm{E}\!\left[\frac{dP_\theta}{dP} \,\middle|\, \sigma(X)\right] = \frac{\int p_\theta(x, y)\, dy}{\int p_{\theta_0}(x, y)\, dy} = \frac{p_\theta(x)}{p_{\theta_0}(x)},$$
which, as a function of $\theta$, is proportional to $p_\theta(x)$ (the denominator does not depend on $\theta$, and likelihood functions are only ever defined up to scale), verifying in this special case that $\mathrm{E}\!\left[\frac{dP_\theta}{dP} \,\middle|\, \mathcal{G}\right]$ is indeed the likelihood function.
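
The identity above can be checked numerically. The model below is my own choice for illustration: the hidden variable is $Y \sim N(\theta, 1)$ and the observation is $X = Y + N(0, 1)$ noise, so the joint pdf is $p_\theta(x, y) = \phi(y - \theta)\,\phi(x - y)$ and the marginal is $p_\theta(x) = N(x; \theta, \sqrt{2})$:

```python
import math

def phi(z, var=1.0):
    # Normal density with mean 0 and the given variance.
    return math.exp(-z * z / (2 * var)) / math.sqrt(2 * math.pi * var)

def joint(x, y, theta):
    # Hypothetical joint pdf p_theta(x, y) = phi(y - theta) * phi(x - y).
    return phi(y - theta) * phi(x - y)

def marginal(x, theta, lo=-20.0, hi=20.0, steps=4000):
    # Midpoint-rule approximation of p_theta(x) = \int p_theta(x, y) dy.
    h = (hi - lo) / steps
    return sum(joint(x, lo + (i + 0.5) * h, theta) for i in range(steps)) * h

theta0, theta, x = 0.0, 1.3, 0.7    # P = P_{theta0}; we observe X = x

# E[dP_theta/dP | X = x] computed as a ratio of marginals:
# \int [p_theta(x,y)/p(x,y)] p(y|x) dy = (\int p_theta(x,y) dy) / (\int p(x,y) dy).
cond_exp = marginal(x, theta) / marginal(x, theta0)

# Closed form: ratio of N(x; theta, sqrt(2)) densities (variance 1 + 1 = 2).
closed_form = phi(x - theta, 2.0) / phi(x - theta0, 2.0)

assert abs(cond_exp - closed_form) < 1e-6
```

The agreement between the grid integration and the closed form is just the displayed equation specialised to this Gaussian model.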

The above verification does not make $\mathrm{E}\!\left[\frac{dP_\theta}{dP} \,\middle|\, \mathcal{G}\right]$ any less mysterious. Instead, it can be understood directly as follows. From the definition of conditional expectation, it is straightforward to verify that $\mathrm{E}\!\left[\mathrm{E}\!\left[\frac{dP_\theta}{dP} \,\middle|\, \mathcal{G}\right] \chi_A\right] = P_\theta(A)$, meaning that for any $\mathcal{G}$-measurable set $A$, $\int_A \mathrm{E}\!\left[\frac{dP_\theta}{dP} \,\middle|\, \mathcal{G}\right] dP = P_\theta(A)$. The likelihood function is basically asking for the “probability” that we observed what we did, or precisely, we want to take the set $A$ to be our actual observation and see how $P_\theta(A)$ varies with $\theta$. This would work if $P(A) > 0$ but otherwise it is necessary to look at how $P_\theta(A)$ varies when $A$ is an arbitrarily small but non-negligible set centred on the true observation. (If you like, it is impossible to make a perfect observation correct to infinitely many significant figures; instead, an observation of $X = x$ usually means we know, for example, that $x - \epsilon < X < x + \epsilon$, hence $A$ can be chosen to be the event $\{x - \epsilon < X < x + \epsilon\}$ instead of the negligible event $\{X = x\}$.) It follows from the integral representation that $\mathrm{E}\!\left[\frac{dP_\theta}{dP} \,\middle|\, \mathcal{G}\right]$ describes the behaviour of $P_\theta(A)$ as $A$ shrinks down from a range of outcomes to a single outcome. Importantly, the conditioning on $\mathcal{G}$ means $\mathrm{E}\!\left[\frac{dP_\theta}{dP} \,\middle|\, \mathcal{G}\right]$ is $\mathcal{G}$-measurable, therefore, it depends only on what is observed and not on any other hidden variables.
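
Heuristically, the shrinking-set argument above can be summarised in one line (a sketch, not a precise statement; rigorous versions shrink $A$ along a suitable sequence and conclude for $P$-almost every $\omega$):

```latex
P_\theta(A) = \int_A \mathrm{E}\!\left[\frac{dP_\theta}{dP}\,\middle|\,\mathcal{G}\right] dP
\quad\Longrightarrow\quad
\frac{P_\theta(A)}{P(A)}
= \frac{1}{P(A)} \int_A \mathrm{E}\!\left[\frac{dP_\theta}{dP}\,\middle|\,\mathcal{G}\right] dP
\;\longrightarrow\;
\mathrm{E}\!\left[\frac{dP_\theta}{dP}\,\middle|\,\mathcal{G}\right](\omega).
```

That is, as the $\mathcal{G}$-measurable set $A$ shrinks down to the observed outcome $\omega$, the average of the conditional expectation over $A$ tends to its value at $\omega$.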

While the above is not a careful exposition, it will hopefully point the interested reader in a sensible direction.

## Measure-theoretic Probability: Still not convinced?

This is a sequel to the introductory article on measure-theoretic probability and accords with my belief that learning should not be one-pass, by which I mean loosely that it is more efficient to learn the basics first at a rough level and then come back to fill in the details soon afterwards. It endeavours to address the questions:

- Why a probability triple $(\Omega, \mathcal{F}, P)$ at all?
- What if $\mathcal{F}$ is not a $\sigma$-algebra?
- Why is it important that $P$ is countably additive?

In addressing these questions, it also addresses the question:

- Why can’t a uniform probability be defined on the natural numbers $\mathbb{N} = \{1, 2, 3, \dots\}$?

Consider a real-life process, such as the population $X_n$ of a family of rabbits at each generation $n = 1, 2, 3, \dots$. This gives us a countable family of random variables $X_1, X_2, X_3, \dots$ (Recall that countable means countably infinite; with only a finite number of random variables, matters would be simpler.) We can safely assume that if $X_n = 0$ for some $n$ then the population has died out, that is, $X_n = 0$ implies $X_{n+1} = X_{n+2} = \cdots = 0$.

*What is the probability that the population dies out?*

The key questions here, left implicit, are how to actually define and then subsequently calculate this probability of extinction. Intuitively, we want the probability that there exists an $n$ such that $X_n = 0$. When trying to formulate this mathematically, we may think to split this up into bits such as “does $X_1 = 0$?”, “does $X_2 = 0$?” and so forth. Because these events are not disjoint (if we know $X_1 = 0$ then we are guaranteed that $X_2 = 0$) we realise that we need some way to account for this “connection” between the random variables. Is there any better way of accounting for this “connection” other than by declaring the “full” outcome to be an element $\omega$ of some set $\Omega$ and interpreting each $X_n$ as a function $X_n(\omega)$ of $\omega$? (Only by endeavouring to think of an alternative will the full merit of having an $\Omega$ become clear.)

There are (at least) two paths we could take to define the probability of the population dying out. The first was hinted at already: segment $\Omega$ into disjoint sets then add up the probabilities of each of the relevant sets. Precisely, the sets $F_1 = \{X_1 = 0\}$, $F_2 = \{X_1 \neq 0, X_2 = 0\}$, $F_3 = \{X_1 \neq 0, X_2 \neq 0, X_3 = 0\}$ and so forth are disjoint, and we are tempted to sum the probabilities of each one occurring to arrive at the probability of extinction. This is an infinite summation though, so unless we *believe* that probability is countably additive (recall that this means $P(\bigcup_{i=1}^\infty F_i) = \sum_{i=1}^\infty P(F_i)$ for disjoint sets $F_1, F_2, \dots$) then this avenue is not available.

Another path is to recognise that the sets $G_n = \{X_n = 0\}$ are like Russian dolls, one inside the other, namely $G_1 \subset G_2 \subset G_3 \subset \cdots$. This means that their probabilities, $P(G_n)$, form a non-decreasing sequence, and moreover, we are tempted to believe that $\lim_{n \to \infty} P(G_n)$ should equal the probability of extinction. (The limit exists because the $P(G_n)$ form a bounded and monotonic sequence.)
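
The Russian-doll picture can be made concrete with a standard branching-process model. The offspring distribution below is my own choice for illustration: each rabbit independently has 0, 1 or 2 offspring with probabilities 1/4, 1/4 and 1/2, starting from one ancestor. A standard fact about such processes is that $P(G_n) = P(X_n = 0)$ is obtained by iterating the offspring probability generating function on 0:

```python
# Galton-Watson branching process: P(G_n) = f(f(...f(0)...)), n-fold,
# where f(s) = 1/4 + s/4 + s^2/2 is the offspring generating function
# (hypothetical offspring distribution, chosen for illustration).
def f(s):
    return 0.25 + 0.25 * s + 0.5 * s * s

probs = []          # probs[n-1] = P(G_n) = P(population dead by generation n)
q = 0.0
for n in range(60):
    q = f(q)
    probs.append(q)

# The nesting G_1 in G_2 in ... forces the probabilities to be non-decreasing.
assert all(a <= b for a, b in zip(probs, probs[1:]))

# The limit is the extinction probability: the smallest root of f(s) = s,
# which for this f is s = 1/2 (the other root is s = 1).
assert abs(probs[-1] - 0.5) < 1e-6
```

The monotone sequence $P(G_1) \le P(G_2) \le \cdots$ converging to the extinction probability is exactly the limit the paragraph above asks us to believe in.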

In fact, these paths are equivalent: if $P$ is countably additive and the $G_n$ are nested as above then $\lim_{n \to \infty} P(G_n) = P(\bigcup_{n=1}^\infty G_n)$, and the converse is true too; if, for any sequence of nested sets $G_1 \subset G_2 \subset \cdots$, the probability and the limit operations can be interchanged (which is how the statement $\lim_n P(G_n) = P(\lim_n G_n)$ should be interpreted, where $\lim_n G_n$ means $\bigcup_n G_n$), then a finitely additive $P$ is countably additive.
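
One direction of this equivalence is the usual disjointification trick; in sketch form:

```latex
F_1 = G_1, \quad F_i = G_i \setminus G_{i-1}\ (i \ge 2)
\quad\Longrightarrow\quad
G_n = \bigcup_{i=1}^{n} F_i, \quad
\bigcup_{n=1}^{\infty} G_n = \bigcup_{i=1}^{\infty} F_i \ \ \text{(the $F_i$ disjoint)},

P\Bigl(\bigcup_{n} G_n\Bigr)
= \sum_{i=1}^{\infty} P(F_i)
= \lim_{n\to\infty} \sum_{i=1}^{n} P(F_i)
= \lim_{n\to\infty} P(G_n).
```

The first equality is countable additivity, and the last uses finite additivity applied to $G_n = F_1 \cup \cdots \cup F_n$; running the argument backwards gives the converse.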

Essentially, we have arrived at the conclusion that the only sensible way we can define the probability of extinction is to agree that probability is countably additive and then carry out the calculations above. *Without countable additivity, there does not seem to be any way of defining the probability of extinction in general.*

The above argument in itself is intended to complete the motivation for having a probability triple $(\Omega, \mathcal{F}, P)$; the $\Omega$ is required to “link” random variables together and countable additivity is required in order to model real-world problems of interest. The following section goes further though by giving an example of when countable additivity does not hold.

### A Uniform Distribution on the Natural Numbers

For argument’s sake, let’s try to define a “probability triple” $(\mathbb{N}, \mathcal{F}, P)$ corresponding to a uniform distribution on the natural numbers $\mathbb{N} = \{1, 2, 3, \dots\}$. The probability of drawing an even number should be one half, the probability of drawing an integer multiple of 3 should be one third, and so forth. Generalising this principle, it seems entirely reasonable to define $P(A)$ to be the limit, as $n \to \infty$, of the number of elements of $A$ not exceeding $n$ divided by $n$ itself, that is, $P(A) = \lim_{n \to \infty} \frac{|A \cap \{1, \dots, n\}|}{n}$. Since this limit does not necessarily exist, we address this by declaring $\mathcal{F}$ to be the set of all $A \subset \mathbb{N}$ for which this limit exists.
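
This definition (the natural density) is easy to probe numerically by truncating the limit at a large $n$:

```python
# Natural density P(A) = lim_{n -> oo} |A intersect {1,...,n}| / n,
# approximated by truncating at a large n.
def density(pred, n=100000):
    return sum(1 for k in range(1, n + 1) if pred(k)) / n

d_even = density(lambda k: k % 2 == 0)      # even numbers: expect 1/2
d_mult3 = density(lambda k: k % 3 == 0)     # multiples of 3: expect 1/3
d_finite = density(lambda k: k <= 10)       # any finite set: expect 0

assert abs(d_even - 0.5) < 1e-3
assert abs(d_mult3 - 1 / 3) < 1e-3
assert d_finite < 1e-3
```

The last line (finite sets have density zero) is worth keeping in mind for the discussion of the sets $G_n = \{1, \dots, n\}$ below.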

It can be shown directly that $\mathcal{F}$ is not a $\sigma$-algebra. In fact, it is not even an algebra, because it is relatively straightforward to construct two subsets of $\mathbb{N}$, call them $A$ and $B$, which belong to $\mathcal{F}$ but whose intersection does not, that is, $A, B \in \mathcal{F}$ but $A \cap B \notin \mathcal{F}$.
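
One standard construction of such a pair (sketched here numerically rather than proved): take $A$ to be the even numbers, and let $B$ agree with the evens on blocks $[4^k, 2 \cdot 4^k)$ and with the odds on blocks $[2 \cdot 4^k, 4^{k+1})$. Both $A$ and $B$ have density $1/2$, but the partial densities of $A \cap B$ oscillate between roughly $1/6$ and $1/3$, so the limit defining $P(A \cap B)$ does not exist:

```python
# A = even numbers; B = evens on [4^k, 2*4^k), odds on [2*4^k, 4^(k+1)).
def in_A(n):
    return n % 2 == 0

def in_B(n):
    k = (n.bit_length() - 1) // 2        # largest k with 4^k <= n
    first_half = n < 2 * 4**k            # is n in [4^k, 2*4^k)?
    return n % 2 == 0 if first_half else n % 2 == 1

def partial_density(pred, n):
    return sum(1 for m in range(1, n + 1) if pred(m)) / n

# A and B each have density 1/2 (every block contributes exactly half).
assert abs(partial_density(in_A, 4**9) - 0.5) < 0.01
assert abs(partial_density(in_B, 4**9) - 0.5) < 0.01

# But A intersect B = evens in the blocks [4^k, 2*4^k), whose partial
# density is about 1/3 at n = 2*4^k and about 1/6 at n = 4^(k+1).
def in_both(m):
    return in_A(m) and in_B(m)

d_high = partial_density(in_both, 2 * 4**8)
d_low = partial_density(in_both, 4**9)
assert abs(d_high - 1 / 3) < 0.01
assert abs(d_low - 1 / 6) < 0.01
```

So $A, B \in \mathcal{F}$ while $A \cap B \notin \mathcal{F}$, confirming that $\mathcal{F}$ is not closed under finite intersections.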

Does $P$ behave nicely? Let $G_n = \{1, 2, \dots, n\}$ and observe that $G_1 \subset G_2 \subset G_3 \subset \cdots$ and $\bigcup_{n=1}^\infty G_n = \mathbb{N}$. We know from the earlier discussion about extinction that it is very natural to expect that $\lim_{n \to \infty} P(G_n) = P(\mathbb{N})$. However, this is not the case here; since each of the $G_n$ contains only a finite number of elements, it follows that $P(G_n) = 0$ for every $n$. Therefore, the limit on the left hand side is zero whereas the right hand side is equal to one.

In summary:

- Countable additivity enables us to give meaning to probabilities of real-world events of interest to us (such as the probability of extinction).
- Without countable additivity, even very basic results such as $\lim_{n \to \infty} P(G_n) = P(\bigcup_n G_n)$ for nested $G_n$ need not hold. In other words, there are not enough constraints on $P$ for a comprehensive theory to be developed if we drop the requirement of $P$ being countably additive over a $\sigma$-algebra $\mathcal{F}$.