A Note on Duality in Convex Optimisation

December 22, 2018

This note elaborates on the “perturbation interpretation” of duality in convex optimisation. Caveat lector: regularity conditions are ignored.

Recall that in convex analysis, a convex set is fully determined by its supporting hyperplanes. Therefore, any result about convex sets should be re-writable in terms of supporting hyperplanes. This is the basis of duality. A simple yet striking example is Fenchel duality, which is nicely illustrated on Wikipedia. Although duality relies on the Legendre transform, which I have written about here, the current note is self-contained: it explains the Legendre transform from first principles.

Let’s start with a classic example of duality. In its usual presentation, it is not at all clear how it relates to the aforementioned duality between convex sets and their supporting hyperplanes. Nevertheless, before presenting a general method for taking the dual of a problem, it is useful to have a concrete example at hand.

Consider minimising f(x) subject to g(x) \leq 0 where f, g are convex functions and x is a vector in \mathbb{R}^n. The dual of this problem is \sup_{\lambda \geq 0} \inf_x f(x) + \lambda g(x). A number of remarks are made below.

Remark 1.1: One way to “understand” this duality is by understanding when it is permissible to replace \sup \inf by \inf \sup. Such results go by the name of minimax theorems. Intuitively, if \phi(x,y) has a single critical point that is a saddle point (with the correct orientation) then \sup \inf can be replaced by \inf \sup. (And it is always true that \sup_y \inf_x \phi(x,y) \leq \inf_x \sup_y \phi(x,y).) If we assume \sup_{\lambda \geq 0} \inf_x f(x) + \lambda g(x) equals \inf_x \sup_{\lambda \geq 0} f(x) + \lambda g(x) then it becomes straightforward to verify that the dual gives the correct answer. Indeed, \sup_{\lambda \geq 0} \lambda g(x) equals 0 if g(x) \leq 0 and equals \infty if g(x) > 0. Therefore, \sup_{\lambda \geq 0} f(x) + \lambda g(x) equals f(x) if g(x) \leq 0 and equals \infty otherwise. Therefore, the unconstrained minimisation of \sup_{\lambda \geq 0} f(x) + \lambda g(x) is equivalent to the constrained minimisation of f(x) subject to g(x) \leq 0. However, this is by no means the end of the story. It is not even the beginning!
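
The always-true inequality \sup_y \inf_x \phi(x,y) \leq \inf_x \sup_y \phi(x,y) is easy to check numerically. The following throwaway sketch in Python (my own illustration, with \phi represented as a matrix over finite grids of x and y values) verifies it on random instances.

    import numpy as np

    # Weak duality holds for *any* function, so random matrices suffice:
    # rows index x, columns index y.
    rng = np.random.default_rng(0)
    for _ in range(1000):
        phi = rng.normal(size=(8, 8))
        sup_inf = phi.min(axis=0).max()  # sup over y of inf over x
        inf_sup = phi.max(axis=1).min()  # inf over x of sup over y
        assert sup_inf <= inf_sup + 1e-12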

Remark 1.2: The dual appears to relate to the method of Lagrange multipliers. (See here for the physical significance of the Lagrange multiplier.) This makes sense because there are only two cases. Either the global minimum of f(x) occurs at a point x^\star which satisfies the constraint, in which case the problem reduces to an unconstrained optimisation problem, or the optimum feasible solution x^\star satisfies the equality constraint g(x^\star) = 0, thereby suggesting a Lagrange multiplier approach. Again though, this is not the start of the story, although the following remarks do elaborate in this direction.

Remark 1.3: If the global minimum of f(x) occurs at a point x^\star which satisfies the constraint g(x^\star) \leq 0 then we can check what the solution to the dual \sup_{\lambda \geq 0} \inf_x f(x) + \lambda g(x) looks like, as follows. Let f^\star = f(x^\star). If \lambda = 0 then \inf_x f(x) + \lambda g(x) = f^\star. If \lambda > 0 then \inf_x f(x) + \lambda g(x) \leq f(x^\star) + \lambda g(x^\star) \leq f^\star, the second inequality holding because g(x^\star) \leq 0. Therefore, the largest value of \inf_x f(x) + \lambda g(x) occurs when \lambda = 0, showing that the dual \sup_{\lambda \geq 0} \inf_x f(x) + \lambda g(x) reduces to the correct solution \inf_x f(x) in this case.

Remark 1.4: If the global minimum of f(x) is infeasible (it does not satisfy the constraint g(x) \leq 0), we can reason as follows. The function f(x) + \lambda g(x) is convex. Provided f,g are sufficiently nice (e.g., strictly convex and differentiable), and f is bounded from below, the minimum of f(x) + \lambda g(x) occurs when \nabla f(x) + \lambda \nabla g(x) = 0. We recognise this as the same condition occurring in the method of Lagrange multipliers for solving the constrained problem of minimising f(x) subject to g(x)=0. This Lagrange multiplier interpretation means that, for a given \lambda, solving \nabla f(x) + \lambda \nabla g(x) = 0 gives an x which is the minimum of f(x) subject to g(x) = c for some c and not necessarily for c = 0. Specifically, if (\lambda', x') satisfies \nabla f(x') + \lambda' \nabla g(x') = 0 then x=x' minimises f(x) subject to g(x) = g(x'). For reference, denote by (\lambda^\star, x^\star) the optimal solution, that is, g(x^\star) = 0 and \nabla f(x^\star) + \lambda^\star \nabla g(x^\star) = 0. Treating x' as a function of \lambda', consider the values of f(x') + \lambda' g(x') compared with f(x^\star) + \lambda^\star g(x^\star) = f(x^\star). We want to show f(x') + \lambda' g(x') \leq f(x^\star) so that the dual problem finds the correct solution. Now, f(x') + \lambda' g(x') = \inf_x f(x) + \lambda' g(x) \leq f(x^\star) + \lambda' g(x^\star) = f(x^\star), as required.

Remark 1.5: A benefit of the dual problem is that it has replaced the “complicated” constraint g(x) \leq 0 by the “simple” constraint \lambda \geq 0.
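
Before moving on, a quick numerical sanity check of this primal/dual pair may be helpful. The toy instance below (f(x) = (x-1)^2 and g(x) = x, so the constraint is x \leq 0) and the crude grid search over \lambda are my own choices; by hand, the optimum is x^\star = 0 with value 1, attained in the dual at \lambda = 2.

    import numpy as np
    from scipy.optimize import minimize_scalar

    f = lambda x: (x - 1.0) ** 2   # objective
    g = lambda x: x                # constraint g(x) <= 0

    # Primal: minimise f over the feasible region x <= 0.
    primal = minimize_scalar(f, bounds=(-10.0, 0.0), method="bounded").fun

    # Dual: sup_{lam >= 0} inf_x f(x) + lam*g(x), via a grid over lam.
    def inner(lam):
        return minimize_scalar(lambda x: f(x) + lam * g(x),
                               bounds=(-10.0, 10.0), method="bounded").fun

    dual = max(inner(lam) for lam in np.linspace(0.0, 5.0, 501))

    print(primal, dual)  # both approximately 1.0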

While we have linked the dual problem with Lagrange multipliers to explain what was going on, it is somewhat messy and requires additional assumptions such as differentiability. Furthermore, the dual problem was written down, rather than derived. The remainder of this note attempts to rectify these issues.

The Legendre Transform and Duality

Let h \colon \mathbb{R} \rightarrow \mathbb{R} be a convex function; in all that follows, the domain can be \mathbb{R}^n (or some other vector space) but we study the n=1 case for simplicity, and because no intuition is lost. Shading in the graph of h produces its epigraph — \{(x,y) \in \mathbb{R} \times \mathbb{R} \mid y \geq h(x) \} — which is convex. The supporting hyperplanes of the epigraph therefore determine the epigraph, and hence h itself.

We will be lazy and write “supporting hyperplane of h” to mean a supporting hyperplane of the epigraph of h.

A hyperplane is of the form y = \lambda x - c for constants \lambda, c \in \mathbb{R}. (In higher dimensions, we would write y = \langle \lambda, x \rangle - c.) Since for any given “orientation” \lambda there is at most one supporting hyperplane, we can define a function h^\ast(\lambda) by the condition that, for any given orientation \lambda, the plane y = \lambda x, when shifted down by h^\ast(\lambda) to become y = \lambda x - h^\ast(\lambda), is a supporting hyperplane of (the epigraph of) h. (If there is no such hyperplane then we will see presently it is natural to define h^\ast(\lambda) to be infinity.)

It is straightforward to derive a formula for h^\ast(\lambda). Imagine superimposing the line y = \lambda x onto the graph of h. If the line intersects the interior of the epigraph of h then it must be shifted down until it just touches the graph of h without entering the interior of the epigraph. If the line does not touch the graph at all, it must be shifted up until it just touches. The amount of shift can be determined by examining, for each x, the distance between the line and the graph. This distance is \lambda x - h(x), and the following calculation shows we want to maximise this quantity.

For y = \lambda x - c to be a supporting hyperplane of h, it must satisfy two requirements. First, we must have that h(x) \geq \lambda x - c for all x. This means c must satisfy c \geq \lambda x - h(x) for all x, or in other words, c \geq \sup_x \lambda x - h(x). The second requirement is that the line should intersect the graph, or in other words, c should be the smallest possible constant for which the first requirement holds. That is, c = \sup_x \lambda x - h(x) is such that y = \lambda x - c is a supporting hyperplane of h, and indeed, it is the unique supporting hyperplane with orientation \lambda.

We therefore define h^\ast(\lambda) = \sup_x \lambda x - h(x). It is called the Legendre transform of h.

The original claim is that h can be recovered from its supporting hyperplanes. That is, given h^\ast, we should be able to determine h. We can do this point by point. Given an x, the point (x, h(x)) must lie on or above every supporting hyperplane y = \lambda x - h^\ast(\lambda). That is, we have h(x) \geq \sup_\lambda \lambda x - h^\ast(\lambda). Additionally, the graph of h must touch at least one supporting hyperplane, therefore, h(x) = \sup_\lambda \lambda x - h^\ast(\lambda).

It is striking that this formula has exactly the same form as the Legendre transform! Indeed, the Legendre transform is self-dual: the above can be rewritten as h(x) = h^{\ast\ast}(x).
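
The self-duality is easy to test numerically. The sketch below (my own; a brute-force grid evaluation, valid only where the suprema are attained inside the grids) computes h^\ast and then h^{\ast\ast} for h(x) = (x-1)^2 and checks that the biconjugate recovers h.

    import numpy as np

    xs = np.linspace(-20.0, 20.0, 4001)   # primal grid
    lams = np.linspace(-8.0, 8.0, 1601)   # dual grid
    h = (xs - 1.0) ** 2                   # h(x) = (x-1)^2

    def legendre(vals, grid, duals):
        # Brute-force sup over the grid of lam*x - h(x), for each lam.
        return np.array([(lam * grid - vals).max() for lam in duals])

    h_star = legendre(h, xs, lams)            # equals lam + lam^2/4 here
    h_star_star = legendre(h_star, lams, xs)  # biconjugate, back on the x grid

    mask = np.abs(xs) <= 2.0   # region where the suprema are attained interior
    print(np.abs(h_star_star[mask] - h[mask]).max())  # small (grid error only)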

At this juncture, it is easy to state the algebra behind taking the dual of an optimisation problem. In my opinion though, there is more to be understood than just the following algebra. The remainder of this note therefore elaborates on the following.

Given a convex function f(x) that we wish to minimise, we can write down a dual optimisation problem in the following way. First, we embed f(x) into a family of convex cost functions \phi(x,y) meaning we choose a function \phi that is convex and for which \phi(x,0) = f(x). Different embeddings can lead to different duals!

For reasons that will be explained later, the “trick” is to define h(y) = \inf_x \phi(x,y) and then write down an expression for h^{\ast\ast}(0). Since the Legendre transform is self-dual,  h^{\ast\ast}(0) = h(0) = \inf_x \phi(x,0) = \inf_x f(x) gives the answer to the original optimisation problem.

By definition, h^{\ast\ast}(0) = \sup_\lambda - h^{\ast}(\lambda). Also by definition, h^\ast(\lambda) = \sup_y \lambda y - h(y) = \sup_y \left( \lambda y - \inf_x \phi(x,y) \right) = -\inf_{x,y} \left( \phi(x,y) - \lambda y \right). We therefore arrive at the following.

Duality: \sup_\lambda \inf_{x,y} \phi(x,y) - \lambda y = \inf_x \phi(x,0).

While the above algebra is straightforward, a number of questions remain.

  1. What is going on geometrically?
  2. Why should the left-hand side be easier to evaluate than the right-hand side?
  3. How should the family \phi be chosen?

It is remarked that if we are allowed to interchange \sup and \inf then the left-hand side becomes \inf_x \inf_y \left( \phi(x,y) - \sup_\lambda \lambda y\right) and \sup_\lambda \lambda y equals infinity unless y = 0, so that \inf_x \inf_y \left( \phi(x,y) - \sup_\lambda \lambda y\right) = \inf_x \phi(x,0).
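
To make the Duality equation concrete, the following sketch (my own) verifies it numerically for the smooth convex embedding \phi(x,y) = (x-1)^2 + (x+y)^2. By hand, \inf_x \phi(x,0) = 1/2, and the supremum on the left-hand side is attained at \lambda = 1.

    import numpy as np
    from scipy.optimize import minimize

    phi = lambda x, y: (x - 1.0) ** 2 + (x + y) ** 2

    # Right-hand side: inf_x phi(x, 0).
    rhs = minimize(lambda v: phi(v[0], 0.0), x0=[0.0]).fun

    # Left-hand side: sup_lam inf_{x,y} phi(x,y) - lam*y, via a grid over lam.
    def inner(lam):
        return minimize(lambda v: phi(v[0], v[1]) - lam * v[1],
                        x0=[0.0, 0.0]).fun

    lhs = max(inner(lam) for lam in np.linspace(-3.0, 3.0, 601))

    print(lhs, rhs)  # both approximately 0.5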

A Geometrical Explanation

The aim is to understand geometrically what the left-hand side of the equation labelled Duality is computing. Start with the term \inf_{x,y} \phi(x,y) - \lambda y. From our discussion about the Legendre transform, we recognise this term as being the amount by which to shift the plane z = 0 x + \lambda y to make it a supporting hyperplane for the graph z = \phi(x,y). (I like to imagine \phi as the quadratic function \phi(x,y) = x^2 + y^2 and I think of holding a sheet of paper to the surface of z = x^2 + y^2, thereby forming a tangent plane which is the same thing as a supporting hyperplane in this case.) Indeed, \inf_{x,y} \phi(x,y) - \lambda y = -\phi^\ast(0,\lambda) where \phi^\ast(\nu,\lambda) = \sup_{x,y} \nu x + \lambda y - \phi(x,y) is the Legendre transform of \phi.

For a given \lambda, the supporting hyperplane z = \lambda y - \phi^\ast(0,\lambda) touches (and, if \phi is smooth, is “tangent” to) \phi at points (x',y') that achieve the supremum in the definition of the Legendre transform, that is, at points (x',y') satisfying \sup_{x,y} \lambda y - \phi(x,y) = \lambda y' - \phi(x',y'). Equivalently, these are the points (x',y') achieving the infimum in \inf_{x,y} \phi(x,y) - \lambda y.

The first key observation is that because the hyperplane z = \lambda y - \phi^\ast(0,\lambda) does not vary in x, if it touches the graph of \phi at (x',y') then x' must minimise x \rightarrow \phi(x,y'). Intuitively, this is because the derivative of the hyperplane in the x direction being zero implies the derivative of \phi in the x direction (if it exists) must also be zero at the point of contact between the hyperplane and the graph. More formally, it is a straightforward generalisation of the property of the Legendre transform that \inf_x h(x) = -h^\ast(0).  Visually, \inf_x h(x) = -h^\ast(0) states that the minimum of h can be found by starting with a horizontal line and raising or lowering it until it becomes tangent to h. Algebraically, it follows immediately from the definition of the Legendre transform: -h^\ast(0) = -\sup_x -h(x) = \inf_x h(x).

To prove the assertion that x' minimises x \rightarrow \phi(x,y'), fix an orientation \lambda and assume (x',y') satisfies \sup_{x,y} \lambda y - \phi(x,y) = \lambda y' - \phi(x',y'). Define h(x) = \phi(x,y'). Then h^\ast(0) = \sup_x -h(x) = \sup_x -\phi(x,y'). Now, \lambda y' - \phi(x',y') = \sup_{x,y} \lambda y - \phi(x,y) = \sup_x \left( \sup_y \lambda y - \phi(x,y) \right) = \sup_x \lambda y' - \phi(x,y'), showing that -\phi(x',y') = \sup_x -\phi(x,y'), implying h^\ast(0) = -\phi(x',y'). The previous paragraph tells us that \inf_x h(x) = -h^\ast(0), hence \inf_x \phi(x,y') = \phi(x',y'), as claimed.

We are halfway there. Given an orientation \lambda, we can implicitly find a point (x',y') by solving the optimisation problem \inf_{x,y} \phi(x,y) - \lambda y. We know for any such point (x',y'), x' minimises x \mapsto \phi(x,y'). Therefore, if we can find a \lambda such that y' = 0 then we are done.

To help focus on the implicit function \lambda \rightarrow y', rewrite \sup_{x,y} \lambda y - \phi(x,y) as \sup_y \left( \lambda y - \inf_x \phi(x,y) \right). By defining h(y) = \inf_x \phi(x,y), we get \sup_{x,y} \lambda y - \phi(x,y) = h^\ast(\lambda). We want to find \lambda such that the point (x',y') achieving the supremum in \sup_{x,y} \lambda y - \phi(x,y) satisfies y' = 0. That is, we want to find \lambda such that \sup_{x,y} \lambda y - \phi(x,y) = \sup_x - \phi(x,0) = -\inf_x \phi(x,0). Rewriting in terms of h gives the condition h^\ast(\lambda) = -h(0). We can solve this either geometrically or algebraically.

Geometrically, recall that -h^\ast(\lambda) is where the supporting hyperplane with orientation \lambda intersects the vertical axis. To visualise this, consider a tangent line to the convex function x \mapsto (x-1)^2 and, as the tangent point varies, observe where the tangent line intersects the vertical axis. It is visually clear that the intercept is largest when the tangent point is at x = 0, where the intercept equals h(0). We therefore expect that the solution to h^\ast(\lambda) = -h(0) is the value of \lambda that maximises -h^\ast(\lambda).
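
This can be confirmed in a few lines (my own sketch): the tangent to h at the point t has vertical intercept h(t) - t h'(t), which is exactly -h^\ast(\lambda) for the orientation \lambda = h'(t).

    import numpy as np

    h = lambda t: (t - 1.0) ** 2
    dh = lambda t: 2.0 * (t - 1.0)

    ts = np.linspace(-3.0, 3.0, 6001)   # candidate tangent points
    intercepts = h(ts) - ts * dh(ts)    # equals 1 - t^2 for this h

    i = np.argmax(intercepts)
    print(ts[i], intercepts[i], h(0.0))  # tangent point ~0, intercept ~1 = h(0)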

Algebraically, start by recalling the property of the Legendre transform shown earlier: -g^\ast(0) = \inf_x g(x). What we want is the “dual” of this property, obtained via the substitution g = h^\ast, namely -h^{\ast\ast}(0) = \inf_\lambda h^\ast(\lambda). Since the Legendre transform is self-dual, h(0) = h^{\ast\ast}(0), and in particular, we have deduced that \inf_\lambda h^\ast(\lambda) = -h(0), agreeing with the geometric intuition in the previous paragraph.

What we have shown is that if \lambda maximises -h^{\ast}(\lambda) = \inf_{x,y} \phi(x,y) - \lambda y then the infimum in \inf_{x,y} \phi(x,y) - \lambda y is achieved at a point (x',y') with y' = 0. Indeed, assume \lambda maximises -h^{\ast}(\lambda). Retracing our steps shows that -h^{\ast}(\lambda) = h(0), and since h(0) = \inf_x \phi(x,0) and -h^{\ast}(\lambda) = \inf_{x,y} \phi(x,y) - \lambda y, we have that \inf_{x,y} \phi(x,y) - \lambda y = \inf_x \phi(x,0). If x' minimises \phi(x,0), so that \inf_x \phi(x,0) = \phi(x',0), then \inf_{x,y} \phi(x,y) - \lambda y = \inf_x \phi(x,0) = \phi(x',0) = \phi(x',0) - \lambda\,0, showing (x',0) achieves the infimum.

Summary: To find \inf_x \phi(x,0), we first find the intercept point \inf_{x,y} \phi(x,y) - \lambda y of a hyperplane with orientation (0,\lambda). We then vary \lambda until the intercept point is as large as possible: \sup_\lambda \inf_{x,y} \phi(x,y) - \lambda y. We know from earlier that the corresponding hyperplane will tangentially intersect the graph of \phi at a point (x',y') where y' = 0 (because \lambda has been maximised) and where x' minimises x \mapsto \phi(x,y') (because the hyperplane does not vary in x). Since the infimum is achieved at y = 0, the value of \sup_\lambda \inf_{x,y} \phi(x,y) - \lambda y is readily determined by substituting in y = 0. Specifically, \sup_\lambda \inf_{x,y} \phi(x,y) - \lambda y = \sup_\lambda \inf_x \phi(x,0) = \inf_x \phi(x,0). This concludes our geometric interpretation of the Duality equation.

Remark: While introducing points (x',y') at which the infimum is achieved in the Duality equation is essential for giving geometrical intuition, it is a hindrance when it comes to algebraic derivations. Indeed, the infimum need not be achieved at any point, or it may be achieved at multiple points. This is the natural tension in mathematics between intuitive explanations and rigorous (but not necessarily intuitive) algebra.

An Example of When the Dual Problem is Simpler

The Duality equation is only useful if solving \inf_{x,y} \phi(x,y) - \lambda y for various \lambda is easier than solving \inf_x \phi(x,0). Naively, one might assume this is never the case! Indeed, \inf_{x,y} \phi(x,y) - \lambda y = \inf_y \left( \inf_x \phi(x,y) - \lambda y \right) appears to contain the original optimisation problem! But this reasoning is flawed: the optimisation problem could be written equally well as \inf_x \left( \inf_y \phi(x,y) - \lambda y \right), and an example is now given of where \inf_x \left( \inf_y \phi(x,y) - \lambda y \right) is easier to evaluate than \inf_x \phi(x,0).

Consider the constrained optimisation problem introduced near the start of this note: minimise f(x) subject to g(x) \leq 0. The “discontinuity” at the boundary g(x) = 0 causes difficulties: unconstrained optimisation (of continuous convex functions!) is generally easier than constrained optimisation. For reasons that will be discussed in the next section, we introduce a family of optimisation problems, indexed by y, where the objective function f(x) stays the same but the constraints vary with y, namely, g(x) \leq -y. (The choice of negative sign is for later convenience.)

In convex optimisation, a constrained optimisation problem can be written as an unconstrained (but discontinuous and therefore still “difficult”) optimisation problem simply by defining the value of the function to be infinity at points violating the constraint. One way to write this is as f(x) + \sup_{\alpha \geq 0} \alpha g(x) because the supremum will equal zero if g(x) \leq 0, and will equal \infty otherwise. We therefore define our embedding to be \phi(x,y) = f(x) + \sup_{\alpha \geq 0} \alpha (g(x)+y), so that \inf_x \phi(x,y) is the minimum of f(x) subject to g(x) \leq -y.

Consider evaluating \inf_y f(x) + \sup_{\alpha \geq 0} \alpha (g(x)+y) - \lambda y. The infimum over all y is equal to the minimum of the infimum over y \leq -g(x) and the infimum over y > -g(x). If y \leq -g(x) then g(x)+y \leq 0 and \sup_{\alpha \geq 0} \alpha (g(x)+y) = 0. If y > -g(x) then g(x)+y > 0 and \sup_{\alpha \geq 0} \alpha (g(x)+y) = \infty. Therefore, \inf_y f(x) + \sup_{\alpha \geq 0} \alpha (g(x)+y) - \lambda y = f(x) + \inf_{y,\ y \leq -g(x)} - \lambda y. The last infimum equals \lambda g(x) if \lambda \geq 0 and equals -\infty otherwise. Both \inf_x f(x) + \lambda g(x) and \inf_x f(x) - \infty are easier to evaluate than the original \inf_x f(x) + \sup_{\alpha \geq 0} \alpha g(x).

Finally, it is mentioned that for this example, the left-hand side of the Duality equation becomes \sup_{\lambda \geq 0} \inf_x f(x) + \lambda g(x). This is precisely the dual problem introduced at the start of this note.

How to Choose an Embedding?

First, a remark. One might ask, why not look for a function \phi(x,y) such that \inf_{x,y} \phi(x,y) = \inf_x \phi(x,0)? In principle, it seems necessary to know \inf_x \phi(x,0) before such an extension could be made, thereby rendering the extension unhelpful for computing \inf_x \phi(x,0). The benefit of the Duality equation is that \phi just needs to be convex.

Since choosing an extension \phi so that \inf_{x,y} \phi(x,y) - \lambda y is easier to evaluate than \inf_x \phi(x,0) comes down to experience, what we do here is analyse in greater detail the choice in the previous section.

To aid visualisation, assume we wish to minimise f(x) = (x-1)^2 subject to g(x) = x \leq 0. Therefore, \phi(x,0) equals (x-1)^2 for x \leq 0 and equals \infty for x > 0.

Simply replicating \phi(x,0) for other values of y, meaning choosing \phi(x,y) = \phi(x,0), does not simplify the optimisation problem. Since the main difficulty is that \phi contains jumps to infinity, the first thing we might aim for is ensuring \inf_y \phi(x,y) < \infty for all x.

We cannot arbitrarily choose values for \phi(x,y) because \phi must be convex: we must extend \phi(x,0) while maintaining convexity and so \phi(x,y) must depend on \phi(x,0).

Taking \phi(x,y) = f(x) for y \neq 0 ensures \inf_y \phi(x,y) < \infty for all x, but \phi will not be convex because, for a fixed x > 0, y \mapsto \phi(x,y) would equal f(x) for y \neq 0 but equal \infty for y = 0. At least on one side, say, y < 0, we must have \phi(x,y) = \infty if \phi(x,0) = \infty.

With these thoughts in mind, we are led to consider choosing \phi(x,y) to be f(x) for x \leq y and \infty for x > y. This gives us a convex function satisfying \inf_y \phi(x,y) < \infty for all x. Up to a change of sign, it corresponds with the choice in the previous section.

The reader is encouraged to visualise the graph of \phi. Note that the graph of \phi(x,y) - \lambda y is obtained simply by tilting (shearing) the graph of \phi about the x-axis, that is, tipping \phi forwards or backwards. Therefore, \inf_{x,y} \phi(x,y) is the global minimum of \phi, while \inf_{x,y} \phi(x,y) - \lambda y is the global minimum of a tilted version of \phi.

There is an interesting interpretation of duality in this particular case. Consider \inf_{x,y} \phi(x,y) - \lambda y. When \lambda = 0 this corresponds to finding the unconstrained minimum of f(x). If \lambda is now slowly decreased, so that \phi(x,y) - \lambda y tilts forwards, \inf_{x,y} \phi(x,y) - \lambda y is now solving the constrained problem of minimising f(x) subject to a constraint x \leq c where c is smaller than the location of the global minimum of f. (As \lambda decreases, so does c.) In this way, by continuously varying \lambda, we can increase the severity of the constraint and thereby track down the constrained minimum that we seek.
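
In the f(x) + \lambda g(x) parametrisation from the start of the note the tilt acts in the opposite direction: increasing \lambda from zero tightens the constraint. The sweep below (my own sketch, with f(x) = (x-1)^2 and g(x) = x) shows the unconstrained minimiser of the tilted problem tracking from the global minimum of f down to the constrained optimum at x = 0.

    from scipy.optimize import minimize_scalar

    f = lambda x: (x - 1.0) ** 2

    for lam in [0.0, 0.5, 1.0, 1.5, 2.0]:
        x_lam = minimize_scalar(lambda x: f(x) + lam * x,
                                bounds=(-10.0, 10.0), method="bounded").x
        print(lam, round(x_lam, 3))  # minimiser moves from 1.0 down to 0.0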

Simple Explanation for Interpretation of Lagrange Multiplier as Shadow Cost

December 10, 2018

Minimising f(x) subject to g(x)=0 can be solved using Lagrange multipliers, where f(x) + \lambda g(x) is referred to as the Lagrangian. The optimal solution (x^\star, \lambda^\star) satisfies both \nabla f(x^\star) + \lambda^\star \nabla g(x^\star)=0 and g(x^\star) = 0. Although \lambda was added to the problem, it turns out that \lambda^\star has physical significance: it is the rate of change of the optimal value f(x^\star) with respect to perturbations c around zero in the constraint g(x)=c. A simple explanation (rather than an algebraic proof) is given below.

First the reader is reminded why the condition \nabla f(x^\star) + \lambda^\star \nabla g(x^\star)=0 is introduced. Let M = \{ x \mid g(x) = 0 \} denote the constraint set. Then f restricted to M has a local minimum at a point x^\star \in M if small perturbations that remain on M do not decrease f. In other words, the directional derivative of f should be zero in directions tangent to M at x^\star. Since \nabla g(x^\star) is normal to the surface M at x^\star, the directions h tangent to M at x^\star are those satisfying \langle \nabla g(x^\star), h \rangle = 0. The directional derivative at x^\star in the direction h is \langle \nabla f(x^\star), h \rangle. The condition is thus that \langle \nabla f(x^\star), h \rangle = 0 for all h satisfying \langle \nabla g(x^\star), h \rangle = 0. This condition is equivalent to requiring that \nabla f(x^\star) is a scalar multiple of \nabla g(x^\star), that is, \nabla f(x^\star) = - \lambda^\star \nabla g(x^\star) for some scalar \lambda^\star.

To understand the physical significance of \lambda^\star, it helps to think in terms of contour lines (or level sets) for both f and g. For concreteness, visualise the constraint set \{x \mid g(x)=0\} as a circle. Let f^\star = f(x^\star) denote the minimum achievable value of f(x) subject to g(x)=0. Then the level sets \{x \mid f(x)=f^\star + \epsilon \} may be visualised as ellipses whose sizes increase with \epsilon. Furthermore, if \epsilon = 0 then the level set is tangent to the constraint set (the circle), while if \epsilon < 0 then the level set \{x \mid f(x)=f^\star + \epsilon\} does not intersect the constraint set (the circle).

The key point is to consider what happens if we change the constraint set (the circle) from \{x \mid g(x)=0\} to \{x \mid g(x)=c\} for some c close to zero. Changing c will perturb the circle. If the circle gets slightly bigger then we can find a new x^\star, close to the old one, lying on the perturbed circle for which f(x^\star) is smaller than before. Since the gradient represents the direction of greatest increase, it seems sensible to assume the new minimum is at x^\star - \epsilon \nabla f(x^\star) for some \epsilon > 0. We must choose \epsilon so x^\star - \epsilon \nabla f(x^\star) satisfies the new constraint: g(x^\star - \epsilon \nabla f(x^\star)) = c. To first order, g(x^\star - \epsilon \nabla f(x^\star)) = g(x^\star) - \epsilon \langle \nabla g(x^\star), \nabla f(x^\star) \rangle = - \epsilon \langle \nabla g(x^\star), - \lambda^\star \nabla g(x^\star)  \rangle = \epsilon \lambda^\star \| \nabla g(x^\star) \|^2. That is, \epsilon is given by solving \epsilon \lambda^\star \| \nabla g(x^\star) \|^2 = c.

The new value of the objective function, to first order, is given by f(x^\star - \epsilon \nabla f(x^\star)) = f^\star - \epsilon \langle \nabla f(x^\star) , \nabla f(x^\star) \rangle = f^\star - \epsilon \| \nabla f(x^\star) \|^2 = f^\star - \epsilon (\lambda^\star)^2 \| \nabla g(x^\star) \|^2. Since \epsilon = c (\lambda^\star)^{-1} \| \nabla g(x^\star) \|^{-2}, we get that the improvement in f is \epsilon (\lambda^\star)^2 \| \nabla g(x^\star) \|^2 = c \lambda^\star. The rate-of-change of the improvement in f with respect to a change c in the constraint is precisely \lambda^\star, as we set out to show.
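
A finite-difference check of this claim is easy to run. The toy problem below is my own: minimise f(x,y) = (x-2)^2 + (y-1)^2 subject to g(x,y) = x + y = c. Solving the Lagrange conditions by hand (in the convention \nabla f = -\lambda^\star \nabla g used above) gives \lambda^\star = 3 at c = 0, so the optimal value should change at rate -\lambda^\star = -3 as c increases, i.e., f improves at rate \lambda^\star.

    from scipy.optimize import minimize

    def f_star(c):
        # Optimal value of the equality-constrained problem for a given c.
        cons = {"type": "eq", "fun": lambda v: v[0] + v[1] - c}
        return minimize(lambda v: (v[0] - 2.0) ** 2 + (v[1] - 1.0) ** 2,
                        x0=[0.0, 0.0], constraints=cons).fun

    eps = 1e-4
    print((f_star(eps) - f_star(-eps)) / (2 * eps))  # approximately -3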

Note that the above is not a proof but rather a rule of thumb that aids in our intuition. An actual proof is straightforward but not so intuitive.

Tennis: Attacking versus Defending

October 13, 2018

What is an attacking shot? Must an attacking shot be hit hard? What about a defensive shot? This essay will consider both the theory and the practice of attacking and defensive shots. And it will show that if you often lose 6-2 to an opponent then only a small amount of improvement might be enough to tip the scales the other way.

Tennis is a Game of Probabilities

Humans are not perfect. Anyone who has tried aiming for a small target by hitting balls fed from a ball machine will know that hitting identical balls will produce different outcomes. What is often not appreciated though is how sensitive the outcome of a tennis match is to small improvements.

Imagine a match played between two robots, where on average, out of every 20 points played, one robot wins 11 points and the other 9 — just a single point more than parity. This is a very small difference in abilities. Mathematically, this translates to the superior robot having a probability of 55% of winning a point against the inferior robot. (I have used robots as a way of ignoring psychological factors. While the effects of psychological factors are relatively small, at a professional level where differences in abilities are so small to begin with, psychological factors can be crucial to determining the outcome of a match.)

Using the “Probability Calculator” found near the bottom of this page, we find that such a subtle difference in abilities translates into the superior robot winning a 5-set match 95.35% of the time. (The probability of winning a 3-set match is 91%.)
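
Readers without access to the calculator can reproduce numbers of this order with a quick Monte Carlo simulation. The sketch below is my own; it assumes standard scoring (games to 4 points and sets to 6 games, each won by two clear, with a tiebreak to 7 points at 6-6), a fixed 55% point-win probability regardless of server, and no advantage-set subtleties.

    import random

    P = 0.55  # probability the superior robot wins any given point

    def race(p, target, win_by=2):
        # First to `target` points, leading by at least `win_by`.
        a = b = 0
        while not ((a >= target or b >= target) and abs(a - b) >= win_by):
            if random.random() < p: a += 1
            else: b += 1
        return a > b

    def set_(p):
        a = b = 0
        while True:
            if race(p, 4): a += 1                    # a standard game
            else: b += 1
            if a >= 6 and a - b >= 2: return True
            if b >= 6 and b - a >= 2: return False
            if a == 6 and b == 6: return race(p, 7)  # tiebreak at 6-6

    def match(p, best_of=5):
        need = best_of // 2 + 1
        a = b = 0
        while a < need and b < need:
            if set_(p): a += 1
            else: b += 1
        return a == need

    n = 100_000
    print(sum(match(P) for _ in range(n)) / n)  # roughly 0.95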

The graphs found on this page show that the set score is “most likely” to be 6-2 or 6-3 to the superior robot, so the next time you lose a match 6-2 or 6-3, believe that with only a small improvement you might be able to tip the scales the other way!

The In-Rally Effect of a Shot

While hitting a winner is a good (and satisfying) thing, no professional tries to hit a winner off every ball. Tennis is a game of probabilities, where subtle differences pay large dividends.

The aim of every ball is to increase the probability of eventually winning the point.

So if your opponent has hit a very good ball and you feel you are in trouble, you might think to yourself that you are down 30-70, giving yourself only 30% chance of winning the rally. Your aim with your return shot is to increase your odds. Even if you cannot immediately go back to 50-50, getting to 40-60 will then put 50-50 within reach on the subsequent shot.

So why not go for winners off every ball? If you can maintain a 55-45 advantage off each ball you hit, you will have a 62.31% chance of winning the game, an 81.5% chance of winning the set, and a 91% chance of winning a 3-set match. On the other hand, against a decent player, hitting a winner off a deep ball is difficult and carries with it a significant chance of failure, i.e., hitting the ball out. Studies have shown that even the very best players cannot control whether a ball narrowly falls in or narrowly falls out, meaning that winners hit very close to the line are rarely intentional but rather come from a shot aimed safely within the court that has drifted closer to the line than intended.

Attacking Options

If by an attacking ball we mean playing a shot that gives us the upper hand in the rally, then our shot needs to make it difficult for our opponent to hit a challenging ball for us. How can this be done?

First, our ability to hit an accurate ball relates to how well we are balanced, how much time we have to react, how fast the incoming ball is, the spin on the ball and the height of the ball at contact. So we can make it difficult for opponents in many different ways. The opponent’s balance can be upset by

  • wrong-footing her;
  • making her cover a relatively long distance in a short amount of time to reach the ball;
  • jamming her by hitting straight at her;
  • forcing her to move backwards (so she cannot transfer her weight forwards into the ball).

The opponent’s neurones can be given a harder task by

  • giving her less time to calculate how to swing the racquet;
  • varying the incoming ball on each shot (different spins, heights, depths, speeds);
  • making it harder for her to predict what we will do;
  • getting her thinking about other things (such as the score).

Due to our physical structure, making contact with the ball outside of our comfort zone (roughly, between knee and shoulder height) decreases the margin for error because it is harder to “flatten out” the trajectory of the racquet about the contact point.

These observations lead to a variety of strategies for gaining the upper hand, including the following basic ones.

  • Take the ball early, on the rise, to take time away from the opponent.
  • Hit sharp angles, going near the lines.
  • Hit with heavy top-spin to get the ball to bounce above the opponent’s shoulder.
  • Hit hard and flat.
  • Hit very deep balls close to the baseline.

Each of these strategies carries a risk of hitting out, therefore, it is generally advised not to combine strategies: if you take the ball on the rise, do not also aim close to the lines or hit excessively hard, for example.

If you find yourself losing to someone without knowing why, it may well be because subtle differences in the balls you are made to hit increase your chances of making a mistake, and only very small changes in probability (such as dropping from winning 11 balls out of 20 to only 10 balls out of 20) can hugely affect the outcome of the match (such as dropping from a 91% chance of winning a 3-set match to only a 50% chance). To emphasise, if your opponent takes the ball earlier than normal, you are unlikely to notice the subtle time difference, but over the course of the match, you will feel you are playing worse than normal.

If you are hitting hard but all your balls are coming back, remember that it is relatively easy to return a hard flat ball: the opponent can shorten her backswing yet still hit hard by using your pace, making a safe yet effective return. Hit hard and flat to the open court off a high short ball, but otherwise, try hitting with more top-spin.

Defensive Options

Just as an attacking ball need not be a fast ball, a defensive ball need not be a slow ball. Rather, a defensive ball is one where our aim is to return to an approximately 50-50 chance of winning the point. If we are off balance, or are facing a challenging ball, we cannot go for too much or we risk hitting out and immediately losing the point.

Reducing the risk of hitting out can be achieved in various ways.

  • Do not swing as fast or with excessive spin.
  • Take the ball later after the bounce, giving it more time to slow down.
  • Block the ball back (perhaps taking it on the rise) to a safe region well away from the lines.
  • Focus on being balanced, even if it means hitting with less power.

Importantly, everyone is different, and you should learn what your preferred defensive shots are. For example, while flat balls inherently have a lower margin for error, nonetheless some people may find they are more accurate hitting flat than hitting with top-spin simply due to their biomechanical structure and past practice.

Placement of a defensive shot is crucial. Because speed, spin and/or time have been sacrificed for greater safety, it is quite likely that if the opponent has time to get into a comfortable position they can punish your ball. This is yet another example of where small differences can have large consequences: if the opponent can step into the ball you might be in trouble, yet hitting just a metre deeper, or a metre further to the side, might prevent this. Moreover, never forget that placement is relative to where the opponent currently is: hitting deep to the backhand is normally a good shot unless the opponent is already there!

Changes in depth can be very effective but they must be relative to where the opponent is. If an opponent is attacking, she might have moved to being on the baseline, looking to take the next ball early. Hitting a deep top-spin shot, not necessarily very hard, is very effective in this scenario because the opponent is forced to move backwards and thus cannot generate as much power. (A skilled opponent can still hit hard, but even a 10 km/h reduction in ball speed makes a large difference.)

Technical Considerations

Watching the slow-motion clips on YouTube of professional players hitting groundstrokes shows that the same player has many different ways of hitting her forehand and backhand. At the point of contact, you can look to see where her feet are, which way her hips are pointing, which way her shoulders are pointing, the angle of her wrist, and the contact point of the ball relative to the body. And as she hits, you can look to see which parts of the body are moving and which have momentarily become static.

While an upcoming article may consider such technical aspects in greater detail, here it is simply noted that while you should be able to hit every kind of shot from any reasonable contact point, each has advantages and disadvantages. And since tennis is a game of probabilities, it comes as no surprise that the top players instinctively know not just what type of shot to hit but also how technically to hit it in the best possible way for them at that particular moment: if they have time, and want to hit a heavy top-spin, they will probably choose a contact point further away from their body and step into the ball, while if they must return a very fast ball, they may instead use a more open stance and hit the ball more in line with their eyes, well out in front.

Next time you are on the court, experiment with how changing the contact point (even over a relatively small range of 10–20 cm) can change how hard you can hit the ball, how accurately you can hit the ball, and how much top-spin you can generate. And do not forget, this may change depending on the type of incoming ball. For example, generating pace off a slow ball is better done using a different technique than returning a fast incoming ball. Failing to recognise this may mean you “feel” you are not hitting the ball well when in fact you are just not using the best technique for the particular type of shot you want to hit.

The other side of the coin is recognising that sometimes it is necessary to play a ball using a non-optimal technique due to a funny bounce, or lack of time (or inherent laziness). In this situation, it is important to adjust the type of shot you hit. If you are forced to return a hard-hit ball at full stretch, you will lose accuracy, so do not go anywhere near the lines. If you are jammed, you are not going to be able to hit as heavy a ball as you may wish, so you may change to hit flatter, opting to take time away from the opponent. Of course, changing from heavy to flat generally means changing where you want your shot to land: a shorter top-spin that bounces above shoulder height is generally good whereas a shorter flat shot is generally bad, for example.

Does the Voltage Across an Inductor Immediately Reverse if the Inductor is Suddenly Disconnected?

August 3, 2018

Consider current flowing from a battery through an inductor then a resistor before returning to the battery again. What happens if the battery is suddenly removed from the circuit? Online browsing suggests that the voltage across the inductor reverses “to maintain current flow” but the explanations for this are either by incomplete analogy or by emphatic assertion. Moreover, one could argue for the opposite conclusion: if an inductor maintains current flow, then since the direction of current determines the direction of the voltage drop, the direction of the voltage drop should remain the same, not change!

To understand precisely what happens, it is important to think in terms of actual electrons. When the battery is connected, there is a stream of electrons being pushed out of the negative terminal of the battery, being pushed through the resistor, being pushed through the inductor then being pulled back into the battery through its positive terminal. The question is what happens if the inductor is ripped from the circuit, thereby disconnecting its ends from the circuit. (The explanation of what happens does not change in any substantial way whether it is the battery or the inductor that is removed.)

A common analogy for an inductor is a heavy water wheel. The inductor stores energy in a magnetic field while a water wheel stores energy as rotational kinetic energy. But if we switch off the water supply to a water wheel, and the water wheel keeps turning, what happens? Nothing much! And if we disconnect an inductor, so there is no “circuit” for current to flow in, what can happen?

One trick is to think not of a water wheel but of a (heavy) fan inside a section of pipe. Ripping the inductor out of the circuit corresponds to cutting the piping on either side of the fan and immediately capping the ends of the pipes. This capping mimics the fact that electrons cannot flow past the ends of wires (sparks are not taken into consideration). Crucially then, when we disconnect the fan, there is still piping on either side of the fan, and still water left in these pipes.

Consider the water pressure in the capped pipe segments on both sides of the fan. Assume prior to cutting out the fan, water had been flowing from right to left through the fan. (Indeed, when the pump is first switched on, it will cause a pressure difference to build up across the fan. This pressure difference is what causes the fan to start to spin. As the fan spins faster, this pressure difference gets less and (ideally) goes to zero in the limit.) Initially then, there is a higher pressure on the right side of the fan. The fan keeps turning, powered partly by the pressure difference but mainly by its stored rotational kinetic energy. (Think of its blades as being very heavy, therefore not wanting to slow down.) So water gets sucked from the pipe on the right and pushed into the pipe on the left. These pipes are capped, therefore, the pressure on the right decreases while the pressure on the left increases. “Voltage drop” is a difference in pressure, therefore, the “voltage drop” across the “inductor” is changing.

There is no discontinuous change in pressure! The claim that the voltage across an inductor will immediately reverse direction is false!

That said, the pressure difference is changing, and there will come a time when the left pipe will have a higher pressure than the right pipe. Now there are two competing forces: the stored kinetic energy in the fan wants to keep pumping water from right to left, while the larger pressure on the left wants to force water from left to right. The fan will start to slow down and eventually stop, albeit only for an instant. At the very moment the fan stops spinning, there is a much larger pressure on the left than on the right. Therefore, this pressure difference will force the fan to start spinning in the opposite direction!

Under ideal conditions then, the voltage across the inductor will oscillate!

Why should we believe this analogy though? Returning to the electrons, the story goes as follows. Assume an inductor, in a circuit, has a current flowing through it, from left to right. Therefore, electrons are flowing through the inductor from right to left (because Benjamin Franklin had a 50% chance of getting the convention of current flow correct). If the inductor is ripped out of the circuit, the magnetic field that had been built up will still “push” electrons through the inductor in an attempt to maintain the same current flow. The density of electrons on the right side of the inductor will therefore decrease, while the density on the left side will therefore increase. Electrons repel each other, so it becomes harder and harder for the inductor to keep pushing electrons from right to left because every electron wants its own space and it is getting more and more crowded on the left side of the inductor. Eventually, the magnetic field has used up all its energy trying to cram as many electrons as possible into the left side of the inductor. The electrons on the left want to get away from each other and are therefore pushing each other over to the right side of the inductor. This “force” induces a voltage drop across the inductor: as electrons want to flow from left to right, we say the left side of the inductor is more negative than the right side. The voltage drop has therefore reversed, but it did not occur immediately, nor will it last forever, because the system will oscillate: as the electrons on the left move to the right, they cause a magnetic field to build up in the inductor, and the process repeats ad infinitum.

Adding to the explanation, we can recognise a build-up of charge as a capacitor. There is always parasitic capacitance because charge can always accumulate in a section of wire. Therefore, there is no such thing as a perfect inductor (for if there were, we could not disconnect it!). Rather, an actual inductor can be modelled by an ideal inductor in parallel with an ideal capacitor. (Technically, there should also be a resistor in series to model the inevitable loss in ordinary inductors.) An inductor and capacitor in parallel form what is known as a resonant “LC” circuit, which, as the name suggests, resonates!
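
A minimal simulation makes the oscillation visible. The sketch below is my own: semi-implicit Euler on the parallel LC model just described, with illustrative component values. Note the kilovolt-scale swing that an initial 1 A coil current produces across a tiny parasitic capacitance.

    L, C = 1e-3, 1e-9      # 1 mH inductor, 1 nF parasitic capacitance
    i, v = 1.0, 0.0        # initial coil current 1 A, no voltage across it
    dt = 1e-9              # step far below the period 2*pi*sqrt(L*C) ~ 6.3 us

    for step in range(1, 10_001):
        i -= (v / L) * dt  # inductor: L di/dt = -v
        v += (i / C) * dt  # the coil current charges the parasitic capacitor
        if step % 1_000 == 0:
            print(f"t = {step * dt * 1e6:4.1f} us   v = {v:8.1f} V")
    # v traces a sinusoid of amplitude i0*sqrt(L/C) = 1000 V: the voltage
    # reverses smoothly and keeps oscillating; there is no discontinuous jump.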

 

Intuition behind Caratheodory’s Criterion: Think “Sharp Knife” and “Shrink Wrap”!

August 24, 2017

Despite many online attempts at providing intuition behind Caratheodory’s criterion, I have yet to find an answer to why testing all sets should work.

Therefore, I have taken the liberty of proffering my own intuitive explanation. For the impatient, here is the gist. Justification and background material are given later.

[Figure: The set U is non-measurable; the set A is measurable.]

We will think of our sets as rocks. As explained later, a rock is not “measurable” if its boundary is too jagged. In the above image, the rock U is not measurable because it has a very jagged boundary. On the other hand, the rock A has a very nice boundary and hence we would like to consider A to be “measurable”.

For A to be measurable according to Caratheodory’s criterion, the requirement is for \mu(U) = \mu(U \cap A^\mathrm{c}) + \mu(U \cap A) to hold for all sets U. While other references generally make it clear that this is a sensible requirement if U is measurable (and we will go over this below), what makes Caratheodory’s criterion unintuitive is the requirement that \mu(U) = \mu(U \cap A^\mathrm{c}) + \mu(U \cap A) must hold for all sets U, including non-measurable sets. However, we argue that a simple change in perspective makes it intuitively clear why we can demand \mu(U) = \mu(U \cap A^\mathrm{c}) + \mu(U \cap A) holds for all U.

[Figure: An outer measure uses shrink wrap to approximate the boundary. If A cuts the rock U cleanly then the same “errors” are made when approximating the boundaries of the three objects in the picture, hence Caratheodory’s criterion holds.]

The two ingredients of our explanation are that the boundary of A serves as a knife, and that an outer measure calculates volume by using shrink wrap. (For example, Lebesgue outer measure approximates a set by a finite union of cubes that fully cover the set, and uses the volume of the cubes as an estimate of the volume of the set; precisely, the infimum over all such approximations is used.)

In the above image, the red curve represents the shrink wrap. The outer measure \mu(U) is given by the area enclosed by the red curve. (For rocks, which are three dimensional, we would use the volume encased by the shrink wrap.)

When we take U \cap A^\mathrm{c} and U \cap A, we think of using part of the boundary of A (the part that intersects U) to cut U. If A is a rectangle (or cube) then we produce a very clean straight cut of the set (or rock) U.

Consider shrink wrapping  U \cap A^\mathrm{c} and U \cap A individually, as shown by the red curves in the above image. Two observations are critical.

  • If the cut made by A is clean then the shrink wrap fits the cut perfectly; no error is made for this part of the boundaries of U \cap A^\mathrm{c} and U \cap A.
  • For all other parts of the boundaries of U \cap A^\mathrm{c} and U \cap A (i.e., the jagged parts), the errors made by the shrink wrap for U \cap A^\mathrm{c} and U \cap A precisely equal the errors made by the shrink wrap for U (because it is the same jagged boundary being fitted).

It follows that \mu(U) = \mu(U \cap A^\mathrm{c}) + \mu(U \cap A) holds whenever A has a nice boundary.

Regardless of whether U is measurable or not, Caratheodory’s criterion is blind to the boundary of U and instead is only testing how “smooth” the boundary of A is. (Different sets U will test different parts of the boundary of A.)

From the Beginning

First, it should be appreciated that Caratheodory’s criterion is not magical: it cannot always produce the exact sigma-algebra that you are thinking of, so it suffices to understand why, in certain situations, it is sensible. In particular, this means we can assume that the outer measure we are given works by considering approximations of a given set A by larger but nicer sets, such as finite collections of open cubes, whose measure we know how to compute. Taking the infimum of the measures of these approximating sets gives the outer measure of A.

In physical terms, think of measuring the volume of rocks. An outer measure works by wrapping shrink wrap as tightly as possible around the rock, then asking for the volume encased by the shrink wrap. The key point is that if the boundary of the rock is piecewise smooth then the shrink wrap computes its exact volume, whereas if the boundary is sufficiently jagged then the shrink wrap cannot follow the contours perfectly and the shrink wrap (i.e., the outer measure) will exaggerate the true volume of the rock.

It is easy to spot rocks with bad boundaries if the volume of the whole space is finite: look at both the rock and its complement. Place shrink wrap over both of them. If the shrink wrap can follow the contours then it will measure the volume of the two interlocking parts perfectly and the sum of the volumes will equal the volume of the whole space. If not, we know the rock is too ill-shaped to be considered measurable.

If the total volume is infinite then we must zoom in on smaller portions of the boundary. For example, we might take an open ball U and look at the boundary of A \cap U using our shrink wrap trick: shrink wrap both A \cap U and its complement in U, namely, A^\mathrm{c} \cap U, and see if the volumes of A \cap U and A^\mathrm{c} \cap U add up to the volume of U. If not, we know A is too ill-shaped to be considered measurable. This seemingly relies on our choice of U being a sufficiently nice/benign shape that it does not interfere with our observation of the boundary of A. For Lebesgue outer measure, the choice of an open ball for U seems entirely appropriate, but perhaps the right choice will depend on the particular outer measure under consideration. Caratheodory’s criterion removes the need for such a choice by requiring \mu(A \cap U) + \mu(A^\mathrm{c} \cap U) = \mu(U) to hold for all U and not just nice U. The images and explanation given at the start of this note explain why this works: the jaggedness of U appears on both sides of the equation and cancels out, therefore Caratheodory’s criterion is really only testing the boundary of A.

Remarks

  1. Of course, intuition must be backed up by rigorous mathematics, and it turns out that Caratheodory’s criterion is useful because it is easy to work with and produces the desired sigma-algebra in many situations of interest.
  2. Our intuitive argument has focused on the boundary of sets. If we consider a Borel measure (one generated by the open sets of a topological space) then we know that open sets are measurable and hence it is indeed the behaviour at the boundary that matters. That said, intuition need not be perfect to be useful.

An Alternative (and Very Simple) Derivation of the Boltzmann Distribution

August 22, 2017

This note gives a very simple derivation of the Boltzmann distribution that avoids any mention of temperature or entropy. Indeed, the Boltzmann distribution can be understood as the unique distribution with the property that when a large (or even a small!) number of independent Boltzmann-distributed energies are added together, all the different ways of achieving the same total “energy” have the same probability of occurrence, as now explained.

For the sake of a mental image, consider a small volume of gas. This small volume can take on three distinct energy levels, say, 0, 1 and 2. Assume the choice of energy level is random, with corresponding probabilities p_0, p_1 and p_2. A large volume of the gas can be partitioned into many such small volumes, and the total energy is simply the sum of the energies of each of the small volumes.

Mathematically, let N denote the number of small volumes and let E_i denote the energy of the ith small volume, where i ranges from 1 to N. The total energy of the system is E = \sum_{i=1}^N E_i.

For a fixed N and a fixed total energy E, there may be many outcomes E_1,\cdots,E_N with the prescribed energy E. For example, with N=3 and E = 2, the possibilities are (2,0,0), (0,2,0), (0,0,2), (1,1,0), (1,0,1),(0,1,1). The probabilities of these possibilities are p_2p_0p_0, p_0p_2p_0, p_0p_0p_2, p_1p_1p_0, p_1p_0p_1,p_0p_1p_1 respectively. Here, we assume the small volumes are independent of each other (non-interacting particle model). Can we choose p_0, p_1,p_2 so that all the above probabilities are the same?

Since the ordering does not change the probabilities, the problem reduces to the following. Let N_0, N_1 and N_2 denote the number of small volumes which are in energy levels 0, 1 or 2 respectively. Then N_0 + N_1 + N_2 = N and N_1 + 2N_2 = E. For fixed N and E, we want p_0^{N_0} p_1^{N_1} p_2^{N_2} to be constant for all valid N_0, N_1, N_2.

A brute-force computation is informative: write N_1 = E - 2 N_2 and N_0 = N - N_1 - N_2 = N - E + N_2. Then p_0^{N_0} p_1^{N_1} p_2^{N_2} = p_0^N p_0^{-E} p_1^E p_0^{N_2} p_1^{-2N_2} p_2^{N_2}. This will be independent of N_2 if and only if (p_0 p_2 / p_1^2)^{N_2} is constant, or in other words, p_0 p_2 = p_1^2. Remarkably, this is valid in general, even for small N.

Up to a normalising constant, the solutions to p_0 p_2 = p_1^2 are parametrised by p_0 = 1 and p_2 = p_1^2. Varying p_1 varies the average energy of the system. Letting \beta = -\ln p_1, we can reparametrise the solutions by p_i = \exp\{- \beta i\}. (Check: p_0 = \exp\{0\} = 1; p_1 = \exp\{\ln p_1\} = p_1; and p_2 = \exp\{2 \ln p_1\} = p_1^2.) This is indeed the Boltzmann distribution when the energy levels are 0, 1 or 2.
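
This property is easy to check by brute force. The following sketch (my own) enumerates every configuration of N small volumes over the levels \{0,1,2\} with unnormalised probabilities p_i = \exp\{-\beta i\} and confirms that configurations with the same total energy all receive the same probability.

    import itertools, math

    beta, N = 0.7, 4                              # arbitrary choices
    p = [math.exp(-beta * i) for i in range(3)]   # unnormalised p_0, p_1, p_2

    probs = {}
    for config in itertools.product(range(3), repeat=N):
        E = sum(config)                           # total energy
        weight = math.prod(p[e] for e in config)
        probs.setdefault(E, set()).add(round(weight, 12))

    for E in sorted(probs):
        print(E, probs[E])   # exactly one distinct weight per total energy E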

More generally, we can show that the Boltzmann distribution is the unique distribution with the property that different configurations (or “microstates”) having the same N and E have the same probability of occurrence. In one direction, consider the (unnormalised) probabilities p_i = \exp\{-\beta E_i\} that define the Boltzmann distribution. Then \prod p_i^{N_i} = \exp\{-\beta \sum E_i N_i\} = \exp\{-\beta E\} where E = \sum E_i N_i is the total energy of the system. (Here, the ith region has energy E_i, and there are precisely N_i regions with this energy in the configuration under consideration.) Therefore, the probability of any particular configuration depends only on the total energy and is thus uniformly distributed conditioned on the total energy being known. In the other direction, although we omit the proof, we can derive the Boltzmann distribution in generality along the same lines as was done earlier for three specific energy levels: regardless of the total number of energy levels, it suffices to consider in turn the energy levels E_0, E_1 and E_i with probabilities p_0, p_1 and p_i. Assuming the N_j are zero except for j \in \{0,1,i\} then leads to a constraint on p_i / p_0 as a function of p_1 / p_0.

Some Comments on the Situation of a Random Variable being Measurable with respect to the Sigma-Algebra Generated by Another Random Variable

August 21, 2017

If Y is a \sigma(X)-measurable random variable then there exists a Borel-measurable function f \colon \mathbb{R} \rightarrow \mathbb{R} such that Y = f(X). The standard proof of this fact leaves several questions unanswered. This note explains what goes wrong when attempting a “direct” proof. It also explains how the standard proof overcomes this difficulty.

First some background. It is a standard result that \sigma(X) = \{X^{-1}(B) | B \in \mathcal{B}\} where \mathcal{B} is the set of all Borel subsets of the real line \mathbb{R}. Thus, if A \in \mathcal{B} then there exists an H \in \mathcal{B} such that Y^{-1}(A) = X^{-1}(H). Indeed, this follows from the fact that since Y is \sigma(X)-measurable, the inverse image Y^{-1}(A) of any Borel set A must lie in \sigma(X).

A “direct” proof would endeavour to construct f pointwise. The basic intuition (and not difficult to prove) is that Y must be constant on sets of the form X^{-1}(c) for c \in \mathbb{R}. This suggests defining f by f(x) = \sup Y(X^{-1}(x)). Here, the supremum is used to go from the set Y(X^{-1}(x)) to what we believe to be its only element, or to -\infty if X^{-1}(x) is empty (by the convention \sup \emptyset = -\infty). Unfortunately, this intuitive approach fails because the range of X need not be Borel. This causes problems because f^{-1}((-\infty,\infty)) is the range of X and must be Borel if f is to be Borel-measurable.

Technically, we need a way of extending the definition of f from the range of X to a Borel set containing the range of X, and moreover, the extension must result in a measurable function.

Constructing an appropriate extension requires knowing more about Y than simply Y(X^{-1}(x)) for each individual x. That is, we need a canonical representation of Y. Before we get to this though, let us look at two special cases.

Consider first the case when Y = I_{A} for some measurable set A, where I is the indicator function. If Y is \sigma(X)-measurable then Y^{-1}(1) must lie in \sigma(X) and hence there exists a Borel H such that Y^{-1}(1) = A = X^{-1}(H). Let f = I_H. (It is Borel-measurable because H is Borel.) To show Y = f \circ X, let \omega be arbitrary. (Recall, random variables are actually functions, conventionally indexed by \omega.) If \omega \in X^{-1}(H) then X(\omega) \in H and (f \circ X)(\omega) = 1, while Y(\omega) = 1 because \omega \in X^{-1}(H) = Y^{-1}(1). Otherwise, if \omega \not\in X^{-1}(H) then analogous reasoning shows both Y(\omega) and (f \circ X)(\omega) equal zero.

How did the above construction avoid the problem of the range of X not necessarily being Borel? The subtlety is that the choice of H need not be unique, and in particular, H may contain values which lie outside the range of X. Whereas a choice such as f(x) = \sup Y(X^{-1}(x)) assigns a single value (in this case, \infty) to values of x not lying in the range of X, the choice f = I_H can assign either 0 or 1 to values of x not in the range of X, and by doing so, it can make f  Borel-measurable.

Consider next the case when Y = I_{A_1} + 2 I_{A_2}. As above, we can find Borel sets H_i such that A_i = X^{-1}(H_i) for i=1,2, and moreover, f = I_{H_1} + 2 I_{H_2} gives a suitable f. Here, it can be readily shown that if x is in the range of X then x can lie in at most one H_i. Thus, regardless of how the H_i are chosen, f will take on the correct value whenever x lies in the range of X. Different choices of the H_i can result in different extensions of f, but each such choice is Borel-measurable, as required.

The above depends crucially on having only finitely many indicator functions. A frequently used principle is that an arbitrary measurable function can be approximated by a sequence of bounded functions, each being a finite sum of indicator functions (i.e., a simple function). Therefore, the general case can be handled by using a sequence Y_n of random variables converging pointwise to Y. Each Y_n results in an f_n obtained by replacing the A_i by H_i, as was done in the paragraph above. For x in the range of X, it turns out as one would expect: the f_n(x) converge, and f(x) = \lim f_n(x) gives the correct value for f at x. For x not in the range of X, there is no reason to expect the f_n(x) to converge: the choices of the H_i at the nth and (n+1)th steps are not coordinated in any way when it comes to which values to include from the complement of the image of X. Intuitively though, we could hope that each H_i includes a “minimal” extension that is required to make f measurable and that convergence takes place on this “minimal” extension. Thus, by choosing f(x) to be zero whenever f_n(x) does not converge, and choosing f(x) to be the limit of f_n(x) otherwise, we may hope that we have constructed a suitable f despite how the H_i at each step were chosen. Whether or not this intuition is correct, it can be shown mathematically that f so defined is indeed the desired function. (See for example the short proof of Theorem 20.1 in the second edition of Billingsley’s book Probability and Measure.)

Finally, it is remarked that sometimes the monotone class theorem is used in the proof. Essentially, the idea is exactly the same: approximate Y by a suitable sequence Y_n. The subtlety is that the monotone class theorem only requires us to work with indicator functions I_A where A is particularly nice (i.e., A lies in a \pi-system generating the \sigma-algebra of interest). The price of this nicety is that Y must be bounded. For the above problem, as Williams points out in his book Probability with Martingales, we can simply replace Y by \arctan Y to obtain a bounded random variable. On the other hand, there is nothing to be gained by working with particularly nice A_i, hence Williams’ admission that there is no real need to use the monotone class theorem (see A3.2 of his book).