
Monte Carlo Integration: Discretized Domain

The basic idea of Monte Carlo integration is to estimate the integral of some hard-to-integrate function over some domain. Consider the following diagram:

The square represents the domain of some function (call it \(g(\vec{v})\), with \(\vec{v}\) a 2D coordinate). In subdomain \(A\), \(g(\vec{v})=3\). In subdomain \(B\), \(g(\vec{v})=4\). In subdomain \(C\), \(g(\vec{v})=5\). Say that the whole square (domain) has area \(40*40=1600\) with subdomain \(A\) having area \(20*20=400\), subdomain \(B\) having area \(20*40=800\), and subdomain \(C\) having area \(20*20=400\).

To figure out the integral of \(g\) over this domain, we take a weighted sum: \[ \begin{aligned} \int_{domain}g(\vec{v})d\vec{v} &=\int_{A}g(\vec{v})d\vec{v} + \int_{B}g(\vec{v})d\vec{v} + \int_{C}g(\vec{v})d\vec{v}\\ &=400*3 + 800*4 + 400*5\\ &=6400 \end{aligned} \]Let's now add the Monte Carlo bit. Suppose I only want to evaluate \(g\) once, but I still want to estimate the whole integral. So, pick a subdomain (I picked \(C\)) and evaluate \(g\) at some random place inside of it (wherever I go, it happens to be \(5\)). I'm not allowed to draw any more samples, so that one \(5\) needs to tell us a guess for the entire integral.
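The weighted sum above can be checked with a few lines of code. The areas and values are taken directly from the text:

```python
# Exact integral of the piecewise-constant g over the 40x40 square,
# computed as a weighted sum of (area * value) over the subdomains.
subdomains = {
    "A": (400.0, 3.0),  # (area, value of g on that subdomain)
    "B": (800.0, 4.0),
    "C": (400.0, 5.0),
}
integral = sum(area * value for area, value in subdomains.values())
print(integral)  # 6400.0
```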

To do this, intuitively, I just scale it up: I know that the integral over subdomain \(C\) is \(400*5=2000\) (just as I did above), and the areas of \(A\) and \(B\) make the whole domain four times larger, so I produce a guess of \(8000\). The correct answer was \(6400\), but that's not bad for knowing only one value. The correct way to scale it up in general is to divide by the proportion of the subdomain's area to the domain's: \(2000~/~(400/1600)\). This may be obvious if you think about it. Here's the formula for this particular example of sampling \(C\):\[ \newcommand{\slfrac}[2]{\left.#1\middle/#2\right.} \int_{domain}g(\vec{v})d\vec{v} \approx \slfrac{ \left[ \int_{C}g(\vec{v})d\vec{v} \right] } { \left[ \frac{Area(C)}{Area(domain)} \right] } ~~~~~~~~~~~~~~~(1) \]
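The scaling in equation \((1)\) is short enough to spell out numerically, using the numbers from the text:

```python
# One-sample estimate per equation (1): integrate g over the chosen
# subdomain, then divide by that subdomain's share of the total area.
area_domain = 1600.0
area_C, g_in_C = 400.0, 5.0   # our one sample in C happened to be 5

integral_over_C = area_C * g_in_C        # 2000
proportion = area_C / area_domain        # 400/1600 = 1/4
estimate = integral_over_C / proportion  # 8000, vs. the true 6400
print(estimate)  # 8000.0
```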

Monte Carlo Integration: Continuous Domain

Just as before, we want to estimate the integral on the left-hand side with one sample. We compute probabilities instead of areas (and note the \(pdf(\cdots)\), since the distribution is continuous). Instead of integrating over subdomain \(C\), we consider the event of sampling some point \(v\) in the domain (which, as above, can have any dimension). The following is analogous to equation \((1)\):\[ \int_{domain}g(v)d v \approx \slfrac{ g(w) } { \left[\frac{ pdf(w) } { 1 }\right] } ~~~~~~~~~~~~~~~(2) \]The \(1\) in the denominator is the total probability of sampling anywhere in the domain, playing the role \(Area(domain)\) did in \((1)\). The only real difference here is the lack of an integral on the right-hand side. In \((1)\), that integral only served to multiply the value of the function by the area of the subdomain. Since in this case the subdomain is a single point associated with a probabilistic event, there's no such scaling to be done. Note that on the right-hand side, \(w\) is a single randomly chosen point, not a placeholder for the entire space like \(v\) is on the left-hand side.
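Equation \((2)\) can be sketched on the running example. The exact layout of the subdomains isn't reproduced here, so the geometry below (A top-left, C bottom-left, B the right half) is an assumption; only the areas and values come from the text. Sampling uniformly, \(pdf(w) = 1/1600\) everywhere:

```python
import random

# One-sample estimate per equation (2), for the example function g.
# Subdomain layout is ASSUMED (the diagram isn't reproduced here):
#   A = [0,20)x[20,40), C = [0,20)x[0,20), B = [20,40)x[0,40).
def g(x, y):
    if x < 20.0:
        return 3.0 if y >= 20.0 else 5.0  # A on top, C on bottom (assumed)
    return 4.0                            # B is the right half (assumed)

pdf = 1.0 / 1600.0  # uniform density over the 40x40 domain
wx, wy = random.uniform(0.0, 40.0), random.uniform(0.0, 40.0)
estimate = g(wx, wy) / pdf  # one-sample estimate of the integral
print(estimate)  # 4800, 6400, or 8000, depending on where w landed
```

Note that a single sample can only ever produce one of three answers here; it's the average over many such samples that homes in on \(6400\).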

As it turns out, equation \((2)\) works because of this formula. If you remember nothing else, remember this:\[ \int_{domain}g(x) d x = E\left[\frac{g(x)}{pdf(x)}\right] ~~~~~~~~~~~~~~~(3) \]Here \(E[\cdots]\) is the expected value (a weighted average) of its argument, a random variable. The \(pdf(x)\) can be any probability density function with the same support (domain) as the integral. Equation \((3)\) holds exactly; averaging finitely many samples of \(g(x)/pdf(x)\) estimates the right-hand side, and hence the integral, only approximately.
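Equation \((3)\) can be demonstrated empirically by averaging many independent samples of \(g(x)/pdf(x)\); the average should approach \(6400\). As in any concrete demo, the subdomain layout is an assumption (A top-left, C bottom-left, B the right half); the areas and values are from the text:

```python
import random

# Estimate E[g(x)/pdf(x)] by averaging many uniform samples (pdf = 1/1600).
# Subdomain layout is ASSUMED; areas/values match the running example.
def g(x, y):
    if x < 20.0:
        return 3.0 if y >= 20.0 else 5.0  # A (top-left), C (bottom-left)
    return 4.0                            # B (right half)

pdf = 1.0 / 1600.0
n = 100_000
total = 0.0
for _ in range(n):
    x, y = random.uniform(0.0, 40.0), random.uniform(0.0, 40.0)
    total += g(x, y) / pdf
print(total / n)  # close to 6400, up to Monte Carlo noise
```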

Ian Mallett - 2018 - Creative Commons License