Math Formulae Quick Reference

I do a lot of math, but there are some formulae that just don't stick in my head for whatever reason. It's high time I collected them in one place so I can memorize them, or at least look them up quickly without having to search for things and then convert to a reasonable (and in some cases, fixed) notation.

This page will not list every formula. Some (e.g. the quadratic formula) I know cold. Some (e.g. the Pythagorean trig identity) are trivial. Moreover, because this is a quick reference, only formulae I actually use in practice are listed here. And of course, there are simply too many formulae to list them all.

For the above reasons, formula suggestions will probably not be accepted. However, any corrections are definitely welcome.


Basic Definitions

\begin{align*} \vec{u} \times \vec{v} &:= \begin{bmatrix} u_1~v_2 ~-~ u_2~v_1\\ u_2~v_0 ~-~ u_0~v_2\\ u_0~v_1 ~-~ u_1~v_0 \end{bmatrix}\\ \sinh(x) &= (e^x - e^{-x})/2\\ \cosh(x) &= (e^x + e^{-x})/2 \end{align*}
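In code (a minimal Python sketch of the 0-indexed cross-product definition above):

import math
def cross( u, v ):
	"""Cross product of two 3-vectors (0-indexed sequences)."""
	return [ u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0] ]
print( cross([1,0,0],[0,1,0]) )  # [0, 0, 1], i.e. x̂×ŷ=ẑ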

Gamma Function and Factorials[1]

\begin{align*} n! ~~&\approx~~ \sqrt{2\pi n} ~(n/e)^n \\ \binom{n}{k} ~~&=~~ \frac{n!}{(n-k)!~k!} ~~=~~ (n)_k \frac{1}{k!} ~~=~~ \frac{\Gamma(n+1)}{\Gamma(n-k+1)~\Gamma(k+1)}\\ \end{align*} \begin{alignat*}{3} & \Gamma(n+1) &&= n! = \Pi(n) \hspace{1cm} && n \text{ a nonnegative integer}\\ & \Gamma(n ) &&= (n-1)! && n \text{ a positive integer}\\ \end{alignat*}
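A quick numeric spot-check of Stirling and the binomial forms (Python standard library only):

import math
n, k = 20, 6
print( math.sqrt(2*math.pi*n) * (n/math.e)**n / math.factorial(n) )  # ~0.9958: Stirling is ~0.4% low at n=20
print( math.comb(n,k), round( math.gamma(n+1) / (math.gamma(n-k+1)*math.gamma(k+1)) ) )  # same value both ways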

The falling (\(x^{\underline{k}}\)) and rising (\(x^{\overline{k}}\)) factorials are defined as follows, for \(x\) and \(x+k\) real and not negative integers. Note the notational issue with the Pochhammer symbol \((x)_k\)[2]!

\begin{align*} x^{\underline{k}} &= \frac{x!}{(x-k)!} = \frac{\Gamma(x+1)}{\Gamma(x-k+1)} \\ &= (x)_k \hspace{1cm}\text{(everywhere but the hypergeometric function)} \\[1.5em] x^{\overline{k}} &= \frac{(x+k-1)!}{(x-1)!} = \frac{\Gamma(x+k)}{\Gamma(x)} \\ &= \begin{cases} x^{(k)} & \hspace{1cm}\text{(elsewhere)} \\ (x)_k & \hspace{1cm}\text{(hypergeometric function)} \end{cases} \end{align*}

Gamma function identities: \begin{alignat*}{3} & \Gamma(1+z) &&= z ~\Gamma(z) && \\ & \Gamma(1-z)\Gamma(z) &&= \pi / \sin(\pi z) && z \notin \mathbb{Z} \\ & \Gamma(z)~\Gamma(z+1/2) ~~&&=~~ 2^{1-2z} \sqrt{\pi} ~\Gamma(2z) && \end{alignat*}
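These identities are easy to spot-check numerically:

import math
z = 0.3
print( math.gamma(1-z)*math.gamma(z), math.pi/math.sin(math.pi*z) )                          # reflection
print( math.gamma(z)*math.gamma(z+0.5), 2**(1-2*z) * math.sqrt(math.pi) * math.gamma(2*z) )  # duplication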


Matrices

The Jacobian:

\[ \nabla \vec{f} = J = \begin{bmatrix} \dfrac{ \partial \vec{f} }{ \partial x_1 } & \cdots & \dfrac{ \partial \vec{f} }{ \partial x_n } \end{bmatrix} = \begin{bmatrix} \dfrac{ \partial f_1 }{ \partial x_1 } & \cdots & \dfrac{ \partial f_1 }{ \partial x_n }\\ \vdots & \ddots & \vdots\\ \dfrac{ \partial f_m }{ \partial x_1 } & \cdots & \dfrac{ \partial f_m }{ \partial x_n } \end{bmatrix}\\ \]

For a scalar function, the Jacobian is a single row: the 'gradient'. For a vector field \(\vec{f}\), there is also the 'divergence':

\begin{alignat*}{5} \nabla f ~&=~ \text{grad }f ~&=&~ \begin{bmatrix} \dfrac{ \partial f }{ \partial x_1 } & \cdots & \dfrac{ \partial f }{ \partial x_n } \end{bmatrix} \hspace{1cm}&&\text{(gradient)}\\ \nabla \dotprod \vec{f} ~&=~ \text{div }\vec{f} ~&=&~~~ \dfrac{ \partial f_1 }{ \partial x_1 } + \cdots + \dfrac{ \partial f_n }{ \partial x_n } \hspace{1cm}&&\text{(divergence)} \end{alignat*}

For a 3D vector field, there is also the 'curl':

\[ \nabla \times \vec{f} ~=~ \text{curl }\vec{f} ~=~ \begin{vmatrix} \vec{\hat{x}} & \vec{\hat{y}} & \vec{\hat{z}} \\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z} \\ f_x & f_y & f_z \end{vmatrix} \hspace{1cm}\text{(curl)} \]

Rotation matrices in 2D and in 3D:

\begin{align*} R(\theta) &= \begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \\ \end{bmatrix}\\ R_x(\theta) &= \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\theta) & -\sin(\theta) \\ 0 & \sin(\theta) & \cos(\theta) \end{bmatrix} \hspace{1cm} R_y(\theta) = \begin{bmatrix} \cos(\theta) & 0 & \sin(\theta) \\ 0 & 1 & 0 \\ -\sin(\theta) & 0 & \cos(\theta) \end{bmatrix} \hspace{1cm} R_z(\theta) = \begin{bmatrix} \cos(\theta) & -\sin(\theta) & 0 \\ \sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \end{bmatrix} \end{align*}
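In code (a minimal sketch; row-major nested lists, angles in radians):

import math
def rot2d( theta:float ):
	"""2D rotation matrix R(θ)."""
	c, s = math.cos(theta), math.sin(theta)
	return [ [c,-s], [s,c] ]
def rot_z( theta:float ):
	"""3D rotation about the z axis; R_x and R_y follow the same pattern."""
	c, s = math.cos(theta), math.sin(theta)
	return [ [c,-s,0], [s,c,0], [0,0,1] ]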

Determinants[3] for 2D and 3D matrices:

\begin{align*} \begin{vmatrix} a & b \\ c & d \end{vmatrix}\hspace{5mm} &= a d - b c \\ \begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} &= ( a e i + b f g + c d h ) - ( a f h + b d i + c e g )\\ &= a \begin{vmatrix} e & f \\ h & i \end{vmatrix} - b \begin{vmatrix} d & f \\ g & i \end{vmatrix} + c \begin{vmatrix} d & e \\ g & h \end{vmatrix} \end{align*}
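In code (cofactor expansion along the first row):

def det3( m ):
	"""Determinant of a 3×3 matrix given as nested lists."""
	(a,b,c), (d,e,f), (g,h,i) = m
	return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)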

Logarithm and Trig Identities[4]

\begin{align*} \log_b(x\pm y) &= \log_b(x) + \log_b\!\left(1 \pm \frac{y}{x}\right) \end{align*} \begin{align*} \sin(x\pm y) ~~&=~~ \sin(x)\cos(y) \pm \cos(x)\sin(y)\\ \cos(x\pm y) ~~&=~~ \cos(x)\cos(y) \mp \sin(x)\sin(y)\\ \tan(x\pm y) ~~&=~~ \frac{\tan(x) \pm \tan(y)}{1 \mp \tan(x)\tan(y)}\\ \sin(2\theta) ~~&=~~ 2\sin(\theta)\cos(\theta) ~~=~~ (\sin(\theta)+\cos(\theta))^2-1\\ \cos(2\theta) ~~&=~~ \cos^2(\theta)-\sin^2(\theta) ~~=~~2\cos^2(\theta)-1 ~~=~~1-2\sin^2(\theta)\\ \sin(\theta/2) ~~&=~~ (\sgn) \sqrt{(1-\cos(\theta))/2}\\ \cos(\theta/2) ~~&=~~ (\sgn) \sqrt{(1+\cos(\theta))/2}\\ \sin^2(\theta) ~~&=~~ (1-\cos(2\theta))/2\\ \cos^2(\theta) ~~&=~~ (1+\cos(2\theta))/2\\ a\cdot\cos(\theta) + b\cdot\sin(\theta) ~~&=~~ \left(\sgn(a)\sqrt{a^2+b^2}\right) \cos( \theta+\arctan(-b/a) ) \end{align*} \begin{alignat*}{3} & \sin(\arcsin(x)) = x \hspace{5mm} && \cos(\arcsin(x)) = \sqrt{1-x^2} \hspace{5mm} && \tan(\arcsin(x)) = x / \sqrt{1-x^2} \\ & \sin(\arccos(x)) = \sqrt{1-x^2} \hspace{5mm} && \cos(\arccos(x)) = x \hspace{5mm} && \tan(\arccos(x)) = \left(\sqrt{1-x^2}\right) / x \\ & \sin(\arctan(x)) = x / \sqrt{1+x^2} \hspace{5mm} && \cos(\arctan(x)) = 1 / \sqrt{1+x^2} \hspace{5mm} && \tan(\arctan(x)) = x \end{alignat*} \begin{align*} \cosh^2(x) - \sinh^2(x) &= 1 \end{align*}
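The harmonic-addition identity (the last one in the first block) is the easiest to get wrong, so here's a numeric spot-check:

import math
a, b, theta = 3.0, -4.0, 1.25
print( a*math.cos(theta) + b*math.sin(theta),
       math.copysign(math.hypot(a,b), a) * math.cos(theta + math.atan(-b/a)) )  # should agree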

Series

Sum of an arithmetic series[5]:

\[ a_1 + a_2 + \cdots + a_n = n \frac{a_1+a_n}{2} \]

Finite and infinite geometric series[6]:

\begin{align*} \sum_{k=0}^n r^k &= \frac{1-r^{n+1}}{1-r}\\ \sum_{k=0}^\infty z^k &= \frac{1}{1-z},\hspace{1cm}\text{requires }|z| < 1 \end{align*}

Binomial series[7]:

\[ (1+z)^a = \begin{cases} \displaystyle \sum_{k=0}^\infty \binom{a}{k} z^k \hspace{4mm}\text{ if } |z| < 1 \\ \displaystyle \sum_{k=0}^\infty \binom{a}{k} z^{a-k} \text{ if } |z|> 1 \end{cases} \]

Taylor series definition and common functions[8]:

\begin{align*} f(z) &= \sum_{k=0}^\infty \frac{f^{(k)}(a)}{k!} (z-a)^k \\ e^z &= \sum_{k=0}^\infty \frac{1}{k!} z^k,\hspace{1cm}\text{all } z\\ \ln(1 - z) &= \begin{cases} \displaystyle \hspace{16mm}-\sum_{k=1}^\infty \frac{1}{k } z^{+k},\hspace{1cm}\text{if } |z|\leq 1 \wedge z\neq 1 \\ \displaystyle \ln(-z) -\sum_{k=1}^\infty \frac{1}{k } z^{-k},\hspace{1cm}\text{if } |z|\geq 1 \wedge z\neq 1 \end{cases}\\ \sin(z) &= \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!} z^{2k+1},\hspace{1cm}\text{all } z\\ \cos(z) &= \sum_{k=0}^\infty \frac{(-1)^k}{(2k )!} z^{2k },\hspace{1cm}\text{all } z\\ \end{align*}
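Partial sums of these converge quickly near the expansion point; e.g. for sine:

import math
def sin_taylor( z:float, terms:int=10 ) -> float:
	"""Partial sum of the sine Taylor series about 0."""
	return sum( (-1)**k * z**(2*k+1) / math.factorial(2*k+1) for k in range(terms) )
print( sin_taylor(1.0), math.sin(1.0) )  # agree to ~machine precision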

Special Functions

Hypergeometric functions (note using atypical but unambiguous Knuth notation[2]):

\begin{alignat*}{3} {_p}F_q( a_1,\cdots,a_p ; b_1,\cdots,b_q ; z) &= \sum_{k=0}^\infty \frac{ a_1^\overline{k} \cdots a_p^\overline{k} }{ b_1^\overline{k} \cdots b_q^\overline{k} } \frac{z^k}{k!} \hspace{1cm}&&\text{(generalized hypergeometric function)} \\ {_2}F_1(a_1,a_2;b;z) & &&\text{(ordinary/Gaussian hypergeometric function)} \\ M( a,b, z) &= {_1}F_1(a;b;z) &&\text{(Kummer's confluent hypergeometric function)} \end{alignat*}

Bessel functions of the first kind:

\[ J_\alpha(x) = \sum_{k=0}^\infty \frac{(-1)^k}{k!~\Gamma{(k+\alpha+1)}} \left(\frac{x}{2}\right)^{2k+\alpha} \]
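The series converges for all \(x\), but a naive partial sum is only numerically sensible for moderate \(|x|\) (the terms grow before they shrink):

import math
def bessel_j( alpha:float, x:float, terms:int=30 ) -> float:
	"""Partial sum of the J_α series above; fine for moderate |x|."""
	return sum( (-1)**k / (math.factorial(k) * math.gamma(k+alpha+1)) * (x/2)**(2*k+alpha)
	            for k in range(terms) )
print( bessel_j(0, 2.404825557695773) )  # ~0: the first zero of J₀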

Error function:

\begin{align*} \erf(z) &= \frac{2}{\sqrt{\pi}} \int_0^z \exp(-t^2)~dt \\ \erfc(z) &= 1 - \erf(z) \\[1.5em] \Phi(x) &= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x \exp(-t^2/2)~dt\hspace{1cm}\text{(cumulative distribution function)}\\ &= \frac{1}{2}\left( 1 + \erf\left(\frac{x}{\sqrt{2}}\right) \right) \end{align*}
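Python's standard library has the error function built in, which gives \(\Phi\) directly:

import math
def phi( x:float ) -> float:
	"""Standard normal CDF, via the erf relation above."""
	return 0.5 * (1.0 + math.erf( x / math.sqrt(2.0) ))
print( phi(0.0), phi(1.96) )  # 0.5, ~0.975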

Derivatives and Integrals

Logarithms (see more):

\begin{align*} \int \log_b(z) ~ d z = \frac{1}{\ln(b)} z~(\ln(z) - 1) + C \end{align*}

Trigonometric (see more)[9]:

\[ \begin{matrix} \hspace{1mm} (d/d\theta) \sin(\theta) = +\cos(\theta) & \hspace{3mm} (d/dy) \sin^{-1}(y) = +1 / \sqrt{1-y^2} \\ (d/d\theta) \cos(\theta) = -\sin(\theta) & \hspace{2mm} (d/dy) \cos^{-1}(y) = -1 / \sqrt{1-y^2} \\ \hspace{13mm} (d/d\theta) \csc(\theta) = -\csc(\theta)\cot(\theta) & \hspace{11mm} (d/dy) \csc^{-1}(y) = -1 / \sqrt{ y^2~(y^2-1) } \\ \hspace{13mm} (d/d\theta) \sec(\theta) = +\sec(\theta)\tan(\theta) & \hspace{11mm} (d/dy) \sec^{-1}(y) = +1 / \sqrt{ y^2~(y^2-1) } \\ \hspace{1mm} (d/d\theta) \tan(\theta) = +\sec^2(\theta) & (d/dy) \tan^{-1}(y) = +1 / ( 1 + y^2 ) \\ \hspace{2mm} (d/d\theta) \cot(\theta) = -\csc^2(\theta) & \hspace{1mm} (d/dy) \cot^{-1}(y) = -1 / ( 1 + y^2 ) \end{matrix} \] \begin{align*} \int \csc(a\theta)~d\theta &= -\frac{1}{a}\ln\left| \csc(a\theta) + \cot(a\theta)\right| + C\\ \int \sec(a\theta)~d\theta &= +\frac{1}{a}\ln\left| \sec(a\theta) + \tan(a\theta)\right| + C\\ \int \tan(a\theta)~d\theta &= -\frac{1}{a}\ln|\cos(a\theta)| + C\\ \int \cot(a\theta)~d\theta &= +\frac{1}{a}\ln|\sin(a\theta)| + C\\ \int \sin^k(a x) ~ d x &= -\frac{\cos(a x) \sin^{k-1}(a x)}{a k} + \frac{k-1}{k} \int \sin^{k-2}(a x) ~ d x \text{, }\hspace{1cm}k>0\\ \int \cos^k(a x) ~ d x &= +\frac{\sin(a x) \cos^{k-1}(a x)}{a k} + \frac{k-1}{k} \int \cos^{k-2}(a x) ~ d x \text{, }\hspace{1cm}k>0\\ \end{align*}

Integration: Techniques

The sad but kind of awesome reality is that CASes are better than humans at most integration, and the 'classic' integration techniques are special cases. However, humans can still make intractable problems tractable by framing the problem differently, applying certain transforms or theorems, and finally using series expansions to control a non-elementary result.

With that in mind, this is roughly how you should integrate functions[10]:

  1. If it's very easy (obvious \(u\)-substitutions, function inverses, exploitable symmetries, shifts to integration limits, change of variables), do that. But even for moderately easy problems, it's better to:
  2. Re-think the problem context. Is there a change of variables or different integration order?
  3. Ask a CAS 🙂 (see the sketch after this list)
  4. If you don't have an answer yet, the problem is very difficult. Seriously: consider asking the internet or giving up. If you press on:
  5. For rational functions, do Hermite reduction. This is basically partial-fraction decomposition but better[11]. Ask a CAS to do this.
  6. Do polynomial reduction[12]. Ask a CAS to do this.
  7. Try applying integral transforms or vector-calculus theorems as applicable.
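A sketch of steps 3 and 5, assuming SymPy is available (any CAS works):

import sympy as sp
x = sp.symbols('x')
print( sp.integrate( sp.sqrt(x**2 + 1), x ) )  # step 3: just ask
print( sp.apart( (x**3 + 2)/(x**2 - 1), x ) )  # step 5: partial fractions; integrate() does Hermite-style reduction internally
print( sp.integrate( (x**3 + 2)/(x**2 - 1), x ) )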

Some general tips:

  • For advanced techniques, solve a simple test problem first so you understand how it works. Check your work with simpler cases and numerically. Takes a bit of time, but can save a lot of time.
  • Exponents on functions are awful. You usually need to fix that as soon as possible, or any other work is pointless. If a substitution doesn't work, try a reduction, or at least a power series, integrating term by term.
  • Manipulate individual pieces in isolation; it's much faster to write and takes less space.

Integration Technique: Early Tricks

  • Check for shifts or reciprocals that you can apply to the integration limits. Also consider walking the integral backward.

    Shifting is particularly good for infinite limits because they don't change. You can shift off constants and even reciprocals:

    \[ \int_{-\infty}^\infty f(x) ~dx = \int_{-\infty}^\infty f\!\left( x - c - \sum_{k=1}^n \frac{a_k}{x-b_k} \right) ~dx \text{, }\hspace{1cm} a_k > 0 \]
  • Check for even/odd sections of the integral you may be able to just throw away (or add!).

Integration Technique: Trig and \(\tan(\theta/2)\) Substitution

The usual way to do trig substitution is to describe an implied right triangle. For example, \(\sqrt{a^2-x^2}\) seems like one leg of a right triangle, with the other leg being \(x\) and the hypotenuse being \(a\). Consider the two 'simple' (non-square-root) sides of the triangle. Place \(\theta\) such that you can write \(x\) as some simple trigonometric function (in this case, place \(\theta\) opposite from \(x\) and write \(x=a\cdot\sin(\theta)\)).

This procedure results in the following substitutions:

  • Integral has \(\sqrt{a^2-x^2}\) . . . use \(x=a\cdot\sin(\theta)\).
  • Integral has \(\sqrt{x^2-a^2}\) . . . use \(x=a\cdot\sec(\theta)\).
  • Integral has \(a^2+x^2\) . . . use \(x=a\cdot\tan(\theta)\).

Also consider:

  • Integral has \(\sqrt{a^2+x^2}\) . . use \(x=a\cdot\sinh(\theta)\) or \(x=a\cdot\cosh(\theta)\).

To remove trig functions, the obvious substitutions \(u=\sin(\theta)\) or \(u=\cos(\theta)\) work. However, these introduce square roots, which can be annoying. The tangent half-angle substitution[13], by contrast, does not:

\begin{gather*} t := \tan(\theta/2)\\ \sin(\theta) = \frac{2t}{1+t^2} \text{, }\hspace{1cm} \cos(\theta) = \frac{1-t^2}{1+t^2} \text{, }\hspace{1cm} d\theta = \frac{2}{1+t^2} d t \end{gather*}

Example: easily computes \(\int \csc(\theta) ~d\theta = \ln|\tan(\theta/2)|+C\)
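A numeric spot-check of the substitution identities:

import math
theta = 0.8
t = math.tan( theta/2 )
print( math.sin(theta), 2*t/(1+t*t) )
print( math.cos(theta), (1-t*t)/(1+t*t) )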


Integration Technique: Euler Substitution

Use Euler substitution[14] when the integrand is a rational function involving the square-root of a quadratic:

\[ \int R\!\left( x, \sqrt{A x^2 + B x + C} \right) ~d x \]

Substitution of the first kind—use when \(A>0\). (Yes, I know \(x\) is on the RHS; that's deliberate. Solve for \(t\) or \(x\) as needed. You can choose the sign, as shown.)

\[ \sqrt{A x^2 + B x + C} = \pm (\sqrt{A})~x + t ~~~\leftrightarrow~~~ x = \frac{ C - t^2 }{ \pm (2\sqrt{A})~t - B } \]

Substitution of the second kind—use when \(C>0\). (Again, you can choose the sign.)

\[ \sqrt{A x^2 + B x + C} = x~t \pm \sqrt{C} ~~~\leftrightarrow~~~ x = \frac{ \pm (2\sqrt{C})~t - B }{ A - t^2 } \]

Substitution of the third kind—use when you don't mind extracting roots \(\alpha\) and \(\beta\):

\[ \sqrt{A x^2 + B x + C} = \sqrt{A (x-\alpha) (x-\beta)} = (x-\alpha)~t ~~~\leftrightarrow~~~ x = \frac{ A\beta - \alpha t^2 }{ A - t^2 } \]
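A numeric sanity check of the first kind (taking the '+' sign), using only the standard library:

import math
A, B, C = 2.0, 1.0, 3.0                            # arbitrary coefficients with A > 0
x = 0.7
t = math.sqrt( A*x*x + B*x + C ) - math.sqrt(A)*x  # from the '+' substitution
print( x, (C - t*t) / (2*math.sqrt(A)*t - B) )     # recovers x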

Integration Technique: Inverse

If the integrand is an inverse of some better-known function, it can be integrated using[15]:

\[ \int f^{-1}(y)~d y = y~f^{-1}(y) - (F \circ f^{-1})(y) + C \]

In case it isn't clear, \(f\) is the function, \(f^{-1}\) is its inverse, and \(F\) is the antiderivative of \(f\). This works for any continuous and invertible \(f\) (it doesn't have to be differentiable). In the complex domain, it works for at least all (bi)holomorphic functions.
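For example, with \(f=\exp\) (so \(f^{-1}=\ln\) and \(F=\exp\)), the formula gives \(\int \ln(y)~dy = y\ln(y) - y + C\). A quick numeric derivative check:

import math
F_inv_integral = lambda y: y*math.log(y) - y  # y·ln(y) − exp(ln(y))
y, h = 2.0, 1e-6
print( (F_inv_integral(y+h) - F_inv_integral(y-h)) / (2*h), math.log(y) )  # both ≈ ln(2)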


Integration Technique: Complex Contour

Residue Theorem: if \(f\) is holomorphic (differentiable) within a region except at a finite set of points \(\{c_k\}\), then a closed contour integral over \(\gamma\) within that region (with winding numbers \(W(\gamma,c_k)\) for each \(c_k\)) can be found as a sum of residues:

\[ \oint_\gamma f(z) ~d z = 2\pi i \sum_k W(\gamma,c_k) \Res(f,c_k) \]

A residue is \(1/(2\pi i)\) times the contour integral around the point, with the contour enclosing no other singularities (see the definition as the first equation below; prove it by reversing the previous formula). Calculating residues can be much easier than evaluating the original integral:

\begin{alignat*}{3} &\Res\!\left(~ f(z),~ c ~\right) &&:= \frac{1}{2\pi i} \oint_{\gamma_c} f(z) ~d z &&\text{ (definition; $\gamma_c$ encloses only $c$)} \\ &\Res\!\left(~ f(z),~ c ~\right) &&= a_{-1} &&\text{ where } f(z) = \sum_{k=-\infty}^\infty a_k (z-c)^k \text{ (Laurent or Taylor series)} \\ &\Res\!\left(~ f(z),~ c ~\right) &&= 0 &&\text{ when $f$ can be analytically continued at $c$} \\ &\Res\!\left(~ f(z),~ c ~\right) &&= \lim_{z \rightarrow c}~ (z-c) f(z) &&\text{ when $f$ has a simple pole at $c$} \\ &\Res\!\left(~ f(z),~ c ~\right) &&= \frac{1}{(n-1)!} \lim_{z \rightarrow c}~ \frac{d^{n-1}}{d z^{n-1}} (z-c)^n f(z) &&\text{ when $f$ has a pole of order $n$ at $c$} \\ &\Res\!\left(~ z^k,~ 0 ~\right) &&= \begin{cases} 1 & \text{ if } k = -1\\ 0 & \text{ if } k \in \mathbb{Z},~ k \neq -1 \end{cases} && \\ &\Res\!\left(~ f(z),~ \infty ~\right) &&= -\Res\!\left(~ f(1/z)/z^2,~ 0 ~\right) && \\ \end{alignat*}
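SymPy can compute residues directly, which makes the recipes below easy to check:

import sympy as sp
z = sp.symbols('z')
print( sp.residue( 1/(z**2 + 1), z, sp.I ) )  # -I/2 (simple pole at z = i)
# Closing the contour in the upper half-plane picks up only z = i:
# ∫_{-∞}^{∞} dx/(1+x²) = 2πi·(−I/2) = π.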

For real-valued integrals, there are several usual approaches:

  • Transform the integral directly into a contour integral that encloses at least one of the singularities of the integrand, then apply the residue theorem. For example, using \(z=e^{i\theta}\) on \(\int_0^{2\pi} \cdots d\theta\) produces a contour integral around the unit circle.
  • Define a contour that follows the real-axis along the region of interest and a return arc in the complex plane, enclosing at least one of the singularities of the integrand. Apply the residue theorem. Show that the return arc has integral zero, so the result is the value of the real integral. Usually, you're solving \(\int_{-\infty}^\infty \cdots d x\) and the return arc disappears as it goes to infinity.

There are also two special cases of the residue theorem[16] that are common, useful, and more intuitive for this:

Cauchy's Integral Theorem[17]: if \(h\) is holomorphic within a region, then a closed contour integral within that region is zero:

\[ \oint_\gamma h(z) ~d z = 0 \]

Cauchy's Integral Formula[18]: if \(h\) is holomorphic within a region, then a closed contour integral of the related function \(h/(z-c)\) (var. \(h/(z-c)^n\)) within that region, with winding number \(1\), can be found by function evaluation:

\[ \oint_\gamma \frac{h(z)}{z-c} ~d z = 2\pi i ~h(c) \text{,}\hspace{1cm} \oint_\gamma \frac{h(z)~d z}{(z-c)^n} = \frac{2\pi i}{(n-1)!} h^{(n-1)}(c) \]

For trig in particular, using \(z=e^{i \theta}\) will often be helpful:

\[ \begin{aligned} z&=e^{i \theta} \\ d z/(i z)&=d\theta \end{aligned} \hspace{1cm} \begin{aligned} \sin(k \theta)&=(z^k-z^{-k})/(2i) \\ \cos(k \theta)&=(z^k+z^{-k})/2 \end{aligned} \]
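For instance, \(\int_0^{2\pi} d\theta/(2+\cos\theta)\) becomes a contour integral with one pole inside the unit circle, giving \(2\pi/\sqrt{3}\); a quick numeric check:

import math
n = 100000
print( sum( 1.0/(2.0 + math.cos(2*math.pi*k/n)) for k in range(n) ) * 2*math.pi/n,
       2*math.pi/math.sqrt(3) )  # both ≈ 3.6276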

Fun facts (Parseval–Gutzmer formula):

\[ \text{if } f(z) = \sum^\infty_{k=0} a_k ~z^k \text{, then } \int_0^{2\pi} |f(re^{i\theta})|^2 ~d\theta = 2\pi \sum^\infty_{k=0} |a_k|^2 ~r^{2k} \]

Integration Technique: Change of Variables

The multi-variable generalization of \(u\)-substitution follows. Let \(\vec{x}=T(\vec{u})\) be a transformation. Then the integral over coordinates \(\vec{x}\) can be re-expressed as an integral over coordinates \(\vec{u}\) as[19]:

\[ \int_A f(\vec{x}) ~d(\vec{x}) = \int_{T^{-1}(A)} f(T(\vec{u})) ~| J(T) | ~d(\vec{u}) \]

Thus in particular for polar coordinates[20]:

\begin{gather*} T(r,\theta) = \begin{bmatrix} x(r,\theta) = r \cos(\theta) \\ y(r,\theta) = r \sin(\theta) \end{bmatrix} \hspace{1cm} J(T) = \begin{bmatrix} \dfrac{ \partial x }{ \partial r } & \dfrac{ \partial x }{ \partial\theta } \\ \dfrac{ \partial y }{ \partial r } & \dfrac{ \partial y }{ \partial\theta } \end{bmatrix} = \begin{bmatrix} \cos(\theta) & -r\sin(\theta) \\ \sin(\theta) & r\cos(\theta) \end{bmatrix} \\ \iint_A f(x,y) ~dx ~dy = \iint_{T^{-1}(A)} f(T(r,\theta)) ~r ~d r ~d\theta \end{gather*}

And for spherical coordinates[21]:

\begin{gather*} T(r,\theta,\phi) = \begin{bmatrix} x(r,\theta,\phi) = r \cos(\theta) \sin(\phi) \\ y(r,\theta,\phi) = r \sin(\theta) \sin(\phi) \\ z(r,\theta,\phi) = r \hspace{11.6mm} \cos(\phi) \end{bmatrix}\\ J(T) = \begin{bmatrix} \dfrac{ \partial x }{ \partial r } & \dfrac{ \partial x }{ \partial\theta } & \dfrac{ \partial x }{ \partial\phi } \\ \dfrac{ \partial y }{ \partial r } & \dfrac{ \partial y }{ \partial\theta } & \dfrac{ \partial y }{ \partial\phi } \\ \dfrac{ \partial z }{ \partial r } & \dfrac{ \partial z }{ \partial\theta } & \dfrac{ \partial z }{ \partial\phi } \end{bmatrix} = \begin{bmatrix} \cos(\theta) \sin(\phi) & -r \sin(\theta) \sin(\phi) & r \cos(\theta) \cos(\phi) \\ \sin(\theta) \sin(\phi) & r \cos(\theta) \sin(\phi) & r \sin(\theta) \cos(\phi) \\ \cos(\phi) & 0 & -r \sin(\phi) \end{bmatrix} \\ \iiint_V f(x,y,z) ~dx ~dy ~dz = \iiint_{T^{-1}(V)} f(T(r,\theta,\phi)) ~r^2 \sin(\phi) ~d r ~d\phi ~d\theta \end{gather*}
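The Jacobian determinants are easy to double-check with SymPy:

import sympy as sp
r, th, ph = sp.symbols('r theta phi', positive=True)
T = sp.Matrix([ r*sp.cos(th)*sp.sin(ph),
                r*sp.sin(th)*sp.sin(ph),
                r*sp.cos(ph) ])
print( sp.simplify( T.jacobian([r, th, ph]).det() ) )  # -r**2*sin(phi); take |det| = r²·sin(φ)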

Integration Technique: Feynman

As a basic explanation[22], let's say we want to know the following integral:

\[ I = \int_a^b f(x) ~ d x \]

We 'generalize' the integral by defining a new \(g(x,t)\), such that for some \(t=t_f\) we have \(g(x,t_f)=f(x)\). Notice that if we can compute \(G(t_f)\), we've solved our problem, since \(G(t_f) = I\)!

\[ G(t) := \int_a^b g(x,t) ~ d x \]

We start by evaluating \(G\) at any convenient \(t=t_0\) that makes the integral easy, giving \(G(t_0)=G_0\).

Then, we differentiate \(G(t)\) under the integral and integrate that. The idea is that this differentiation makes the integrand simpler, allowing us to solve it:

\[ \frac{d}{d t} G(t) = \int_a^b \left( \frac{\partial}{\partial t} g(x,t) \right) d x = h(t) \]

Finally, we get our answer by integrating out to our desired \(t=t_f\). This is just the fundamental theorem of calculus:

\[ I = G(t_f) = G_0 + \int_{t_0}^{t_f} h(t) ~ dt \]

The best demonstration I've seen is for \(f(x):=(x^2-1)/\ln(x)\). Many methods will prove difficult, but the Feynman integration generalization \(g(x,t):=(x^t-1)/\ln(x)\) makes the solution simple.
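A numeric check of that example, assuming the classic limits \([0,1]\) and SciPy for quadrature. Differentiating under the integral gives \(h(t)=\int_0^1 x^t~dx = 1/(t+1)\), so \(G(t)=\ln(t+1)\) with \(G_0=G(0)=0\); at \(t_f=2\) the answer is \(\ln(3)\):

import math
from scipy.integrate import quad
def g( x:float, t:float ) -> float:
	"""(x^t − 1)/ln(x), with its removable endpoint limits filled in."""
	if x == 0.0: return 0.0  # limit as x→0⁺
	if x == 1.0: return t    # limit as x→1⁻
	return (x**t - 1.0) / math.log(x)
print( quad(g, 0.0, 1.0, args=(2.0,))[0], math.log(3) )  # both ≈ 1.0986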


Mathy Code Snippets

def rndint( val:float ) -> int: return int(round( val ))  # nearest integer (Python rounds halves to even)
def lerp( v0,v1, t:float ): return v0*(1.0-t) + v1*t  # linear interpolation: t=0 ↦ v0, t=1 ↦ v1
def cubic_interpolate( v0,v1,v2,v3, t:float ):
	"""Cubic interpolation between evenly spaced points (0↦v1, 1↦v2).
	This is the Lagrange cubic through (-1,v0), (0,v1), (1,v2), (2,v3)."""
	a = (   -v0 + 3*v1 - 3*v2 + v3 ) / 6
	b = (  3*v0 - 6*v1 + 3*v2      ) / 6
	c = ( -2*v0 - 3*v1 + 6*v2 - v3 ) / 6
	d = (         6*v1             ) / 6
	return a*t*t*t + b*t*t + c*t + d

(sRGB gamma encode / decode)
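A minimal sketch of the standard piecewise curves (constants from the sRGB spec, IEC 61966-2-1); inputs outside [0,1] should be clamped first:

def srgb_encode( linear:float ) -> float:
	"""Linear [0,1] → gamma-encoded sRGB [0,1]."""
	if linear <= 0.0031308: return 12.92 * linear
	return 1.055 * linear**(1.0/2.4) - 0.055
def srgb_decode( encoded:float ) -> float:
	"""Gamma-encoded sRGB [0,1] → linear [0,1]."""
	if encoded <= 0.04045: return encoded / 12.92
	return ((encoded + 0.055) / 1.055)**2.4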


Notes