# Chapter 5: Bessel Functions


The material covered in this chapter is also presented in Boas Chapter 12, Sections 12, 14, 15, and 19.

## 5.1 Introduction

When a light beam is intercepted by a screen with a circular hole, it produces a bright circular spot on a second screen placed on the far side of the first one. Under usual circumstances, when the light's wavelength is very small compared with the size of the hole, the transition between the bright spot and the shadow is sharp, and the size of the spot matches precisely the size of the hole. But when the hole's radius is reduced to become comparable to (but still larger than) the wavelength, the situation changes: the spot spreads out beyond the size of the hole, the transition to the shadow becomes blurred, and a number of dark fringes are observed between bright strips of light. The phenomenon is known as diffraction, and the light's intensity pattern is known as an Airy disk. A picture of this is shown in Fig.5.1.

Figure 5.1: Airy disk produced by the diffraction of a light beam by a circular aperture. Credit: Wikipedia.

Diffraction is a clear manifestation of the wave nature of light, which asserts itself when the wavelength $\lambda$ becomes comparable to $R$, the hole's radius. The light that arrives at a given spot on the second screen originates from various positions inside the hole (or aperture, in correct parlance), and since it travels on different paths, it arrives with different phases. When waves arrive in phase, they interfere constructively and produce a bright strip. When they arrive out of phase, they interfere destructively and produce a dark fringe. In experiments it is found that the first dark circle makes an angle

\begin{equation} \theta \simeq 1.22 \frac{\lambda}{2R} \tag{5.1} \end{equation}

with respect to the centre of the Airy disk.

A challenge of the wave theory of light is to explain this effect, and to predict the angle of Eq.(5.1). We will face this challenge in Sec.5.8. But to be successful we shall need a new tool, the Bessel functions, the topic of this chapter.

## 5.2 Definition

In spirit the Bessel functions $J_n(x)$ are similar to the Legendre polynomials introduced in Chapter 3. In both cases we have an infinite set of orthogonal functions that satisfy a number of recursion relations and a second-order differential equation. And in both cases the functions can be defined in terms of a generating function. But there are differences. For the Bessel functions, the label $n$ runs over all integers, including both positive and negative values. And unlike the Legendre polynomials, the Bessel functions cannot be expressed in a simple way.

The generating function for the Bessel functions is

\begin{equation} \Phi(x,h) := e^{\frac{1}{2} x (h - 1/h)}, \tag{5.2} \end{equation}

and the Bessel functions are defined implicitly by

\begin{equation} \Phi(x,h) =\sum_{n=-\infty}^\infty J_n(x)\, h^n. \tag{5.3} \end{equation}

This equation is analogous to Eq.(3.4) for the Legendre polynomials, but notice that the sum over $n$ includes both positive and negative values. The negative powers are required to account for the presence of $1/h$ in the generating function. We can separate the positive and negative powers of $h$ by writing

\begin{equation} \Phi = \sum_{n=0}^\infty J_n(x)\, h^n + \sum_{n=-\infty}^{-1} J_n(x)\, h^n, \tag{5.4} \end{equation}

and if we replace $n$ by $-n$ in the second sum, we get

\begin{equation} \Phi(x,h) = \sum_{n=0}^\infty J_n(x)\, h^n + \sum_{n=1}^{\infty} J_{-n}(x)\, h^{-n}. \tag{5.5} \end{equation}

This expression is entirely equivalent to Eq.(5.3).

To extract the Bessel functions from the generating function we expand the exponential in a power series. We have

\begin{equation} e^y = 1 + y + \frac{1}{2!} y^2 + \frac{1}{3!} y^3 + \cdots = \sum_{p=0}^\infty \frac{1}{p!} y^p, \tag{5.6} \end{equation}

so that

\begin{equation} e^{\frac{1}{2} x h} = \sum_{p=0}^\infty \frac{1}{p!} \biggl( \frac{xh}{2} \biggr)^p \tag{5.7} \end{equation}

and

\begin{equation} e^{-\frac{1}{2} x/h} = \sum_{q=0}^\infty \frac{1}{q!} \biggl( -\frac{x}{2h} \biggr)^q =\sum_{q=0}^\infty \frac{(-1)^q}{q!} \biggl( \frac{x}{2h} \biggr)^q. \tag{5.8} \end{equation}

Multiplying the series gives

\begin{align} \Phi &= \Biggl[ \sum_{p=0}^\infty \frac{1}{p!} \biggl( \frac{xh}{2} \biggr)^p \Biggr] \Biggl[ \sum_{q=0}^\infty \frac{(-1)^q}{q!} \biggl( \frac{x}{2h} \biggr)^q \Biggr] \nonumber \tag{5.9a}\\ &= \sum_{p=0}^\infty \sum_{q=0}^\infty \frac{(-1)^q}{p! q!} \biggl( \frac{1}{2} x \biggr)^{p+q}\, h^{p-q}, \tag{5.9b} \end{align}

and we begin to see something that resembles Eq.(5.3): we have sums and powers of $h$, but some work is required to reconcile the expressions and extract the Bessel functions.

Figure 5.2: The discrete $q$-$p$ plane includes nonnegative integer values of $q$ and $p$ only. The double sum of Eq.(5.9) can be viewed as a sum over all points in this space.

We let $n := p-q$ in Eq.(5.9) in order to get us closer to Eq.(5.3). Solving for $p$ gives $p = n + q$, and we can rewrite the double sum over $p$ and $q$ as a double sum over $n$ and $q$. The hard part of this translation is to figure out the bounds for the new summation variables. The double sum over $p$ and $q$ can be represented as in Fig.5.2, in which a term in the sum is pictured as a point in the $q$-$p$ plane; to complete the sum we must include all points in the space. (Note that the space includes nonnegative integers only; this is a discrete space instead of a continuum.) The equation $p = n + q$ represents a set of straight lines in this space, one line for each value of $n$, and we can scan the space by going along each one of these lines. Consider, for example, the line $n=0$; to include all points along this line, the sum over $q$ must begin at $q = 0$ and proceed all the way to $q = \infty$. Consider next the line $n=1$; the figure makes it clear that in this case also, the sum over $q$ must begin at $q = 0$. This will continue to be true for all positive values of $n$. But consider now the line $n=-1$; in this case the sum over $q$ must begin at $q=1$, because to include $q=0$ would introduce a point $(p=-1, q=0)$ that does not belong to the space. Similarly, going along $n=-2$ we are instructed to begin the sum at $q = 2$, and so on. We conclude that while the sum over $q$ begins at $q=0$ when $n \geq 0$, it begins at $q = -n$ when $n < 0$.

With this change of summation variables, Eq.(5.9) becomes

\begin{equation} \Phi = \sum_{n=0}^\infty \sum_{q=0}^\infty \frac{(-1)^q}{q!(q+n)!} \biggl( \frac{1}{2} x \biggr)^{n+2q}\, h^{n} + \sum_{n=-\infty}^{-1} \sum_{q=-n}^\infty \frac{(-1)^q}{q!(q+n)!} \biggl( \frac{1}{2} x \biggr)^{n+2q}\, h^{n}, \tag{5.10} \end{equation}

and setting $n \to -n$ in the second sum gives

\begin{align} \Phi& = \sum_{n=0}^\infty \Biggl[ \sum_{q=0}^\infty \frac{(-1)^q}{q!(q+n)!} \biggl( \frac{1}{2} x \biggr)^{n+2q} \Biggr]\, h^{n} \nonumber \tag{5.11a}\\ & \quad \text{} + \sum_{n=1}^{\infty} \Biggl[ \sum_{q=n}^\infty \frac{(-1)^q}{q!(q-n)!} \biggl( \frac{1}{2} x \biggr)^{2q-n} \Biggr]\, h^{-n}. \tag{5.11b} \end{align}

This can now be compared with Eq.(5.5). The sums within the large square brackets are identified with the Bessel functions, and we have

\begin{align} J_n(x) &= \sum_{q=0}^\infty \frac{(-1)^q}{q!(q+n)!} \biggl( \frac{1}{2} x \biggr)^{n+2q}, \tag{5.12a}\\ J_{-n}(x) &= \sum_{q=n}^\infty \frac{(-1)^q}{q!(q-n)!} \biggl( \frac{1}{2} x \biggr)^{2q-n}. \tag{5.12b} \end{align}

We may replace $q$ by $q + n$ in the second sum, and express it as

\begin{align} J_{-n}(x) &= \sum_{q=0}^\infty \frac{(-1)^{q+n}}{(q+n)!q!} \biggl( \frac{1}{2} x \biggr)^{2q+n} \nonumber \\ &= (-1)^n \sum_{q=0}^\infty \frac{(-1)^q}{q!(q+n)!} \biggl( \frac{1}{2} x \biggr)^{n+2q} \nonumber \\ &=(-1)^n J_n(x).\tag{5.13} \end{align}

The infinite sums for $J_n(x)$ and $J_{-n}(x)$ are the best we can do to give a concrete expression to the Bessel functions. With $n \geq 0$ we have that

\begin{align} J_n(x) &= \sum_{q=0}^\infty \frac{(-1)^q}{q!(q+n)!} \biggl( \frac{1}{2} x \biggr)^{n+2q} \tag{5.14a}\\ J_{-n}(x) &= (-1)^n J_n(x). \tag{5.14b} \end{align}

It can be shown that the series converges for any finite $x$, so it can be used directly to obtain numerical values for the Bessel functions. This is analogous to exploiting the power series $x - x^3/3! + x^5/5! - x^7/7! + \cdots$ to evaluate $\sin x$. Plots of the first few Bessel functions are shown in Fig.5.3.
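To see the series in action, here is a short numerical sketch (my own, not part of the text): it sums the partial series of Eq.(5.14a) and compares the result against SciPy's built-in Bessel function, assuming `scipy` is available.

```python
# Sketch: evaluate J_n(x) from the power series of Eq.(5.14a)
# and compare against SciPy's reference implementation.
import math
from scipy.special import jv  # J_n(x), for comparison

def J_series(n, x, terms=40):
    """Partial sum of Eq.(5.14a), valid for n >= 0."""
    total = 0.0
    for q in range(terms):
        total += ((-1)**q / (math.factorial(q) * math.factorial(q + n))
                  * (x / 2.0)**(n + 2*q))
    return total

# The truncated series matches the reference for moderate x.
for n in (0, 1, 2):
    for x in (0.5, 3.0, 7.0):
        assert abs(J_series(n, x) - jv(n, x)) < 1e-9
```

For larger $x$ the terms grow before they shrink, so cancellation gradually erodes the accuracy of the partial sums; this mirrors what happens when the $\sin x$ series is used far from the origin.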

## 5.3 Properties

Equation (5.14a) implies that when $x \ll 1$, the Bessel functions can be approximated by

\begin{equation} J_n(x) \simeq \frac{1}{2^n n!} x^n = \frac{1}{(2n)!!} x^n. \tag{5.15} \end{equation}

All functions vanish at $x=0$, except for $J_0$, which evaluates to $J_0(0) = 1$. The series expansion of Eq.(5.14a) does not give us a good handle on the behaviour of the Bessel functions when $x \gg 1$. In this case it can be shown that they may be approximated by

\begin{equation} J_n(x) \simeq \sqrt{ \frac{2}{\pi x} } \cos\bigl[ x - {\textstyle \frac{1}{4}}(2n+1) \pi \bigr]. \tag{5.16} \end{equation}

All functions gently go to zero as $x \to \infty$.
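Both limiting forms can be checked numerically. The following sketch (my own, assuming SciPy) compares Eq.(5.15) at small $x$ and Eq.(5.16) at large $x$ with the exact values:

```python
# Sketch: the small-x form, Eq.(5.15), and the large-x form, Eq.(5.16),
# compared against scipy.special.jv.
import math
from scipy.special import jv

def small_x(n, x):
    # Eq.(5.15): J_n(x) ~ x^n / (2^n n!)
    return x**n / (2**n * math.factorial(n))

def large_x(n, x):
    # Eq.(5.16): J_n(x) ~ sqrt(2/(pi x)) cos(x - (2n+1) pi/4)
    return math.sqrt(2.0 / (math.pi * x)) * math.cos(x - (2*n + 1) * math.pi / 4)

# Small-x: the error shrinks rapidly as x -> 0.
assert abs(small_x(2, 0.01) - jv(2, 0.01)) < 1e-7
# Large-x: the error is already small at x = 50 (it falls off as x^(-3/2)).
assert abs(large_x(1, 50.0) - jv(1, 50.0)) < 1e-3
```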

Exercise 5.1: Prove Eq.(5.16). Just kidding! This is hard, and requires a machinery that's beyond the scope of this chapter.

The graphs displayed in Fig.5.3 reveal that the Bessel functions oscillate around zero, but the functions are not periodic; they become approximately periodic when $x \gg 1$, as can be seen from Eq.(5.16). Each function goes to zero at an infinite number of points on the $x$-axis. These zeros are very important in applications, and it is useful to introduce a notation for them. We let

\begin{equation} \alpha_{np} := \mbox{$p^{\rm th}$ zero of $J_n(x)$}. \tag{5.17} \end{equation}

The zeros have two labels. The first is $n$, which selects the Bessel function, and the second is $p$, which selects the zero among the infinite set. For example, the first few zeros of $J_2(x)$ are $\alpha_{21} = 5.135622302$, $\alpha_{22} = 8.417244140$, and $\alpha_{23} = 11.61984117$; these values must be determined numerically.
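The quoted zeros can be reproduced with standard numerical software; as a sketch (assuming SciPy, whose `jn_zeros` routine returns the positive zeros of $J_n$):

```python
# Sketch: recover the first three zeros alpha_{2p} of J_2 numerically.
from scipy.special import jn_zeros, jv

alpha_2 = jn_zeros(2, 3)  # first three positive zeros of J_2
assert abs(alpha_2[0] - 5.135622302) < 1e-8
assert abs(alpha_2[1] - 8.417244140) < 1e-8
assert abs(alpha_2[2] - 11.61984117) < 1e-7

# Each alpha_{2p} really is a zero of J_2:
for a in alpha_2:
    assert abs(jv(2, a)) < 1e-10
```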

When $n$ is an even number, the series of Eq.(5.14a) involves even powers of $x$ only, and $J_n(x)$ is therefore an even function of $x$. When $n$ is odd, the series involves odd powers of $x$ only, and the function is odd. This property can be succinctly expressed by

\begin{equation} J_n(-x) = (-1)^n\, J_n(x). \tag{5.18} \end{equation}

## 5.4 Recursion Relations

The Bessel functions satisfy the recursion relation

\begin{equation} J_{n+1} + J_{n-1} = \frac{2n}{x} J_n, \tag{5.19} \end{equation}

as well as

\begin{align} & J_n' = \frac{1}{2} \bigl( J_{n-1} - J_{n+1} \bigr), \tag{5.20a} \\ & J_n' = J_{n-1} - \frac{n}{x} J_n, \tag{5.20b} \\ & J_n' = -J_{n+1} + \frac{n}{x} J_n, \tag{5.20c} \\ & \frac{d}{dx} \bigl( x^n J_n \bigr) = x^n J_{n-1}, \tag{5.20d} \\ & \frac{d}{dx} \bigl( x^{-n} J_n \bigr) = -x^{-n} J_{n+1}, \tag{5.20e} \end{align}

in which a prime indicates differentiation with respect to $x$. These relations can all be derived directly from the series expansion of Eq.(5.14a). But it is easier to follow the strategy adopted back in Chapter 3, and to proceed instead from the generating function.
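Before deriving these relations, it is reassuring to verify them numerically. The sketch below (my own, assuming SciPy; `jvp` returns the derivative of $J_n$) spot-checks Eqs.(5.19) and (5.20a) at a few points:

```python
# Sketch: numerical spot check of the recursions (5.19) and (5.20a).
from scipy.special import jv, jvp

for n in (1, 2, 5):
    for x in (0.7, 3.3, 10.0):
        # Eq.(5.19): J_{n+1} + J_{n-1} = (2n/x) J_n
        assert abs(jv(n+1, x) + jv(n-1, x) - (2*n/x) * jv(n, x)) < 1e-10
        # Eq.(5.20a): J_n' = (J_{n-1} - J_{n+1}) / 2
        assert abs(jvp(n, x) - 0.5 * (jv(n-1, x) - jv(n+1, x))) < 1e-10
```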

When we differentiate Eq.(5.2) with respect to $h$ we get the identity

\begin{equation} \frac{\partial \Phi}{\partial h} = \frac{1}{2} x \biggl( 1 + \frac{1}{h^2} \biggr) \Phi, \tag{5.21} \end{equation}

and when we next substitute Eq.(5.3), we obtain

\begin{align} \sum_{n=-\infty}^\infty n J_n\, h^{n-1} &= \frac{1}{2} x \biggl( 1 + \frac{1}{h^2} \biggr) \sum_{n=-\infty}^\infty J_n\, h^n \nonumber \\ &= \frac{1}{2} x \Biggl( \sum_{n=-\infty}^\infty J_n\, h^n + \sum_{n=-\infty}^\infty J_n\, h^{n-2} \Biggr). \tag{5.22} \end{align}

We replace $n$ by $n-1$ in the first sum, replace it by $n+1$ in the second sum, and get

\begin{align} \sum_{n=-\infty}^\infty n J_n\, h^{n-1} &= \frac{1}{2} x \sum_{n=-\infty}^\infty \bigl( J_{n-1} + J_{n+1} \bigr) h^{n-1} \nonumber \\ &= \sum_{n=-\infty}^\infty \frac{1}{2} x \bigl( J_{n-1} + J_{n+1} \bigr) h^{n-1}. \tag{5.23} \end{align}

Because the series on the left-hand side is equal to the series on the right-hand side for any value of $h$, the coefficients must be equal for all values of $n$. We have obtained $n J_n = \frac{1}{2} x (J_{n-1} + J_{n+1} )$, which is the same statement as Eq.(5.19).

Next we differentiate Eq.(5.2) with respect to $x$ and get

\begin{equation} \frac{\partial \Phi}{\partial x} = \frac{1}{2} \biggl( h - \frac{1}{h} \biggr) \Phi. \tag{5.24} \end{equation}

Substitution of Eq.(5.3) then produces

\begin{align} \sum_{n=-\infty}^\infty J_n'\, h^n &= \frac{1}{2} \biggl( h - \frac{1}{h} \biggr) \sum_{n=-\infty}^\infty J_n\, h^n \nonumber \\ &= \frac{1}{2} \sum_{n=-\infty}^\infty J_n\, h^{n+1} - \frac{1}{2} \sum_{n=-\infty}^\infty J_n\, h^{n-1} \nonumber \\ &= \frac{1}{2} \sum_{n=-\infty}^\infty \bigl( J_{n-1} - J_{n+1} \bigr)\, h^n, \tag{5.25} \end{align}

and we have obtained Eq.(5.20a). To establish Eqs.(5.20b) and (5.20c) we simply combine Eqs.(5.19) and (5.20a). And Eqs.(5.20d) and (5.20e) are merely alternative ways to express Eqs.(5.20b) and (5.20c), respectively.

Exercise 5.2: Fill in the gaps in the derivation of Eqs.(5.20).

## 5.5 Bessel's Equation

The Bessel functions satisfy the second-order differential equation

\begin{equation} x^2 J''_n + x J'_n + (x^2 - n^2) J_n = 0, \tag{5.26} \end{equation}

known as Bessel's equation.
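Bessel's equation can be confirmed pointwise by direct numerical evaluation; the sketch below (my own, assuming SciPy's `jvp`, which supplies first and second derivatives of $J_n$) checks that the left-hand side of Eq.(5.26) vanishes:

```python
# Sketch: verify that J_n satisfies Bessel's equation, Eq.(5.26),
# x^2 J_n'' + x J_n' + (x^2 - n^2) J_n = 0, at sample points.
from scipy.special import jv, jvp

def bessel_residual(n, x):
    return (x**2 * jvp(n, x, 2)   # second derivative
            + x * jvp(n, x, 1)    # first derivative
            + (x**2 - n**2) * jv(n, x))

for n in (0, 1, 3):
    for x in (0.5, 2.0, 9.0):
        assert abs(bessel_residual(n, x)) < 1e-9
```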

To prove this we start with Eq.(5.20b), which we write in the form

\begin{equation} x J_n' + n J_n - x J_{n-1} = 0. \tag{5.27} \end{equation}

We differentiate this with respect to $x$,

\begin{equation} x J_n'' + J_n' + n J_n' - x J'_{n-1} - J_{n-1} = 0, \tag{5.28} \end{equation}

and then multiply by $x$,

\begin{equation} x^2 J_n'' + x J_n' + n x J_n' - x^2 J'_{n-1} - x J_{n-1} = 0. \tag{5.29} \end{equation}

Next we use Eq.(5.20b) again to rewrite the third term as $n x J'_n = n x J_{n-1} - n^2 J_n$, and Eq.(5.20c) with $n \to n-1$ to rewrite the fourth term as $x^2 J'_{n-1} = -x^2 J_n + (n-1)x J_{n-1}$. We obtain Eq.(5.26) after a bit of simplification.

Exercise 5.3: Reproduce the steps that lead to Eq.(5.26).

## 5.6 Integral Representation

In some applications of Bessel functions (including our discussion of diffraction, to be continued in Sec.5.8), it is useful to give them the integral representation

\begin{equation} J_n(x) = \frac{1}{2\pi i^n} \int_0^{2\pi} e^{ix\cos\phi} e^{-in\phi}\, d\phi. \tag{5.30} \end{equation}

A special case of this result is

\begin{equation} J_0(x) = \frac{1}{2\pi} \int_0^{2\pi} e^{ix\cos\phi}\, d\phi. \tag{5.31} \end{equation}

Because the left-hand side of the equation is real, the imaginary part of the right-hand side is necessarily zero, and the integrand can equally well be expressed as $\cos(x\cos\phi)$; this real form of the identity is also frequently encountered.
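The integral representation lends itself to a direct quadrature check. In this sketch (my own, assuming NumPy and SciPy) the integral of Eq.(5.30) is evaluated with an equally spaced rule, which converges very quickly for smooth periodic integrands:

```python
# Sketch: evaluate the integral representation, Eq.(5.30),
# J_n(x) = 1/(2 pi i^n) * Int_0^{2pi} e^{i x cos(phi)} e^{-i n phi} dphi,
# by quadrature and compare against scipy.special.jv.
import numpy as np
from scipy.special import jv

def J_integral(n, x, samples=2000):
    phi = np.linspace(0.0, 2*np.pi, samples, endpoint=False)
    integrand = np.exp(1j * x * np.cos(phi)) * np.exp(-1j * n * phi)
    val = integrand.mean() * 2*np.pi   # equal-weight rule on a periodic integrand
    return (val / (2*np.pi * 1j**n)).real

for n in (0, 1, 4):
    for x in (0.3, 2.5, 8.0):
        assert abs(J_integral(n, x) - jv(n, x)) < 1e-10
```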

To establish Eq.(5.30) we return to the generating function of Eq.(5.2), in which we substitute $h = i e^{i\phi}$. With this replacement the combination $h - 1/h$ is

\begin{equation} h - \frac{1}{h} = i e^{i\phi} - \frac{1}{i e^{i\phi}} = i e^{i\phi} + i e^{-i\phi} = 2 i \cos\phi, \tag{5.32} \end{equation}

and the generating function becomes $\Phi = e^{ix\cos\phi}$. On the other hand, putting $h = i e^{i\phi}$ in Eq.(5.3) yields

\begin{equation} \Phi = \sum_{n=-\infty}^\infty J_n(x)\, (i e^{i\phi})^n = \sum_{n=-\infty}^\infty i^n\, J_n(x)\, e^{in\phi}. \tag{5.33} \end{equation}

Equating the two expressions for $\Phi$, we obtain

\begin{equation} e^{ix\cos\phi} = \sum_{n=-\infty}^\infty i^n\, J_n(x)\, e^{in\phi}, \tag{5.34} \end{equation}

an interesting expansion of $e^{ix\cos\phi}$ in terms of the simple functions $e^{in\phi}$, with Bessel functions as coefficients. This is an example of a Fourier series, a topic to which we shall return in Chapter 7.
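The expansion of Eq.(5.34) converges rapidly once $|n|$ exceeds $x$, because $J_n(x)$ then becomes very small. A brief sketch (my own, assuming SciPy; for integer order, `jv` handles negative $n$ consistently with Eq.(5.13)):

```python
# Sketch: truncate the sum of Eq.(5.34) at |n| <= N and compare
# with the closed form e^{i x cos(phi)}.
import numpy as np
from scipy.special import jv

x, phi, N = 3.0, 0.8, 25
lhs = np.exp(1j * x * np.cos(phi))
rhs = sum((1j)**n * jv(n, x) * np.exp(1j * n * phi) for n in range(-N, N + 1))
assert abs(lhs - rhs) < 1e-12
```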

We wish to solve Eq.(5.34) and extract the Bessel functions. We shall rely on the orthogonality of the complex exponential, as expressed back in Sec.4.7 in the form of Eq.(4.38), which we restate as

\begin{equation} \int_0^{2\pi} e^{-im\phi} e^{in\phi}\, d\phi = 2\pi \delta_{mn}. \tag{5.35} \end{equation}

Exercise 5.4: In case you didn't do it back in Sec.4.7, verify Eq.(5.35).

Multiply both sides of Eq.(5.34) by $e^{-im\phi}$ and integrate with respect to $\phi$, from $\phi=0$ to $\phi = 2\pi$. This gives

\begin{equation} \int_0^{2\pi} e^{ix\cos\phi} e^{-im\phi}\, d\phi = \sum_{n=-\infty}^\infty i^n\, J_n(x) \int_0^{2\pi} e^{-im\phi} e^{in\phi}\, d\phi, \tag{5.36} \end{equation}

after pulling the $\phi$-independent terms out of the integral on the right-hand side. Use of Eq.(5.35) allows us to write this as

\begin{equation} \int_0^{2\pi} e^{ix\cos\phi} e^{-im\phi}\, d\phi = \sum_{n=-\infty}^\infty i^n\, J_n(x)\, ( 2\pi \delta_{mn} ). \tag{5.37} \end{equation}

Thanks to the Kronecker delta, there is only one nonzero term in the sum, the one for which $n = m$. We have arrived at

\begin{equation} \int_0^{2\pi} e^{ix\cos\phi} e^{-im\phi}\, d\phi = 2\pi i^m\, J_m(x), \tag{5.38} \end{equation}

which is the same as Eq.(5.30), with $n$ replaced by $m$.

## 5.7 Orthogonality

Like the Legendre polynomials and the associated Legendre functions, the Bessel functions form a set of orthogonal functions. The statement of orthogonality, however, is much more subtle in the case of Bessel functions. It is not, for example, the straightforward and perhaps expected $\int_a^b J_n(x) J_{n'}(x)\, dx = 0$ for $n \neq n'$; this statement is false for any choice of interval $(a,b)$. The correct statement of orthogonality is

\begin{equation} \int_0^1 J_n(\alpha_{np} x) J_n(\alpha_{nq} x)\, x\, dx = 0 \tag{5.39} \end{equation}

for $p \neq q$, where $\alpha_{np}$ is the $p^{\rm th}$ zero of the function $J_n(x)$. Notice that Eq.(5.39) involves two Bessel functions with the same label $n$, but with different arguments. Notice also the presence of $x$ inside the integral.

When $p=q$ in the integral of Eq.(5.39) we obtain the norm of the Bessel function, in the specific sense derived from the statement of orthogonality. In this case we have

\begin{equation} \int_0^1 \bigl[ J_n(\alpha_{np} x) \bigr]^2 x\, dx = \frac{1}{2} \bigl[ J'_n(\alpha_{np}) \bigr]^2 = \frac{1}{2} \bigl[ J_{n-1}(\alpha_{np}) \bigr]^2 = \frac{1}{2} \bigl[ J_{n+1}(\alpha_{np}) \bigr]^2. \tag{5.40} \end{equation}

For Bessel functions the notions of orthogonality and norm go well beyond the discussion provided in Sec.3.6. These more abstract notions are motivated by what we can actually prove on the basis of Bessel's equation (5.26); as such the spirit is close to what we did back in Sec.3.7.
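Both the orthogonality relation (5.39) and the norm (5.40) can be confirmed by direct integration. A sketch (my own, assuming SciPy's `quad`, `jv`, `jvp`, and `jn_zeros`):

```python
# Sketch: quadrature check of orthogonality, Eq.(5.39), and the norm,
# Eq.(5.40), for n = 1 and the first two zeros of J_1.
from scipy.integrate import quad
from scipy.special import jv, jvp, jn_zeros

n = 1
a1, a2 = jn_zeros(n, 2)  # alpha_{11} and alpha_{12}

# Eq.(5.39): distinct zeros give a vanishing weighted integral.
ortho, _ = quad(lambda x: jv(n, a1*x) * jv(n, a2*x) * x, 0.0, 1.0)
assert abs(ortho) < 1e-8

# Eq.(5.40): the p = q case gives (1/2) [J_n'(alpha_np)]^2.
norm, _ = quad(lambda x: jv(n, a1*x)**2 * x, 0.0, 1.0)
assert abs(norm - 0.5 * jvp(n, a1)**2) < 1e-8
```

Note the weight factor $x$ inside both integrals; dropping it destroys the orthogonality, exactly as the text warns.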

To establish Eq.(5.39) we begin with Bessel's equation, which we write as

\begin{equation} y \frac{d^2 J_n}{dy^2} + \frac{d J_n}{dy} + \biggl( y - \frac{n^2}{y} \biggr) J_n = 0, \tag{5.41} \end{equation}

in terms of a new variable $y$. If we set $y = \alpha x$ in this equation, where $\alpha$ is an arbitrary constant that will later be identified with $\alpha_{np}$, we get

\begin{equation} x \frac{d^2 J_n}{dx^2} + \frac{d J_n}{dx} + \biggl( \alpha^2 x - \frac{n^2}{x} \biggr) J_n = 0 \tag{5.42} \end{equation}

after multiplying through by $\alpha$. This can be put in the alternative form

\begin{equation} \frac{d}{dx} \biggl( x \frac{d J_n}{dx} \biggr) + \biggl( \alpha^2 x - \frac{n^2}{x} \biggr) J_n = 0, \tag{5.43} \end{equation}

where it is understood that $J_n \equiv J_n(\alpha x)$, so that the Bessel function is still expressed as a function of the old argument $y = \alpha x$.

Exercise 5.5: Reproduce the steps that lead to Eq.(5.43).

Next we write another copy of Eq.(5.43), replacing the constant $\alpha$ by a new constant $\tilde{\alpha}$, which will later be identified with $\alpha_{nq}$. We have

\begin{equation} \frac{d}{dx} \biggl( x \frac{d \tilde{J}_n}{dx} \biggr) + \biggl( \tilde{\alpha}^2 x - \frac{n^2}{x} \biggr) \tilde{J}_n = 0, \tag{5.44} \end{equation}

where $\tilde{J}_n \equiv J_n(\tilde{\alpha} x)$. We multiply Eq.(5.43) by $\tilde{J}_n$, Eq.(5.44) by $J_n$, and subtract the two equations. This gives

\begin{equation} \tilde{J}_n \frac{d}{dx} \biggl( x \frac{d J_n}{dx} \biggr) - J_n \frac{d}{dx} \biggl( x \frac{d \tilde{J}_n}{dx} \biggr) +\bigl( \alpha^2 - \tilde{\alpha}^2 \bigr) x\, \tilde{J}_n J_n = 0, \tag{5.45} \end{equation}

which can also be expressed in the form

\begin{equation} \frac{d}{dx} \biggl( \tilde{J}_n\, x \frac{dJ_n}{dx} - J_n\, x \frac{d \tilde{J}_n}{dx} \biggr) + \bigl( \alpha^2 - \tilde{\alpha}^2 \bigr) x\, \tilde{J}_n J_n = 0. \tag{5.46} \end{equation}

This brings us one step away from arriving at Eq.(5.39).

Exercise 5.6: Reproduce the steps that lead to Eq.(5.46).

For the final stretch we integrate both sides of Eq.(5.46) from $x=0$ to $x=1$. This yields

\begin{equation} \biggl( x \tilde{J}_n \frac{dJ_n}{dx} - x J_n \frac{d \tilde{J}_n}{dx} \biggr)\biggr|^1_0 + \bigl( \alpha^2 - \tilde{\alpha}^2 \bigr) \int_0^1 \tilde{J}_n J_n\, x\, dx = 0, \tag{5.47} \end{equation}

and we see that the integral has roughly the same form as the one in Eq.(5.39). The boundary term at $x=0$ vanishes because of the overall factor of $x$. To eliminate the boundary term at $x=1$ we must choose $\alpha$ and $\tilde{\alpha}$ so that the Bessel functions vanish there. So if we set $\alpha = \alpha_{np}$ we have that $J_n \equiv J_n(\alpha_{np} x)$ vanishes at $x = 1$, and if we set $\tilde{\alpha} = \alpha_{nq}$ we get that $\tilde{J}_n \equiv J_n(\alpha_{nq} x)$ also vanishes at $x = 1$. With these choices we eliminate the boundary terms, and we are left with

\begin{equation} \bigl( \alpha_{np}^2 - \alpha_{nq}^2 \bigr) \int_0^1 J_n(\alpha_{np} x) J_n(\alpha_{nq} x)\, x\, dx = 0. \tag{5.48} \end{equation}

We see that the factor in front of the integral is nonzero, precisely because we chose distinct zeros of $J_n(x)$ for $\alpha$ and $\tilde{\alpha}$. The integral itself must therefore vanish, and we have established Eq.(5.39). Notice that if we had been unwise and chosen $\alpha = \tilde{\alpha} = \alpha_{np}$ instead, we would have done all this work to arrive at an uninformative $0=0$.

To establish Eq.(5.40) we return to Eq.(5.47), in which we continue to set $\alpha = \alpha_{np}$ but now keep $\tilde{\alpha}$ arbitrary. Because the boundary term at $x=0$ continues to vanish, and because $J_n(\alpha_{np} x) = 0$ at $x=1$, the equation becomes

\begin{equation} J_n(\tilde{\alpha}) \biggl[ \frac{dJ_n(\alpha_{np} x)}{dx} \biggr]_{x=1} = \bigl( \tilde{\alpha}^2 - \alpha^2_{np}\bigr) \int_0^1 J_n(\tilde{\alpha}x) J_n(\alpha_{np}x)\, x\, dx, \tag{5.49} \end{equation}

which we rewrite as

\begin{equation} \int_0^1 J_n(\tilde{\alpha}x) J_n(\alpha_{np}x)\, x\, dx = \frac{J_n(\tilde{\alpha})}{\tilde{\alpha}^2 - \alpha^2_{np}} \biggl[ \frac{dJ_n(\alpha_{np} x)}{dx} \biggr]_{x=1}. \tag{5.50} \end{equation}

For the left-hand side to reproduce the integral of Eq.(5.40) we must set $\tilde{\alpha} = \alpha_{np}$, but this generates a problem with the first factor on the right-hand side, which evaluates to $0/0$. We can resolve this by taking the limit $\tilde{\alpha} \to \alpha_{np}$ while applying l'Hospital's rule,

\begin{equation} \lim_{\tilde{\alpha} \to \alpha_{np}} \frac{J_n(\tilde{\alpha})}{\tilde{\alpha}^2 - \alpha^2_{np}} = \biggl[ \frac{d J_n(\tilde{\alpha})/d \tilde{\alpha}}{2\tilde{\alpha}} \biggr]_{\tilde{\alpha} = \alpha_{np}} = \frac{1}{2\alpha_{np}} \frac{d J_n(\alpha_{np})}{d\alpha_{np}} = \frac{1}{2\alpha_{np}} J'_n(\alpha_{np}), \tag{5.51} \end{equation}

in which the prime indicates differentiation with respect to the argument. The second factor on the right-hand side of Eq.(5.50) can be expressed as

\begin{equation} \frac{dJ_n(\alpha_{np} x)}{dx} = \alpha_{np} \frac{dJ_n(y)}{dy} = \alpha_{np} J'_n(y) \tag{5.52} \end{equation}

in which $y := \alpha_{np} x$, and the prime continues to indicate differentiation with respect to the argument. Evaluation at $x=1$, or $y = \alpha_{np}$, returns

\begin{equation} \biggl[ \frac{dJ_n(\alpha_{np} x)}{dx} \biggr]_{x=1} = \alpha_{np} J'_n(\alpha_{np}), \tag{5.53} \end{equation}

and combining the two factors turns Eq.(5.50) into

\begin{equation} \int_0^1 J_n(\alpha_{np} x) J_n(\alpha_{np}x)\, x\, dx = \frac{1}{2} \bigl[ J'_n(\alpha_{np}) \bigr]^2. \tag{5.54} \end{equation}

This is the same statement as the first form of Eq.(5.40). To obtain the alternative expressions we appeal to the recursion relations (5.20b) and (5.20c). Because $J_n(x)$ vanishes at $x = \alpha_{np}$, we have that $J'_n(\alpha_{np}) = J_{n-1}(\alpha_{np})$ and $J'_n(\alpha_{np}) = -J_{n+1}(\alpha_{np})$.

## 5.8 Diffraction by a Circular Aperture

We may now return to the discussion of diffraction initiated in Sec.5.1. Light originating from a far-away source passes through a circular aperture of radius $R$ in an otherwise opaque screen. A second screen is placed at a distance $d$ beyond the first one, and the light's intensity pattern on that screen is measured as a function of the angle $\theta$ relative to the optical axis. The situation is illustrated in Fig.5.4, and an experimental realization of the intensity pattern was shown in Fig.5.1.

Figure 5.4: Diffraction by a circular aperture of radius $R$. A position within the aperture, relative to the centre, is denoted $\boldsymbol{r'}$. A position on the second screen, situated at a distance $d$ from the first screen, is denoted $\boldsymbol{r}$.

We wish to calculate the light's intensity at a position $\boldsymbol{r}$ on the second screen. This is an application of the wave theory of light, which in principle should involve the electric field $\boldsymbol{E}$, magnetic field $\boldsymbol{B}$, and be based on Maxwell's equations. To simplify the mathematics we shall take the electromagnetic wave to be described by a single scalar function $f$, which can be imagined to represent a component of either $\boldsymbol{E}$ or $\boldsymbol{B}$. We imagine that the wave is monochromatic, with frequency $\omega$, wave number $k = \omega/c$, and wavelength $\lambda = 2\pi/k$.

We imagine that space is filled with elementary emitters of electromagnetic waves. When a wave travels freely in the absence of obstacles, each emitter absorbs a piece of the wave and re-emits it in all directions; the collective action of these emitters results in a wave that simply travels on a straight line. When, however, the wave encounters an opaque screen, only the emitters inside the aperture can do their thing, and this affects the way in which the wave travels. Each emitter within the aperture emits a spherical wave, and some of these waves will travel to the second screen and reach the designated spot at position $\boldsymbol{r}$. Because the waves reaching $\boldsymbol{r}$ originate at different positions within the aperture, they travel on different paths and arrive with different phases. Some waves interfere constructively, and this creates a bright spot on the screen; some waves interfere destructively, and this creates a dark spot.

The preceding discussion captures the essence of the phenomenon, and our job is now to flesh out the mathematical details. Before we begin it should be pointed out that the story involving emitters absorbing and emitting light is not quite a true description of the physics. For one thing, light can easily travel in vacuum, in the complete absence of physical emitters. But while our story may fail to describe the true physics, it does pretty well at representing the actual mathematics of the phenomenon. This is reason enough for us to stick to it.

We consider the waves emitted by a large number of elementary emitters, all situated within the area element $da'$ at position $\boldsymbol{r'}$ within the aperture. Each emitter emits a spherical wave from its own position, and this wave travels from $\boldsymbol{r'}$ to a position $\boldsymbol{r}$ on the second screen. The wave can be given the complex representation

$\frac{e^{i(\omega t - k |\boldsymbol{r}-\boldsymbol{r'}|)}}{|\boldsymbol{r}-\boldsymbol{r'}|},$

with the understanding that only the real part of this expression is physically relevant. Here, $|\boldsymbol{r}-\boldsymbol{r'}|$ is the distance between the emitter and the point on the screen, $\omega$ is the wave's angular frequency, and $k$ is its wave number, related to the wavelength $\lambda$ by $k = 2\pi/\lambda$; the frequency and wave number are related by $\omega = c k$, where $c$ is the speed of light. The total wave emitted by all the emitters in $da'$ is

\begin{equation} df = n \frac{e^{i(\omega t - k |\boldsymbol{r}-\boldsymbol{r'}|)}}{|\boldsymbol{r}-\boldsymbol{r'}|}\, da', \tag{5.55} \end{equation}

where $n$ is the number of emitters per unit area, assumed to be constant across the aperture. The wave $f$ emitted by all the emitters in the aperture shall be the integral of $df$.

Before we write down the integral we must specify the variables contained in the vectors $\boldsymbol{r}$ and $\boldsymbol{r'}$. We let the coordinate system have its origin at the centre of the aperture, and we choose its orientation so that the first screen is situated in the $x$-$y$ plane; the optical axis then coincides with the $z$-axis. We employ polar coordinates in the aperture, and write

\begin{equation} \boldsymbol{r'} = (s' \cos\phi', s' \sin\phi', 0), \tag{5.56} \end{equation}

so that the position vector is given in terms of the distance $s'$ from the $z$-axis and the angle $\phi'$ relative to the $x$-axis. We recall that the area element $da'$ is $s' ds' d\phi'$ in polar coordinates. For simplicity we place $\boldsymbol{r}$ in the $x$-$z$ plane, and write

\begin{equation} \boldsymbol{r} = (r\sin\theta, 0, r\cos\theta), \tag{5.57} \end{equation}

where $r := |\boldsymbol{r}|$ and $\theta$ is the angle relative to the optical axis.

We assume that the distance between the screens is large compared with the size of the aperture, so that $r \gg R$. This allows us to introduce a convenient approximation for $|\boldsymbol{r}-\boldsymbol{r'}|$, given exactly by

\begin{align} |\boldsymbol{r}-\boldsymbol{r'}|&= \bigl[ r^2 - 2r s' \sin\theta\cos\phi' + s^{\prime 2} \bigr]^{1/2} \nonumber \\ &= r \bigl[ 1 - 2(s'/r) \sin\theta\cos\phi' + (s'/r)^2 \bigr]^{1/2}. \tag{5.58} \end{align}

For the factor of $|\boldsymbol{r}-\boldsymbol{r'}|^{-1}$ in the wave's amplitude we can afford to be quite crude, and set it to be approximately equal to $1/r$. For the $k |\boldsymbol{r}-\boldsymbol{r'}|$ term in the wave's phase we must be more precise, because it is the phase difference between waves originating at different positions in the aperture that will determine if they interfere constructively or destructively. We may continue to ignore the $(s'/r)^2$ term within the square brackets, but must account for the term proportional to $s'/r$. After performing a binomial expansion on the square root, we get

\begin{equation} |\boldsymbol{r}-\boldsymbol{r'}| \simeq r \bigl[ 1 - (s'/r) \sin\theta\cos\phi' \bigr] = r - s' \sin\theta\cos\phi'. \tag{5.59} \end{equation}

Substitution within $df$ gives

\begin{equation} df = \frac{n}{r} e^{i(\omega t - kr)}\, e^{i k s' \sin\theta \cos\phi'}\, da' \tag{5.60} \end{equation}

for the wave originating at $\boldsymbol{r'}$ and traveling to $\boldsymbol{r}$.

Exercise 5.7: Make sure you can reproduce the steps that lead to Eq.(5.60).

The total wave arriving at $\boldsymbol{r}$ is the integral of $df$ over the entire aperture. This is

\begin{align} f &= \frac{n}{r} e^{i(\omega t - kr)} \int e^{i k s' \sin\theta \cos\phi'}\, da' \nonumber \\ &= \frac{n}{r} e^{i(\omega t - kr)} \int_0^R \int_0^{2\pi} e^{i k s' \sin\theta \cos\phi'}\, s'ds' d\phi'.\tag{5.61} \end{align}

The $\phi'$-integral can be evaluated directly with the help of Eq.(5.31). We have

\begin{equation} \int_0^{2\pi} e^{i k s' \sin\theta \cos\phi'}\, d\phi' = 2\pi J_0(ks'\sin\theta), \tag{5.62} \end{equation}

and making the substitution yields

\begin{equation} f = \frac{2\pi n}{r} e^{i(\omega t - kr)} \int_0^R J_0(ks'\sin\theta)\, s'ds'. \tag{5.63} \end{equation}
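Equation (5.62) can be spot-checked numerically. The sketch below (Python, standard library only; `J0_series` is our own partial sum of the series expansion of Eq.(5.14)) compares a trapezoidal evaluation of the $\phi'$-integral with the closed form $2\pi J_0$:

```python
import cmath
import math

def J0_series(x, terms=40):
    # Partial sum of the series of Eq. (5.14) for n = 0
    return sum((-1)**k / math.factorial(k)**2 * (x / 2)**(2 * k)
               for k in range(terms))

a = 2.0          # stands for k s' sin(theta); arbitrary test value
N = 2000         # the trapezoid rule is spectrally accurate for periodic integrands
integral = sum(cmath.exp(1j * a * math.cos(2 * math.pi * m / N))
               for m in range(N)) * (2 * math.pi / N)

# Compare with the closed form 2*pi*J0(a) of Eq. (5.62)
print(abs(integral - 2 * math.pi * J0_series(a)))
```

The imaginary part of the integral vanishes by symmetry, consistent with the real right-hand side of Eq.(5.62).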

To evaluate the $s'$-integral we change variables to $u :=ks'\sin\theta$, so that

\begin{equation} f = \frac{2\pi n}{r (k\sin\theta)^2} e^{i(\omega t - kr)} \int_0^{kR\sin\theta} J_0(u)\, u\, du.\tag{5.64} \end{equation}

According to the recursion relation (5.20d) with $n=1$, we have $u J_0 = d(uJ_1)/du$, and the integral evaluates to

\begin{equation} \int_0^{kR\sin\theta} J_0(u)\, u\, du = u J_1(u) \biggl|^{kR\sin\theta}_0 = k R \sin\theta\, J_1(kR\sin\theta). \tag{5.65} \end{equation}

And this, at last, gives us the final expression for the total wave at $\boldsymbol{r}$,

\begin{equation} f = \frac{N}{r} e^{i(\omega t - kr)} \biggl[ \frac{2 J_1(kR\sin\theta)}{kR\sin\theta} \biggr], \tag{5.66} \end{equation}

where $N := n (\pi R^2)$ is the total number of emitters in the aperture. It is good, at this stage, to look back and reflect on the fact that we were able to crack this problem thanks to our recently acquired expertise on Bessel functions.
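The antiderivative identity that made Eq.(5.65) possible can itself be verified numerically. The sketch below (Python; helper names are ours) integrates $u J_0(u)$ with Simpson's rule and compares with $b\, J_1(b)$ for an arbitrary upper limit $b$:

```python
import math

def J(n, x, terms=40):
    # Partial sums of the series of Eq. (5.14)
    return sum((-1)**k / (math.factorial(k) * math.factorial(n + k))
               * (x / 2) ** (n + 2 * k) for k in range(terms))

def simpson(f, a, b, N=2000):
    # Composite Simpson rule with an even number of panels
    h = (b - a) / N
    return h / 3 * sum((1 if m in (0, N) else 4 if m % 2 else 2) * f(a + m * h)
                       for m in range(N + 1))

b = 3.0  # stands for kR sin(theta); arbitrary test value
lhs = simpson(lambda u: u * J(0, u), 0.0, b)
rhs = b * J(1, b)              # right-hand side of Eq. (5.65)
print(lhs - rhs)
```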

Exercise 5.8: Make sure you can reproduce the steps that took us to Eq.(5.66).

Figure 5.5: Light intensity after diffraction by a circular aperture. Plotted is $I(\theta)/I_0$ as a function of $kR\sin\theta$, as described by Eq.(5.67).

The light intensity at $\boldsymbol{r}$ is determined by the energy carried by the wave, which is proportional to $|f|^2$, the squared modulus of the (complex) wave function. We write this as

\begin{equation} I(\theta) = I_0 \biggl[ \frac{2 J_1(kR\sin\theta)}{kR\sin\theta} \biggr]^2, \tag{5.67} \end{equation}

where $I_0$ is a constant, and show a plot of $I(\theta)/I_0$ in Fig.5.5. You can see that the function alternates between local maxima and zeros, precisely what is required to explain the experimental results shown in Fig.5.1. The largest maximum occurs at $\theta = 0$, where $I = I_0$. We can see how this comes about mathematically by invoking Eq.(5.15), according to which $J_1(u) \simeq \frac{1}{2} u$ when $u \ll 1$; the approximation implies that $I(\theta)/I_0 \simeq 1$ when $k R \sin\theta \ll 1$. The first zero of the intensity occurs at the first zero of $J_1$, $\alpha_{11} \simeq 3.831705970$, and subsequent zeros of the intensity occur at the remaining zeros of $J_1$. The first zero determines the angular position of the first dark fringe in the Airy disk of Fig.5.1. It is given by $kR \sin\theta \simeq 3.83$, or

\begin{equation} \sin\theta \simeq 1.22 \frac{\lambda}{2R} \tag{5.68} \end{equation}

if we recall that $k = 2\pi/\lambda$. This is the same statement as Eq.(5.1) in the small-angle approximation, $\sin\theta \simeq \theta$; the approximation is valid when $\lambda \ll R$, which is usually satisfied in experimental realizations of diffraction.
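The numbers quoted here are easy to reproduce: a bisection on the series expansion of $J_1$ locates $\alpha_{11}$, and since $\sin\theta = \alpha_{11}/(kR) = (\alpha_{11}/\pi)\, \lambda/(2R)$, the ratio $\alpha_{11}/\pi$ is precisely the factor 1.22 of Eq.(5.68). A sketch in Python (standard library only; helper names are ours):

```python
import math

def J1(x, terms=40):
    # Partial sum of the series of Eq. (5.14) for n = 1
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + 1))
               * (x / 2) ** (1 + 2 * k) for k in range(terms))

# Bisect for the first positive zero of J1; J1 changes sign on [3, 4.5]
lo, hi = 3.0, 4.5
for _ in range(60):
    mid = (lo + hi) / 2
    if J1(lo) * J1(mid) <= 0:
        hi = mid
    else:
        lo = mid
alpha11 = (lo + hi) / 2

# alpha11/pi is the numerical factor appearing in Eq. (5.68)
print(alpha11, alpha11 / math.pi)
```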

## 5.9 Bessel's Equation Revisited

Many textbook presentations of Bessel functions begin with Bessel's equation,

\begin{equation} x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + (x^2 - n^2) y = 0, \tag{5.69} \end{equation}

and define $J_n(x)$ to be a solution to this equation. In this context $J_n(x)$ is known as a Bessel function of the first kind, because the second-order differential equation admits a second solution, denoted $N_n(x)$ and known (you guessed it) as a Bessel function of the second kind. The two solutions can be distinguished by examining their behaviour near $x = 0$; while $J_n(x)$ is finite at $x=0$, $N_n(x)$ is infinite there.
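The contrast between the two kinds of Bessel function can be seen numerically. The sketch below implements $N_0$ (often denoted $Y_0$ in the literature) from its standard small-argument series, an expression quoted here without derivation, and shows that it diverges logarithmically as $x \to 0$ while $J_0$ stays finite:

```python
import math

def J0(x, terms=40):
    # Partial sum of the series of Eq. (5.14) for n = 0
    return sum((-1)**k / math.factorial(k)**2 * (x / 2)**(2 * k)
               for k in range(terms))

def Y0(x, terms=40):
    # Standard series for the Bessel function of the second kind, order 0:
    # Y0(x) = (2/pi) [ (ln(x/2) + gamma) J0(x)
    #                  + sum_{k>=1} (-1)^(k+1) H_k / (k!)^2 (x/2)^(2k) ]
    gamma = 0.5772156649015329          # Euler-Mascheroni constant
    H, s = 0.0, 0.0                     # H accumulates the harmonic numbers H_k
    for k in range(1, terms):
        H += 1.0 / k
        s += (-1) ** (k + 1) * H / math.factorial(k)**2 * (x / 2) ** (2 * k)
    return (2 / math.pi) * ((math.log(x / 2) + gamma) * J0(x) + s)

# J0 stays finite as x -> 0, while Y0 diverges logarithmically
print(J0(1e-6), Y0(1e-6))
```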

Bessel's equation can be generalized by letting $n$ differ from an integer. In this case we would write it as

\begin{equation} x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + (x^2 - \nu^2) y = 0, \tag{5.70} \end{equation}

replacing $n$ by $\nu$ and reserving the notation $n$ for integers. The solutions to this generalized Bessel equation are $J_\nu(x)$ and $N_\nu(x)$, with the property that $J_\nu(x)$ is finite at $x=0$, while $N_\nu(x)$ is infinite there. In fact, the series expansion of Eq.(5.14) can be shown to be a solution to the generalized equation, provided that the factorial involving $n$ is properly re-expressed in terms of the Gamma function. The generalized Bessel function of the first kind is therefore given by

\begin{equation} J_\nu(x) = \sum_{k=0}^\infty \frac{(-1)^k}{k!\, \Gamma(\nu+k+1)} \biggl( \frac{x}{2} \biggr)^{\nu+2k}, \tag{5.71} \end{equation}

and it reduces to Eq.(5.14) when $\nu$ is an integer.
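The generalized series is straightforward to evaluate with the Gamma function. As a sketch (Python; the comparison with the elementary closed form $J_{1/2}(x) = \sqrt{2/(\pi x)}\, \sin x$, a standard result not derived in this chapter, serves as a check):

```python
import math

def Jnu(nu, x, terms=40):
    # Series of Eq. (5.14) with the factorial (n+k)! replaced by Gamma(nu+k+1)
    return sum((-1)**k / (math.factorial(k) * math.gamma(nu + k + 1))
               * (x / 2) ** (nu + 2 * k) for k in range(terms))

# For nu = 1/2 the series sums to the elementary form sqrt(2/(pi x)) sin(x)
x = 1.3   # arbitrary test point
print(Jnu(0.5, x), math.sqrt(2 / (math.pi * x)) * math.sin(x))
```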

## 5.10 Practice Problems

1. (Boas Chapter 12, Section 12, Problem 2) Use the series expansion of Eq.(5.14) to show that $J_2(x) = (2/x) J_1(x) - J_0(x)$.

2. (Boas Chapter 12, Section 12, Problem 4) Use the series expansion to show that $J_0' = -J_1$.

3. (Boas Chapter 12, Section 12, Problem 6) Use the series expansion to show that $J_0 - J_2 = 2J_1'$.

4. (Boas Chapter 12, Section 12, Problem 7) Prove that $\lim_{x \to 0} [J_1(x)/x] = \frac{1}{2}$.

5. (Boas Chapter 12, Section 14, Problem 1) Convince your computer to plot $J_n(x)$ for $n = \{0, 1, 2, 3\}$ and $0 \leq x \leq 15$.

6. (Boas Chapter 12, Section 14, Problem 2) Find the first three zeros (excluding $x=0$) of the functions $J_n(x)$ for $n = \{0, 1, 2, 3\}$. Provide at least four significant digits.

7. (Boas Chapter 12, Section 15, Problem 1) Use the series expansion of Eq.(5.14) to provide an alternative proof of Eq.(5.20e).

8. (Boas Chapter 12, Section 15, Problem 6) Prove that $J_{n-1}(x) = J_{n+1}(x)$ when $x$ is a maximum or minimum of $J_n(x)$.

9. (Boas Chapter 12, Section 15, Problem 5) Prove that $J_{n-1}(x) = -J_{n+1}(x) = J'_n(x)$ when $x$ is a zero of $J_n(x)$.

10. (Boas Chapter 12, Section 15, Problem 5) Show that $\int_0^\infty J_1(x)\, dx = 1$.
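Several of these identities lend themselves to quick numerical spot checks against the series expansion of Eq.(5.14). A sketch in Python (test points are arbitrary; helper names are ours):

```python
import math

def J(n, x, terms=40):
    # Partial sums of the series of Eq. (5.14)
    return sum((-1)**k / (math.factorial(k) * math.factorial(n + k))
               * (x / 2) ** (n + 2 * k) for k in range(terms))

x = 2.7   # arbitrary test point

# Problem 1: J2 = (2/x) J1 - J0
check1 = J(2, x) - ((2 / x) * J(1, x) - J(0, x))

# Problem 2: J0' = -J1, tested with a central finite difference
h = 1e-6
check2 = (J(0, x + h) - J(0, x - h)) / (2 * h) + J(1, x)

# Problem 4: J1(x)/x -> 1/2 as x -> 0
check4 = J(1, 1e-8) / 1e-8 - 0.5

print(check1, check2, check4)
```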

## 5.11 Challenge Problems

1. Derive the following identities:

a) $\cos x = J_0(x) - 2J_2(x) + 2J_4(x) - 2J_6(x) + 2J_8(x)+ \cdots$,

b) $\sin x = 2J_1(x) - 2J_3(x) + 2J_5(x) - 2J_7(x) + 2J_9(x)+ \cdots$,

c) $1 = J_0(x) + 2J_2(x) + 2J_4(x) + 2J_6(x) + 2J_8(x) + \cdots$.

Hint: Invoke the generating function and pick a convenient value for the parameter $h$.
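A numerical spot check of the three identities at a sample point may be reassuring before you attempt the proof (Python sketch; the sums are truncated, which is harmless because $J_n(x)$ decays rapidly once $n$ exceeds $x$):

```python
import math

def J(n, x, terms=40):
    # Partial sums of the series of Eq. (5.14)
    return sum((-1)**k / (math.factorial(k) * math.factorial(n + k))
               * (x / 2) ** (n + 2 * k) for k in range(terms))

x = 2.0   # arbitrary test point

# (a) cos x = J0 - 2 J2 + 2 J4 - ...
a = J(0, x) + 2 * sum((-1)**m * J(2 * m, x) for m in range(1, 15))
# (b) sin x = 2 J1 - 2 J3 + 2 J5 - ...
b = 2 * sum((-1)**m * J(2 * m + 1, x) for m in range(15))
# (c) 1 = J0 + 2 J2 + 2 J4 + ...
c = J(0, x) + 2 * sum(J(2 * m, x) for m in range(1, 15))

print(a - math.cos(x), b - math.sin(x), c - 1.0)
```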

2. Prove that $J_{n+1}(x)$ must have a zero between two consecutive zeros of $J_n(x)$. Hint: Use the recursion relation $(x^{-n} J_n)' = -x^{-n} J_{n+1}$ and rely on Rolle's theorem.
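The interlacing property can also be observed numerically before it is proved: the sketch below locates the first zeros of $J_0$ and $J_1$ by bisection on the series of Eq.(5.14) (the sign-change brackets can be read off a plot such as the one requested in Practice Problem 5) and confirms that the first zero of $J_1$ falls between consecutive zeros of $J_0$:

```python
import math

def J(n, x, terms=40):
    # Partial sums of the series of Eq. (5.14)
    return sum((-1)**k / (math.factorial(k) * math.factorial(n + k))
               * (x / 2) ** (n + 2 * k) for k in range(terms))

def zero_in(n, lo, hi, iters=60):
    # Bisection, assuming J_n changes sign on [lo, hi]
    for _ in range(iters):
        mid = (lo + hi) / 2
        if J(n, lo) * J(n, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# First two zeros of J0 and first zero of J1
j0_1, j0_2 = zero_in(0, 2.0, 3.0), zero_in(0, 5.0, 6.0)
j1_1 = zero_in(1, 3.0, 4.5)

# The zero of J1 falls strictly between consecutive zeros of J0
print(j0_1, j1_1, j0_2)
```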