Friday, July 02, 2010

A (possibly) less cryptic way to calculate the cross product

Here is the latest entry in my "how to take simple, well-understood problems and convert them to horrible-looking exercises in matrix multiplication" file. (I'd say about 99% of my graduate work so far should also be stored in this rather unwieldy file...)

The cross product (which I think only exists in ordinary, three-dimensional space) of two vectors $\mathbf{a}$ and $\mathbf{b}$, written $\mathbf{a}\times\mathbf{b}$, is a way of combining $\mathbf{a}$ and $\mathbf{b}$ to produce a new vector, oriented perpendicularly to both $\mathbf{a}$ and $\mathbf{b}$. In symbols, this is
\[ \mathbf{a} \times \mathbf{b} = |\mathbf{a}| |\mathbf{b}| \sin \left(\theta \right) \mathbf{\hat{n}}, \]
where $|\mathbf{a}| = \sqrt{{a_1}^2+{a_2}^2+{a_3}^2}$ is the length of $\mathbf{a}$, and $\mathbf{\hat{n}}$ is an as-yet-unknown unit vector orthogonal to both $\mathbf{a}$ and $\mathbf{b}$. The standard way of calculating the components of $\mathbf{a} \times \mathbf{b}$ is by constructing a very strange-looking determinant:
\[ \mathbf{a} \times \mathbf{b} = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} \times \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} = \left| \begin{matrix} \mathbf{\hat{e}}_1 & \mathbf{\hat{e}}_2 & \mathbf{\hat{e}}_3 \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3\end{matrix} \right| = \begin{bmatrix} a_2 b_3 - a_3 b_2 \\ a_3 b_1 - a_1 b_3 \\ a_1 b_2 - a_2 b_1 \end{bmatrix}. \]
I remember feeling puzzled when I first learned this as an undergraduate. It does give you the right answer, but seriously, what's going on with that determinant? Its first row is a set of unit vectors, and its second and third rows are populated by the elements of the vectors $\mathbf{a}$ and $\mathbf{b}$. There's probably some obvious explanation that I'm missing, but to me, that determinant has always been maddeningly cryptic.

Anyway, I was waiting for Maple to grind through some calculations for me this afternoon, so I thought I'd take a crack at writing the cross product in a somewhat less confusing way. Looking at the result of the above equation, it's clear that we're looking for the following three inner products:
\[ \mathbf{a} \times \mathbf{b} = \begin{bmatrix} a_2 b_3 - a_3 b_2 \\ a_3 b_1 - a_1 b_3 \\ a_1 b_2 - a_2 b_1 \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} a_2 & -a_3 \end{bmatrix} \begin{bmatrix} b_3 \\ b_2 \end{bmatrix} \\ \begin{bmatrix} a_3 & -a_1 \end{bmatrix} \begin{bmatrix} b_1 \\ b_3 \end{bmatrix} \\ \begin{bmatrix} a_1 & -a_2 \end{bmatrix} \begin{bmatrix} b_2 \\ b_1 \end{bmatrix} \end{bmatrix}. \]
It should be possible to form a matrix $\mathbf{A}$ from the components of $\mathbf{a}$, while leaving $\mathbf{b}$ untouched. (Or vice-versa.) Clearly, $\mathbf{A}$ will be a $3 \times 3$ matrix, and, since $\mathbf{a} \times \mathbf{b}$ is orthogonal to $\mathbf{a}$ and $\mathbf{b}$, it should have zeroes on its diagonal. After a bit of tinkering,
\[ \mathbf{A} = \begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix}, \]
which allows us to compute the cross product by multiplying $\mathbf{b}$ on the left by the matrix $\mathbf{A}$:
\[ \mathbf{A} \mathbf{b} = \mathbf{a} \times \mathbf{b}. \]
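This identity, along with the tracelessness and skew-symmetry noted below, is easy to verify numerically. A minimal sketch (the helper name `skew` and the example vectors are mine, not standard API):

```python
import numpy as np

def skew(a):
    """Skew-symmetric matrix A built from a, so that A @ b == np.cross(a, b)."""
    return np.array([
        [0.0,   -a[2],  a[1]],
        [a[2],   0.0,  -a[0]],
        [-a[1],  a[0],  0.0],
    ])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
A = skew(a)

print(A @ b)                 # [-3.  6. -3.], same as np.cross(a, b)
print(np.allclose(A, -A.T))  # True: skew-symmetric
print(np.trace(A))           # 0.0: traceless
```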
Two immediately evident properties of $\mathbf{A}$ are that it is traceless (since its diagonal elements are all zero),
\[ \text{tr}\left(\mathbf{A}\right) = 0, \]
and it is skew-symmetric (equal to the negative of its transpose):
\[ \mathbf{A} = -\mathbf{A}^\mathrm{T}. \]
A quick calculation reveals that $\mathbf{A}$ has three eigenvalues: zero (with eigenvector $\mathbf{a}$ itself, since $\mathbf{A}\mathbf{a} = \mathbf{a} \times \mathbf{a} = \mathbf{0}$) and a purely imaginary pair,
\[ \lambda_0 = 0, \qquad \lambda_{\pm} = \pm i | \mathbf{a} |. \]
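The spectrum can also be checked numerically; a sketch with an arbitrary example vector (note that $0$ shows up as an eigenvalue too, with $\mathbf{a}$ itself as its eigenvector):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
A = np.array([
    [0.0,   -a[2],  a[1]],
    [a[2],   0.0,  -a[0]],
    [-a[1],  a[0],  0.0],
])

eigs = np.linalg.eigvals(A)
# Real parts are ~0; imaginary parts are 0 and +/- |a| = +/- sqrt(14)
print(np.sort(eigs.imag))
print(np.linalg.norm(a))
```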

I don't know if this is otherwise useful, but it seems to me that it's quite a bit less confusing than the determinant method of calculating $\mathbf{a} \times \mathbf{b}$. One possible application might be to define the cross product in higher-dimensional spaces, using a larger matrix $\mathbf{A}$ with the same internal structure.

1 comment:

  1. Same thought as below, looking at that determinant the first thing I think of is making a "Cat's Cradle" with strings. I don't know if boys did this so maybe I should explain, but maybe you did. So you take a circle of yarn and string it around your thumb and pinky on both hands, and the motions to make a Cat's Cradle (which was just a pattern in the string between your hands) were to thread your middle finger through the string on the opposite hand, first right to left, then to open the "cradle" to the right, then left to right, then open the cradle overall. It is hard to describe, but in my eyes that determinant looks like the cat's cradle. First you solve the "mini determinant" at the bottom of the matrix (thread right to left), then from where you left off you solve the determinant for the whole matrix (opening the cradle from the right), and then you solve the mini determinant at the top (thread from left to right). Which is a long way of saying that this then made me curious -- what is the way to expand that pattern to bigger matrices? (This may be something very simple, but for me it is not something I know how to figure out on my own!)
