Recall that the inner product is defined as a sum ($\sum_{n=0}^{N-1} x[n] \, y[n]^*$), which can also be expressed as a vector product ($y^H x$). Let's look at a couple of interesting values that sum could take.
The dot product of two signals could be rather large. If the signals are identical, it is simply the norm of the signal, squared:
$\langle x, x \rangle = \sum_{n=0}^{N-1} x[n]\, x[n]^* = \sum_{n=0}^{N-1} |x[n]|^2 = \|x\|_2^2$.
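As a quick numerical check (a minimal NumPy sketch; the signal values are arbitrary examples), the inner product of a signal with itself equals its squared 2-norm:

```python
import numpy as np

# Arbitrary example signal (complex-valued, length N = 4)
x = np.array([1 + 1j, 2 - 1j, 0.5j, -1.0])

# Inner product <x, x> = sum_n x[n] * conj(x[n])
inner_xx = np.vdot(x, x)           # np.vdot conjugates its first argument

# Squared 2-norm ||x||_2^2
norm_sq = np.linalg.norm(x) ** 2

print(inner_xx.real, norm_sq)      # both are 8.25 (up to round-off)
```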
On the other hand, it is also possible for the dot product sum to be 0. Consider the two signals below:
The inner product of those two signals is obviously zero because each pointwise product is also zero. But it is possible, of course, for products in the sum to be nonzero and still have the total add up to zero:
Whenever the inner product of two signals is zero, those signals are said to be orthogonal.
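For instance (a hypothetical pair of short signals, not taken from the figures above), the pointwise products can all be nonzero and still cancel in the sum:

```python
import numpy as np

# Two hypothetical short signals
x = np.array([1.0, 1.0, 1.0, 1.0])
y = np.array([1.0, -1.0, 1.0, -1.0])

print(x * y)          # pointwise products: [ 1. -1.  1. -1.], all nonzero
print(np.vdot(y, x))  # yet the inner product <x, y> sums to 0.0
```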
Orthogonality of harmonic sinusoids
Recall the special class of discrete-time, finite-length signals called harmonic sinusoids:
$s_k[n] = e^{j \frac{2 \pi k}{N} n}, ~~~ n,k,N\in\mathbb{Z}, ~~ 0\leq n \leq N-1, ~~ 0\leq k \leq N-1$
It is a very interesting property that any two of these sinusoids having different frequencies (i.e., $k\neq l$) are orthogonal:
$\begin{align*}\langle s_k, s_l \rangle &= \sum_{n=0}^{N-1} s_k[n]\, s_l^*[n]\\&= \sum_{n=0}^{N-1} e^{j \frac{2 \pi k}{N} n} \left(e^{j \frac{2 \pi l}{N} n}\right)^*\\&= \sum_{n=0}^{N-1} e^{j \frac{2 \pi k}{N} n} \: e^{-j \frac{2 \pi l}{N} n} \\&= \sum_{n=0}^{N-1} e^{j \frac{2 \pi}{N} (k-l) n} ~~~ \textrm{let } r=k-l \in \mathbb{Z},~ r\neq 0 \\&= \sum_{n=0}^{N-1} e^{j \frac{2 \pi}{N} r n}\\&= \sum_{n=0}^{N-1} a^n ~~~ \textrm{with } a=e^{j \frac{2 \pi}{N} r}, \textrm{ and recall } \sum_{n=0}^{N-1} a^n = \frac{1-a^N}{1-a} \\&= \frac{ 1- e^{j 2 \pi r} }{1-e^{j \frac{2 \pi r}{N}} } ~=~ 0 ~~\checkmark
\end{align*}$
The numerator is zero because $e^{j 2 \pi r}=1$ for any integer $r$, while the denominator is nonzero because $0 < |r| < N$ implies $e^{j \frac{2 \pi r}{N}} \neq 1$.

If two of these sinusoids have the same frequency ($k = l$), then their inner product is simply $N$:
$\begin{align*}\langle s_k, s_k \rangle = \|s_k \|_2^2 &= \sum_{n=0}^{N-1} |s_k[n]|^2 \\&= \sum_{n=0}^{N-1} \left|e^{j \frac{2 \pi k}{N} n}\right|^2\\&= \sum_{n=0}^{N-1} 1 ~=~ N ~~\checkmark
\end{align*}$
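These two results are easy to verify numerically; here is a minimal NumPy sketch with an arbitrary choice of $N = 8$ and arbitrary frequency indices:

```python
import numpy as np

N = 8
n = np.arange(N)

def s(k):
    """Harmonic sinusoid s_k[n] = exp(j*2*pi*k*n/N)."""
    return np.exp(1j * 2 * np.pi * k * n / N)

# Different frequencies (k = 2, l = 5): inner product is zero up to round-off
print(np.vdot(s(5), s(2)))   # roughly 0 + 0j

# Same frequency (k = l = 3): inner product equals N
print(np.vdot(s(3), s(3)))   # approximately (8+0j)
```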
So the dot product of two harmonic sinusoids is zero if their frequencies are different, and $N$ if they are the same. To make the latter value $1$ instead of $N$, the sinusoids are sometimes normalized:
$\tilde{s}_k[n]= \frac{1}{\sqrt{N}}\, e^{j \frac{2 \pi k}{N} n},
~~~ n,k,N\in\mathbb{Z}, ~~ 0\leq n \leq N-1, ~~ 0\leq k \leq N-1$
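With the $1/\sqrt{N}$ scaling, the sinusoids form an orthonormal set: each has unit norm, and distinct frequencies remain orthogonal. A sketch continuing the hypothetical $N = 8$ example:

```python
import numpy as np

N = 8
n = np.arange(N)
k = np.arange(N)

# Column k holds the normalized sinusoid exp(j*2*pi*k*n/N) / sqrt(N)
S = np.exp(1j * 2 * np.pi * np.outer(n, k) / N) / np.sqrt(N)

# S^H S is (numerically) the identity: unit norms on the diagonal,
# zero inner products everywhere else
print(np.allclose(S.conj().T @ S, np.eye(N)))   # True
```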
Matrix multiplication and inner products
Let's take a look at the formula for the matrix multiplication $y=Xa$. For notation, we will represent the value on the $n$th row and $m$th column of $X$ as $x_m[n]$, and the $m$th entry of $a$ as $\alpha_m$. The matrix multiplication looks like this:
$\begin{bmatrix}\vdots \\ y[n] \\ \vdots \end{bmatrix} = \begin{bmatrix}\vdots&\vdots&&\vdots \\
x_0[n]&x_1[n]&\cdots&x_{M-1}[n] \\ \vdots&\vdots&&\vdots \end{bmatrix} \begin{bmatrix}\alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_{M-1}\end{bmatrix}$
And the value of the $n$th entry of the product is:
$y[n] = \sum_{m=0}^{M-1} \alpha_m \, x_m[n]$
Hopefully that formula looks familiar! What it shows is that the matrix multiplication $y=Xa$ simply takes the inner product of $a$ with each row of $X$, when $X$ and $a$ are real-valued. If they are complex-valued, then the complex conjugate of one of them would have to be taken before the multiplication in order for each value of $y$ to be the inner product of the corresponding matrix row with $a$.
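A small numerical sketch of that statement, using an arbitrary real-valued $X$ and $a$ (hypothetical values chosen only for illustration):

```python
import numpy as np

# Arbitrary example: N = 3 rows, M = 2 columns
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
a = np.array([0.5, -1.0])

y = X @ a

# Each y[n] is the inner product of a with the n-th row of X
y_rowwise = np.array([np.vdot(a, X[n, :]) for n in range(X.shape[0])])
print(np.allclose(y, y_rowwise))   # True

# For complex-valued data, X @ a performs no conjugation, so one factor
# would have to be conjugated first to match the inner product exactly.
```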
The Cauchy-Schwarz inequality
Above we saw that the inner product of two vectors can be as small as 0, in which case the vectors are orthogonal, or it can be large, such as when the two vectors are identical (and the inner product is simply the norm of the vector, squared). It turns out that there is a very significant inequality that explains these two cases. It is called the Cauchy-Schwarz inequality, which states that for two vectors $x$ and $y$,
$|\langle x,y \rangle |\leq \|x\| \, \|y\|$
Now the magnitude of the inner product is always greater than or equal to 0 (being 0 if the vectors are orthogonal), so we can expand the inequality thus:
$0\leq |\langle x,y \rangle |\leq \|x\| \, \|y\|$
If we divide the inequality by $\|x\| \, \|y\|$, then we have
$0\leq \frac{|\langle x,y \rangle |}{\|x\| \, \|y\|}\leq 1$
This explains why we can define
$\cos \theta_{x,y} = \frac{\operatorname{Re}\{\langle x, y \rangle\}}{\|x\|_2 \, \|y\|_2},$
since $|\operatorname{Re}\{\langle x, y \rangle\}| \leq |\langle x, y \rangle|$, so this ratio, like the cosine function, always lies between $-1$ and $1$.
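As a quick check of the bound and the resulting angle, here is a sketch with two arbitrary real-valued vectors:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, -1.0, 0.5, 1.0])

lhs = abs(np.vdot(y, x))                       # |<x, y>|
rhs = np.linalg.norm(x) * np.linalg.norm(y)    # ||x|| ||y||
print(lhs <= rhs)                              # True, per Cauchy-Schwarz

# The "angle" between the two signals
cos_theta = np.real(np.vdot(y, x)) / rhs
print(np.degrees(np.arccos(cos_theta)))        # roughly 66 degrees here
```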
Now there are many different proofs of the inequality, and it is something of a mathematical pastime to appreciate their various constructions. But for signal processing purposes, we are more interested in the utility of the inequality. What it basically says is that, when the lengths of the two vectors are taken into consideration, the magnitude of their normalized inner product ranges in value from 0 to 1. Because of this, we can see that the inner product provides a kind of comparison between two different vectors. It is at its smallest when they are, in a sense, very different from each other, or orthogonal. It is at its peak when the vectors are simply scalar multiples of each other, or in other words, are very alike.
It turns out there are many applications in which we would like to determine how similar one signal is to another. How does a digital communication system decide whether the signal corresponding to a "0" was transmitted or the signal corresponding to a "1"? How does a radar or sonar system find targets in the signal it receives after transmitting a pulse? How do computer vision systems find faces in images? For each of these questions, the similarity/dissimilarity bounds established by the Cauchy-Schwarz inequality help us determine the answer.
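As one toy illustration of the communication example (a hedged sketch, not any particular system's actual receiver; the waveforms, noise level, and seed are all made up), a receiver can compare the received signal against each candidate template with a normalized inner product and choose the more similar one:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
n = np.arange(N)

# Two hypothetical templates representing a transmitted "0" and "1"
s0 = np.cos(2 * np.pi * 3 * n / N)
s1 = np.cos(2 * np.pi * 7 * n / N)

# Received signal: the "1" waveform corrupted by additive noise
r = s1 + 0.5 * rng.standard_normal(N)

def similarity(r, s):
    """Normalized inner product magnitude, bounded by 1 via Cauchy-Schwarz."""
    return abs(np.vdot(s, r)) / (np.linalg.norm(r) * np.linalg.norm(s))

# Pick whichever template the received signal resembles more closely
print("decide 1" if similarity(r, s1) > similarity(r, s0) else "decide 0")
```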