Sorry for not posting for a while. Work had been piling up over the past few weeks, but now I’m good :). In this post, I want to give you all an introduction to digital signal processing (although for some this may be review)!

**What are signals and systems?**

Before we can talk about any type of signal processing, we first have to understand **signals** and **systems**. A signal is simply information that varies with time. An example of a signal might be the amplitude of an earthquake with respect to time as shown below.

Another example might be the voltage of a component with respect to time.

Essentially, signals encode information with respect to time, and generally, they are acquired by some sort of measuring device or sensors. The goal of signal processing as a whole is to devise **systems** that take in signals and output other signals that give valuable information.

An example of such a system might be a CD player that reads a signal based on the surface of the CD and outputs an audio signal. A more critical system might be found in a self-driving car, which reads a signal of how close the car is to the vehicle in front of it and outputs a signal representing a danger score for the occupants.

As you can see, signal processing is ubiquitous and useful in everyday applications. Now let’s get started with the real introduction to digital signal processing.

**Digital Signal Processing vs Analog Signal Processing**

Note that there are two different types of signals in signal processing, namely **digital** and **analog** signals. Analog signals are the signals we see in nature, that is, signals that live on a *continuous* time domain, meaning they have some value at every point in time, like sound or temperature. Digital signals differ in that they live on a **discrete** time domain, meaning data is only collected at certain intervals. These signals could also represent sound or temperature; however, in this case the curve would not be continuous.

Digital signal processing is when we directly deal with digital signals coming in as inputs whereas analog signal processing is where we deal with analog signals coming in as inputs. Both types of processing have their own advantages, but the reason why the term “digital signal processing” is more widely used in the engineering and computer science fields is simply that it is the only type of processing possible on our computers.

Our computers have a CPU (Central Processing Unit) that updates all information coming in, being stored, or going out at a certain clock rate, meaning there is a limit to how fast a computer can take in information. This means that while many signals, such as sound and luminosity, are modeled by analog signals in the real world, computers, because of their clock rate (which may, for example, be 1.8 GHz, or 1 cycle every 0.00000000055555… seconds), can only sample this data at certain intervals.

Since computing actually allows us to perform operations on the data that we get, it makes sense that we chose to work with digital signals, as they can actually be stored and computed on. Analog signals can be worked with too, but they are much harder to deal with. For that reason and many more, digital signal processing has prevailed, although it is true that analog signal processing has its own advantages as well.

**How Do We Mathematically Represent Discrete Signals?**

So now that we know that we have to work with digital signals, how do we mathematically represent them? Generally, when working with signals, we use the notation x[n], where x represents our signal and n represents our current time step. It’s very similar to how we would represent a continuous function, except we use the bracket notation to emphasize that we are working in a discretized space.
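To make this concrete, here is a minimal sketch in Python/NumPy (the 5 Hz sine and the 100 Hz sampling rate are arbitrary choices for illustration) of building a discrete signal x[n] by sampling a continuous one:

```python
import numpy as np

# A "continuous" signal: a 5 Hz sine wave, f(t) = sin(2*pi*5*t).
# (The 5 Hz frequency and sampling rate are illustrative choices.)
fs = 100            # sampling rate in Hz: 100 samples per second
n = np.arange(50)   # discrete time steps n = 0, 1, ..., 49
t = n / fs          # the real time instant each sample corresponds to

x = np.sin(2 * np.pi * 5 * t)   # x[n]: the sampled (digital) signal

print(x[0])    # the sample at n = 0, i.e. t = 0
print(len(x))  # 50 samples total
```

Each index n maps back to a real time t = n / fs, which is exactly the “certain intervals” idea from the sampling discussion above.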

When manipulating digital signals, one of the most basic things we can do is flip, shift, and scale the signal to adjust it to our needs. These transformations are very akin to how transformations work with normal mathematical functions.

**Flipping Signals**

To flip a signal about the y-axis, we simply transform x[n] into x[-n]. If we were to draw this out, we can see this horizontal flip in our signal.

For a vertical flip about the x-axis, we can simply transform x[n] into -x[n]. We can see this in the image below.

**Shifting**

To shift our function left and right, we can transform x[n] into x[n-a], where a represents how many units to the right the signal is shifted. Below, we can see the signal shifted 2 units to the right.

**Scaling**

Whenever we scale a discrete signal, we can scale it horizontally or vertically. When we scale vertically, we just multiply the function by a coefficient, like 2x[n] for example, and the function stretches vertically by a factor of 2. However, since the horizontal axis only accepts integer values of n, when scaling horizontally, we lose any values of n that are no longer integers after the transformation. Therefore, compressing signals like this causes data loss, as shown below.

In this case, we can see that we only retain every second value of our signal when doing a compression. Just as with normal functions, we can combine all our different types of transformations to create new signals, with expressions like 2x[3n-4], where we can see a vertical stretch by a factor of 2, a rightward shift by 4 units, and a horizontal compression by a factor of 3 **in that order**. We always do **flip, shift, scale**.
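The flip, shift, and scale operations above can be sketched in NumPy (a minimal illustration; the sample values and the shift amount are arbitrary, and I assume the signal is zero before n = 0 so a right shift pads with zeros):

```python
import numpy as np

# A short test signal x[n] for n = 0..7 (values chosen arbitrarily).
x = np.array([0, 1, 2, 3, 4, 3, 2, 1])

# Vertical scale: 2*x[n] multiplies every sample by 2.
scaled = 2 * x

# Flip about the y-axis: y[n] = x[-n]. With an array, reversing
# the sample order plays the signal backwards.
flipped = x[::-1]

# Shift right by a = 2: y[n] = x[n - 2]. Samples move 2 slots later;
# we pad the front with zeros (assuming the signal is 0 before n = 0).
a = 2
shifted = np.concatenate([np.zeros(a, dtype=x.dtype), x[:-a]])

# Horizontal compression by 3: y[n] = x[3n]. We only keep every 3rd
# sample, so information is lost -- exactly the data loss described above.
compressed = x[::3]

print(shifted)     # [0 0 0 1 2 3 4 3]
print(compressed)  # [0 3 2]
```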

**Signal Decomposition**

Now that we can do basic transformations on our signal, you must be wondering: why go through all this trouble when we can’t even express complicated signals, like the one below, explicitly as a function of n like x[n] = n^2?

And to that, I say you’re **completely right!**

When we are dealing with such monstrosities, it becomes difficult to analyze signals, especially when dealing with more complicated signal processing techniques. At one point, it becomes necessary to mathematically represent such signals in terms of common functions that we use all the time.

**Introducing Fourier Series**

So how can we represent signals in terms of common mathematical functions?

Using Fourier Series!

Fourier Series allows us to take essentially any reasonably well-behaved periodic function and write it as a sum of cosines and sines, as shown below. This means that a huge class of signals can be written as a sum of cosines and sines.

Understand that, for now, the signal (or function) we are looking at is continuous, because Fourier series were not motivated by digital signal processing or even analog signal processing, but by a desire to simply explore mathematics and find solutions to certain types of differential equations. We will later find a way to discretize the Fourier Series to work with digital signals.

The Fourier Series of a function f(t) with a period of 2\pi can be represented as such

f(t) = c + \sum_{n = 1}^{\infty}a_ncos(nt) + \sum_{n=1}^{\infty}b_nsin(nt)

Later, we will generalize this formula to functions of any period and to functions that are not periodic. But the intuition behind this formula, that adding sinusoidal waves of different wavelengths/frequencies and amplitudes can produce practically any type of function, seems reasonable because we have control over how we scale our component functions in the x and y directions. There is also deeper reasoning for why this should make sense, in a more linear-algebra sense of the term: the functions sin(nt) and cos(nt) are orthogonal functions that form a basis for periodic waves. But assuming this is true, the real question is: how do we find our amplitudes a_n, b_n, and c?

**Finding the amplitude of our Fourier Series**

To start off, realize there is a theorem: if we consider the set of functions

\begin{cases} sin(nt) & n = 1,..., \infty \\ cos(mt) & m = 0,...,\infty\end{cases}

then any two distinct functions in that set are orthogonal on the interval [-\pi,\pi], which in the case of functions means that given two distinct functions p(t) and q(t) from the set, \int_{-\pi}^{\pi} p(t)q(t)dt = 0.

For example, if I choose p(t) = sin(2t) and q(t) = sin(3t), this theorem states that \int_{-\pi}^{\pi} sin(2t)sin(3t)dt = 0.

Now, the reason why this theorem is important is because imagine, if I wrote our original function f(t) like so

f(t) = c + a_1cos(t) + b_1sin(t) + ... + a_ncos(nt) + b_nsin(nt) + ...

where n just delineates an arbitrary term we are looking at. Then, assuming we want to find a_n for that term, we can take advantage of our theorem and multiply our function by cos(nt).

f(t)cos(nt) = ccos(nt) + a_1cos(t)cos(nt) + b_1sin(t)cos(nt) + ... + a_ncos(nt)cos(nt) + b_nsin(nt)cos(nt) + ...

Now, let’s integrate both sides

\int_{-\pi}^{\pi}f(t)cos(nt)dt = \int_{-\pi}^{\pi}ccos(nt)dt + \int_{-\pi}^{\pi}a_1cos(t)cos(nt)dt + \int_{-\pi}^{\pi}b_1sin(t)cos(nt)dt + ... + \int_{-\pi}^{\pi}a_ncos(nt)cos(nt)dt + \int_{-\pi}^{\pi}b_nsin(nt)cos(nt)dt + ...

I think you guys see something! The majority of the integrals turn out to be equal to 0 because of the theorem we stated previously, where distinct functions in that set have an integral that evaluates to 0. If we eliminate the integrals we can based on that theorem (also note the first term can be written as ccos(0t)cos(nt), hence why it cancels out), we get

\int_{-\pi}^{\pi}f(t)cos(nt)dt = \int_{-\pi}^{\pi}a_ncos^2(nt)dt

Evaluating (note that \int_{-\pi}^{\pi}cos^2(nt)dt = \pi) and solving, we get

a_n = \frac{1}{\pi}\int_{-\pi}^{\pi}f(t)cos(nt)dt

Similarly, if we go through the same process, multiplying by sin(nt) instead, we get

b_n = \frac{1}{\pi}\int_{-\pi}^{\pi}f(t)sin(nt)dt

Also notice that to find the constant term c, we can multiply by cos(0t), because c = ccos(0t) = c * 1, and use orthogonality, and we get

c = \frac{1}{2\pi}\int_{-\pi}^{\pi}f(t)dt = \frac{1}{2}a_0

**Tada! We found all our amplitudes, meaning we can find the Fourier Series of any function with a period of 2\pi.**
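As a numerical sanity check (a sketch, not part of the derivation; the square wave is just a convenient test function, whose coefficients are known analytically to be a_n = 0 and b_n = 4/(n\pi) for odd n, 0 for even n), we can approximate the coefficient integrals with a Riemann sum:

```python
import numpy as np

# A square wave with period 2*pi on one period [-pi, pi]:
# f(t) = 1 for 0 < t < pi, -1 for -pi < t < 0.
t = np.linspace(-np.pi, np.pi, 200001)
f = np.sign(t)
dt = t[1] - t[0]

def a(n):
    # a_n = (1/pi) * integral of f(t)cos(nt) over [-pi, pi]
    return np.sum(f * np.cos(n * t)) * dt / np.pi

def b(n):
    # b_n = (1/pi) * integral of f(t)sin(nt) over [-pi, pi]
    return np.sum(f * np.sin(n * t)) * dt / np.pi

print(b(1), 4 / np.pi)  # b_1 should be close to 4/pi
print(b(2))             # even coefficients vanish
print(a(1))             # all a_n vanish for this odd function
```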

**Extending Fourier Series**

To extend it to any period, namely 2L, we can simply make it so the period of each of our component functions can be represented as \frac{2L}{n}, so that for n = 1 the component with the largest period has period 2L, and thus our function has a period of 2L. This means that, by scaling, we can write our new formulas for a_n and b_n like this

a_n = \frac{1}{L}\int_{-L}^{L}f(t)cos(\frac{nt\pi}{L})dt

b_n = \frac{1}{L}\int_{-L}^{L}f(t)sin(\frac{nt\pi}{L})dt

where our Fourier Series is

f(t) = \frac{a_0}{2} + \sum_{n = 1}^{\infty}a_ncos(\frac{nt\pi}{L}) + \sum_{n=1}^{\infty}b_nsin(\frac{nt\pi}{L})

**Boom, we got our decomposition for an analog signal, but how do we make this formula work for discrete signals that aren’t continuous?**

**Fourier Transform**

Now, the Fourier Series is normally a big topic in itself, but because we aren’t dealing with continuous signals, it is best if we go straight to discretizing the Fourier Series. To do this, we first have to create a continuous Fourier Transform, which essentially takes a continuous signal with respect to time and transforms it into a function that gives the amplitude of its constituent sinusoids at a given frequency. Believe it or not, we already have the transform, but instead of being expressed in terms of frequency, it is expressed in terms of a counter, n.

a_n = \frac{1}{L}\int_{-L}^{L}f(t)cos(\frac{nt\pi}{L})dt

b_n = \frac{1}{L}\int_{-L}^{L}f(t)sin(\frac{nt\pi}{L})dt

To make our transform based on frequency, we first make our life simpler by creating a complex-valued quantity c_n = a_n - b_ni, so that instead of finding two separate values, we can find one complex value whose real and imaginary parts each have meaning.

c_n = a_n - b_ni = \frac{1}{L}\int_{-L}^{L}f(t)cos(\frac{nt\pi}{L})dt - i\frac{1}{L}\int_{-L}^{L}f(t)sin(\frac{nt\pi}{L})dt = \frac{1}{L}\int_{-L}^{L}f(t)(cos(\frac{nt\pi}{L}) - isin(\frac{nt\pi}{L}))dt = \frac{1}{L}\int_{-L}^{L}f(t)e^{-i\frac{n\pi}{L}t}dt

c_n= \frac{1}{L}\int_{-L}^{L}f(t)e^{-i\frac{n\pi}{L}t}dt

Now that we have combined our Fourier Coefficients into one value c_n, we will set out for a new and better objective. **Our objective is to create a formula for a function that would give us the amplitudes of the sinusoids that compose our function f(t), not based on some index n, but based on the sinusoid’s frequency, F.**

To do this, we will first try multiplying both sides of our equation by L.

Lc_n = \int_{-L}^{L}f(t)e^{-i\frac{n\pi}{L}t}dt

Now consider that F = \frac{n}{2L}, because if we look back at the Fourier Series, we can see that every sinusoidal term had \frac{nt\pi}{L} inside it, like sin(\frac{nt\pi}{L}). This means that the period of our sinusoids, just through scaling, would be P = \frac{2\pi}{\frac{n\pi}{L}} = \frac{2L}{n}, meaning F, which is the reciprocal of P, is \frac{n}{2L}.

Knowing this, let’s try using our formula for F to transform our current formula, Lc_n = \int_{-L}^{L}f(t)e^{-i\frac{n\pi}{L}t}dt

First, realize we can rewrite our left-hand side as a function of F and L, because n = 2FL by rearranging our frequency formula, and thus the left-hand side can be written as Lc_{2FL}, or our function X(F,L).

Thus, X(F,L) = \int_{-L}^{L}f(t)e^{-i\frac{n\pi}{L}t}dt

Adjusting our right side to get rid of n (substituting n = 2FL into the exponent), we can change the integrand using our formula as such.

Thus, X(F,L) = \int_{-L}^{L}f(t)e^{-i2Ft\pi}dt

Because L really just represents a boundary for where we are analyzing our function, whereas with real signals we are analyzing indefinitely, we can just let it go to infinity and essentially eliminate L from our equation, giving us our **Fourier Transform**.

Thus, X(F) = \int_{-\infty}^{\infty}f(t)e^{-i2Ft\pi}dt

**Essentially, this formula tells us that by choosing a specific frequency on a continuous domain, we can get the amplitude of that constituent sinusoid of our function in the form of a complex number with its own components. We essentially have a graph of amplitudes and frequencies as shown below.**
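To see the transform in action (a hedged numerical sketch; the rectangular pulse f(t) = 1 for |t| < 1/2 is just a convenient test function, whose transform is known to be sin(\pi F)/(\pi F), available in NumPy as np.sinc), we can approximate the integral with a Riemann sum:

```python
import numpy as np

# The pulse is 1 on [-1/2, 1/2] and 0 elsewhere, so we only need to
# integrate over that range.
t = np.linspace(-0.5, 0.5, 100001)
dt = t[1] - t[0]

def X(F):
    # X(F) = integral of f(t) e^{-i 2 pi F t} dt, with f(t) = 1 on the pulse
    return np.sum(np.exp(-1j * 2 * np.pi * F * t)) * dt

# Compare the numerical transform with the known answer, sinc(F).
for F in [0.0, 0.5, 1.0, 2.5]:
    print(F, X(F).real, np.sinc(F))
```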

**Discrete Fourier Transform**

Unfortunately, we can’t truly compute an integral using computers, because they can only take in signals digitally. Thus, we have to create a discretized analog of the Fourier Transform that computers can use to break down signals into a frequency domain, where they can then be further processed using mathematical methods.

**How, you ask?**

Well, naturally the summation is a discretized version of the integral, so we can take our original Fourier Transform, use a summation instead and try to see what happens. Because lots of new variables are introduced in the discretized Fourier Transform and something called a “frequency bin” is introduced, I shall first show you the equation of the discrete Fourier transform in its full glory.

X_k = \sum_{n = 0}^{N-1}x_ne^{-\frac{i2kn\pi}{N}}

**Crazy, right?** Just for reference, I’ll also show the original, continuous Fourier Transform here

X(F) = \int_{-\infty}^{\infty}f(t)e^{-i2Ft\pi}dt

**So how did we get this crazy Discrete Fourier Transform?** Well, first let us start with the summation. Because we can only sample at discrete points in time, instead of calculating our transform over an indefinitely long period of time (using an integral from negative infinity to positive infinity), we calculate a summation from when our signal first starts (using a discrete index n that starts at 0) to when our signal ends, which in this case is N-1, where N is the number of points we are going to sample. Notice that the last index is not N, because we chose our countable index n to start at 0 instead of 1. We are still on the same kind of interval as the normal Fourier Transform; we have just rewritten it in a discretized version that makes sense for Digital Signal Processing.

You can honestly think of n as a discrete version of time. As for f(t), all I did was change the name of f to x, and replaced t with its discretized version, n. In essence, x_n and f(t) are the same thing.

We still keep the e term, and its exponent mostly stays the same, but in the exponent we replace F with \frac{k}{N} and t with its discretized version, n. Now, \frac{k}{N} might be confusing because we don’t know what k represents, but think about it: if we are only getting discretized data from our signal, we can’t get information on the amplitudes of the sinusoids at every frequency. As I said before, a discrete signal inherently contains less information than an analog signal, and thus we can’t assign an amplitude to every frequency. So F is not sufficient in our case, as the DFT itself will output a discrete signal on our “frequency axis”.

Instead, we introduce frequency bins to alleviate the issue. If we can’t get information on every frequency, then we can group frequencies into small bins like 0-1 Hz, 1-2 Hz, 2-3 Hz, and assign each bin an amplitude. Basically, instead of looking at every frequency, we only look at specific frequencies.

k is actually an index over those bins: we have N bins, one per sample point, and k is a pointer to each of them, running from 0 to N-1. If we want the amplitude of, say, the bin at index 2, we set k = 2. There is more math behind this, but essentially, just accept for now that \frac{k}{N} is analogous to F. Because k is an index, we don’t actually have a traditional frequency domain in our DFT; k is not a frequency, it is an index. We can calculate the corresponding frequency for each k value using something called a “sampling frequency” if we want. **But yeah, that’s really it!**

**If we want to find the amplitude of a certain frequency (provided it corresponds to one of our bins), we can just use the DFT and get a graph like the one shown below.** **Because corresponding sines and cosines (those with the same frequency) can be combined, the value at each frequency represents the amplitude of those combined sinusoids.**
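The DFT formula above can be implemented directly as a double loop (a minimal sketch; the 32-point sine test signal is an arbitrary choice). NumPy’s FFT computes the exact same sum, just far more efficiently, so we can use it as a check:

```python
import numpy as np

# The DFT formula, implemented directly: X_k = sum_n x_n e^{-i 2 pi k n / N}.
def dft(x):
    N = len(x)
    X = np.zeros(N, dtype=complex)
    for k in range(N):        # one output value per frequency bin k
        for n in range(N):    # sum over all N input samples
            X[k] += x[n] * np.exp(-1j * 2 * np.pi * k * n / N)
    return X

# Sample a sine that completes 3 cycles over the N = 32 sample window...
N = 32
n = np.arange(N)
x = np.sin(2 * np.pi * 3 * n / N)

X = dft(x)

# ...and the energy shows up in bin k = 3 (and its mirror bin k = N - 3,
# which real-valued inputs always produce).
print(np.argmax(np.abs(X[:N // 2])))

# The O(N^2) loop matches numpy's FFT, which computes the same sum
# in O(N log N).
print(np.allclose(X, np.fft.fft(x)))
```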

**Wrapping Up an Introduction to Digital Signal Processing**

Now, there are more complicated things in Digital Signal Processing, but essentially with the discrete Fourier Transform (DFT), we can do lots of manipulation work with the signals that computers collect as we can decompose them. There is an algorithm called the Fast Fourier Transform (FFT) that allows us to compute the DFT quickly, but I think that will be an article for another time only because I felt like DFT was already pushing it for an *Introduction to Digital Signal Processing*. If you have questions or enjoyed this blog post, consider subscribing or leaving a comment! Thanks and stay safe! Below I will post resources to help you get started with digital signal processing and get a more formal introduction to digital signal processing.

**Talking about Digital Signal Processing, I betcha that the new Roku Express uses tons of different signal processing techniques handling internet connections, audio signals, and more. If you have some time and love streaming shows, why not get one? (I personally have one, and man is it good!)**

byteofmath.com is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to amazon.com.

**Resources**

https://www.analog.com/en/design-center/landing-pages/001/beginners-guide-to-dsp.html