Topics covered: Representation of aperiodic signals; Discrete-time Fourier transform properties: periodicity, linearity, symmetry, time shifting and frequency shifting, differencing and summation, time and frequency scaling, differentiation in frequency, Parseval's relation; Convolution and modulation, duality, polar representation; Calculation of frequency and impulse responses; Summary of relationships between continuous-time and discrete-time Fourier series and Fourier transforms.
Instructor: Prof. Alan V. Oppenheim
Lecture 11: Discrete-Time Fourier Transform
Related Resources
Discrete-time Fourier Transform (PDF)
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
[MUSIC PLAYING]
PROFESSOR: Last time we began the development of the discrete-time Fourier transform. And just as with the continuous-time case, we first treated the notion of periodic signals. This led to the Fourier series. And then we generalized that to the Fourier transform, and finally incorporated within the framework of the Fourier transform both aperiodic and periodic signals.
In today's lecture, what I'd like to do is expand on some of the properties of the Fourier transform, and indicate how those properties are used for a variety of things. Well, let's begin by reviewing the Fourier transform as we developed it last time. It, of course, involves a synthesis equation and an analysis equation. The synthesis equation expressing x of n, the sequence, in terms of the Fourier transform, and the analysis equation telling us how to obtain the Fourier transform from the original sequence.
And I draw your attention again to the basic point that the synthesis equation essentially corresponds to decomposing the sequence as a linear combination of complex exponentials with amplitudes that are, in effect, proportional to the Fourier transform. Now, the discrete-time Fourier transform, just as the continuous-time Fourier transform, has a number of important and useful properties. Of course, as I stressed last time, it's a function of a continuous variable. And it's also a complex-valued function, which means that when we represent it in general it requires a representation in terms of its real part and imaginary part, or in terms of magnitude and angle.
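For reference, the transform pair being described can be written out as follows. The lecture says "X of omega"; below it is written as X(e^{j omega}), the notation used in the course text.

```latex
% Synthesis equation: x[n] as a linear combination of complex exponentials
x[n] \;=\; \frac{1}{2\pi} \int_{2\pi} X(e^{j\omega})\, e^{j\omega n}\, d\omega

% Analysis equation: the Fourier transform obtained from the sequence
X(e^{j\omega}) \;=\; \sum_{n=-\infty}^{+\infty} x[n]\, e^{-j\omega n}
```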
Also, as I indicated last time, the Fourier transform is a periodic function of frequency, and the periodicity is with a period of 2 pi. And so it says, in effect, that if we shift the frequency variable by an integer multiple of 2 pi, the Fourier transform repeats. And I stress again that the underlying basis for this periodicity property is the fact that it's the set of complex exponentials that are inherently periodic in frequency. And so, of course, any representation using them would, in effect, generate a periodicity with this period of 2 pi.
Just as in continuous time, the Fourier transform has important symmetry properties. In particular, if the sequence x of n is real-valued, then the Fourier transform is conjugate symmetric. In other words, replacing omega by minus omega is equivalent to applying complex conjugation to the Fourier transform. And as a consequence of this conjugate symmetry, the real part and the magnitude both have an even symmetry, whereas the imaginary part and the phase angle are both odd symmetric. And these are symmetry properties, again, that are identical to the symmetry properties that we saw in continuous time.
Well, let's see this in the context of an example that we worked last time and that we'll want to draw attention to in reference to several issues as this lecture goes along. And that is the Fourier transform of a real damped exponential. So the sequence that we are talking about is a to the n u of n, and let's consider a to be positive. We saw last time that the Fourier transform for this sequence algebraically is of this form. And if we look at its magnitude and angle, the magnitude I show here. And the magnitude, as we see, has the properties that we indicated. It is an even function of frequency. Of course, it's a function of a continuous variable. And it, in addition, is periodic with a period of 2 pi.
On the other hand, if we look at the phase angle below it, the phase angle has a symmetry which is odd symmetric. And that's indicated clearly in this picture. And of course, in addition to being odd symmetric, it naturally has to be, again, a periodic function of frequency with a period of 2 pi.
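As a quick numerical illustration of these symmetry and periodicity properties, the following sketch evaluates the transform 1/(1 - a e^{-j omega}) of a to the n u of n on a grid of frequencies and checks that the magnitude is even, the phase is odd, and the whole transform repeats with period 2 pi. The value a = 0.5 is just an illustrative choice, not one taken from the lecture.

```python
import numpy as np

a = 0.5                                   # illustrative value, 0 < a < 1
omega = np.linspace(-np.pi, np.pi, 1001)  # one period of the continuous frequency variable

# Fourier transform of a**n * u[n]:  X(e^{jw}) = 1 / (1 - a e^{-jw})
X = 1.0 / (1.0 - a * np.exp(-1j * omega))

# Conjugate symmetry for a real sequence: X(-w) = conj(X(w)),
# so the magnitude is even and the phase is odd.
print(np.allclose(np.abs(X), np.abs(X[::-1])))       # even magnitude -> True
print(np.allclose(np.angle(X), -np.angle(X[::-1])))  # odd phase      -> True

# Periodicity with period 2*pi: shifting the frequency by 2*pi changes nothing.
X_shifted = 1.0 / (1.0 - a * np.exp(-1j * (omega + 2 * np.pi)))
print(np.allclose(X, X_shifted))                     # True
```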
OK, so we have some symmetry properties. We have this inherent periodicity in the Fourier transform, which I'm stressing very heavily because it forms the basic difference between continuous time and discrete time. In addition to these properties of the Fourier transform, there are a number of other properties that are particularly useful in the manipulation of the Fourier transform, and, in fact, in using the Fourier transform to, for example, analyze systems represented by linear constant coefficient difference equations.
There in the text is a longer list of properties, but let me just draw your attention to several of them. One is the time shifting property. And the time shifting property tells us that if x of omega is the Fourier transform of x of n, then the Fourier transform of x of n shifted in time is that same Fourier transform multiplied by this factor, which is a linear phase factor. So time shifting introduces a linear phase term.
And, by the way, recall that in the continuous-time case we had a similar situation, namely that a time shift corresponded to a linear phase. There also is a dual to the time shifting property, which is referred to as the frequency shifting property, which tells us that if we multiply a time function by a complex exponential, that, in effect, generates a frequency shift. And we'll see this frequency shifting property surface in a slightly different way shortly, when we talk about the modulation property in the discrete-time case.
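In the notation of the transform pair written out earlier, the two shifting properties just mentioned read as follows, with n_0 an integer shift and omega_0 a fixed frequency.

```latex
% Time shifting: a delay of n_0 samples introduces a linear phase factor
x[n - n_0] \;\longleftrightarrow\; e^{-j\omega n_0}\, X(e^{j\omega})

% Frequency shifting: multiplication by a complex exponential shifts the transform
e^{j\omega_0 n}\, x[n] \;\longleftrightarrow\; X(e^{j(\omega - \omega_0)})
```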
Another important property that we'll want to make use of shortly is linearity, which follows in a very straightforward way from the Fourier transform definition. And the linearity property says simply that the Fourier transform of a sum, or linear combination, is the same linear combination of the Fourier transforms. Again, that's a property that we saw in continuous time. And, also, among other properties, there is a Parseval's relation for the discrete-time case that says something similar to continuous time, specifically that the energy in the sequence is proportional to the energy in the Fourier transform taken over one period. Or, said another way, the energy in the time domain is proportional to the power in this periodic Fourier transform.
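Written out, Parseval's relation for the discrete-time Fourier transform takes the standard form below.

```latex
\sum_{n=-\infty}^{+\infty} \bigl| x[n] \bigr|^{2}
\;=\;
\frac{1}{2\pi} \int_{2\pi} \bigl| X(e^{j\omega}) \bigr|^{2}\, d\omega
```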
OK, so these are some of the properties. And, as I indicated, they somewhat parallel properties that we saw in continuous time. Two additional properties that will play important roles in discrete time, just as they did in continuous time, are the convolution property and the modulation property. The convolution property is the property that tells us how to relate the Fourier transform of the convolution of two sequences to the Fourier transforms of the individual sequences. And, not surprisingly, what happens-- and this can be demonstrated algebraically-- is that the Fourier transform of the convolution is simply the product of the Fourier transforms.
So, the Fourier transform maps convolution in the time domain to multiplication in the frequency domain. Now convolution, of course, arises in the context of linear time-invariant systems. In particular, if we have a system with an impulse response h of n and input x of n, the output is their convolution. The convolution property then tells us that in the frequency domain, the Fourier transform of the output is the product of the Fourier transform of the impulse response and the Fourier transform of the input.
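A small numerical check of the convolution property, using two short made-up sequences (the specific numbers are arbitrary, not from the lecture): on a grid of frequencies, the transform of the convolution matches the product of the individual transforms.

```python
import numpy as np

def dtft(x, omega):
    """Evaluate the DTFT of a finite-length sequence x assumed to start at n = 0."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.exp(-1j * w * n)) for w in omega])

x = np.array([1.0, 2.0, 0.5])      # arbitrary finite input sequence
h = np.array([0.5, -0.25, 0.125])  # arbitrary finite "impulse response"
y = np.convolve(x, h)              # y[n] = (x * h)[n]

omega = np.linspace(-np.pi, np.pi, 257)
# Convolution property: Y(e^{jw}) = H(e^{jw}) X(e^{jw})
print(np.allclose(dtft(y, omega), dtft(x, omega) * dtft(h, omega)))  # True
```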
Now we also saw and have talked about a relationship between the Fourier transform of the impulse response and what we call the frequency response, in the context of the response of a system to a complex exponential. Specifically, complex exponentials are eigenfunctions of linear time-invariant systems. Putting one of these into the system gives us, as an output, a complex exponential with the same complex frequency multiplied by what we refer to as the eigenvalue. And as you saw in the video course manual, this eigenvalue, this constant multiplier on the exponential, is in fact the Fourier transform of the impulse response evaluated at that frequency.
Now, we saw exactly the same statement in continuous time. And, in fact, we used that statement-- the frequency response interpretation of the Fourier transform of the impulse response-- to motivate an intuitive interpretation of the convolution property. Now, formally the convolution property can be developed by taking the convolution sum, applying the Fourier transform to it, doing the appropriate substitution of variables, interchanging the order of summations, et cetera, and all the algebra works out to show that it's a product.
But as I stressed when we discussed this with continuous time, the interpretation-- the underlying interpretation-- is particularly important to understand. So let me review it again in the discrete-time case, and it's exactly the same for discrete time or for continuous time. Specifically, the argument was that the Fourier transform of a sequence or signal corresponds to decomposing it into a linear combination of complex exponentials. What's the amplitude of those complex exponentials? It's basically proportional to the Fourier transform.
If we think of pushing that linear combination through the system, then each of those complex exponentials gets its amplitude modified, or multiplied, by the frequency response, which we saw is the Fourier transform of the impulse response. So the amplitudes of the output complex exponentials are the amplitudes of the input complex exponentials multiplied by the frequency response. And the Fourier transform of the output, in effect, expresses the output as a summation, or integration-- a linear combination of all of these exponentials with the appropriate complex amplitudes.
So it's important, in thinking about the convolution property, to think about it in terms of nothing more than the fact that we've decomposed the input, and we're now modifying separately, through multiplication, through scaling, the amplitudes of each of the complex exponential components. Now, what we saw in continuous time is that this interpretation and the convolution property led to an important concept, namely the concept of filtering. The idea is that if we decompose the input as a linear combination of complex exponentials, we can separately attenuate or amplify each of those components. And, in fact, we could exactly pass some set of frequencies and totally eliminate another set of frequencies.
So, again, just as in continuous time, we can talk about an ideal filter. And what I show here is the frequency response of an ideal lowpass filter. The ideal lowpass filter, of course, passes exactly, with a gain of 1, frequencies around 0, and totally eliminates other frequencies. However, an important distinction between continuous time and discrete time is that, whereas in continuous time an ideal filter passes a band of frequencies and totally eliminates everything else out to infinity, in the discrete-time case the frequency response is periodic.
So, obviously, the frequency response must periodically repeat for the lowpass filter. And in fact we see that here. If we look at the lowpass filter, we've eliminated some frequencies, but then we pass, of course, frequencies around 2 pi, and also frequencies around minus 2 pi, and for that matter around any multiple of 2 pi. Although it's important to recognize that because of the inherent periodicity of the complex exponentials, these frequencies are exactly the same frequencies as these frequencies. So it's lowpass filtering interpreted in terms of frequencies over a range from minus pi to pi.
Well, just as we talk about a lowpass filter, we can also talk about a highpass filter. And a highpass filter, of course, would pass high frequencies. In the continuous-time case, high frequencies meant frequencies that go out to infinity. In the discrete-time case, of course, the highest frequencies we can generate are frequencies up to pi. And once our complex exponentials go past pi, then, in fact, we start seeing the lower frequencies again. Let me indicate what I mean.
If we think in the context of the lowpass filter, these are low frequencies. As we move along the frequency axis, these become high frequencies. And as we move further along the frequency axis, what we'll see when we get to, for example, a frequency of 2 pi are the same low frequencies that we see around 0. In particular then, an ideal highpass filter in the discrete-time case would be a filter that eliminates these frequencies and passes frequencies around pi.
OK, so we've seen the convolution property and its interpretation in terms of filtering. More broadly, the convolution property in combination with a number of the other properties that I introduced, in particular the time shifting and linearity property, allows us to generate or analyze systems that are described by linear constant coefficient difference equations. And this, again, parallels very strongly the discussion we carried out in the continuous-time case.
In particular, let's think of a discrete-time system that is described by a linear constant coefficient difference equation. And we'll restrict the initial conditions on the equation such that it corresponds to a linear time-invariant system. And recall that, in fact, in our discussion of linear constant coefficient difference equations, it is the condition of initial rest on the equation that guarantees for us that the system will be causal, linear, and time-invariant.
OK, now let's consider a first-order difference equation, a system described by a first-order difference equation. And we've talked about the solution of this equation before. Essentially, we run the solution recursively. Let's now consider generating the solution by taking advantage of the properties of the Fourier transform.
Well, just as we did in continuous time, we can consider applying the Fourier transform to both sides of this equation. And the Fourier transform of y of n, of course, is Y of omega. And then, using the time shifting property, the Fourier transform of y of n minus 1 is Y of omega multiplied by e to the minus j omega. And so we have this; using the linearity property, we can carry down the scale factor and add these two together as they're added here. And the Fourier transform of x of n is X of omega.
Well, we can solve this equation for the Fourier transform of the output in terms of the Fourier transform of the input and an appropriate complex scale factor. And simply solving this for Y of omega yields what we have here. In going from this point to this point, we've used both the time shifting property and the linearity property. At this point, we can recognize that the Fourier transform of the output is the product of the Fourier transform of the input and some complex function. And from the convolution property, then, that complex function must in fact correspond to the frequency response or, equivalently, the Fourier transform of the impulse response.
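Written out, and assuming the first-order equation has the standard form used in the course text, y[n] - a y[n-1] = x[n], the steps just described are:

```latex
% Transform both sides, using linearity and the time-shifting property:
Y(e^{j\omega}) - a\, e^{-j\omega}\, Y(e^{j\omega}) = X(e^{j\omega})

% Solve for the transform of the output:
Y(e^{j\omega}) = \frac{1}{1 - a\, e^{-j\omega}}\; X(e^{j\omega})

% By the convolution property, the factor multiplying X(e^{j\omega})
% must be the frequency response:
H(e^{j\omega}) = \frac{1}{1 - a\, e^{-j\omega}}
```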
So if we want to determine, say, the impulse response of the system, then, having identified the Fourier transform of the impulse response, which is the frequency response, it becomes a matter of inverse transforming to get the impulse response.
Well, how do we inverse transform? Of course, we could do it by attempting to go through the synthesis equation for the Fourier transform. Or we can do as we did in the continuous-time case, which is to take advantage of what we know. In particular, we know from an example that we worked before that this is in fact the Fourier transform of the sequence a to the n times u of n. And so, essentially by inspection-- very similar to what has gone on in continuous time-- we can then solve for the impulse response of the system.
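As a quick check on that inverse transform by inspection, one can run the difference equation recursively with a unit impulse as input, again assuming the form y[n] - a y[n-1] = x[n] with initial rest; the output matches a to the n times u of n. The value a = 0.5 is only an illustrative choice.

```python
import numpy as np

a = 0.5                      # illustrative coefficient, |a| < 1
N = 20                       # number of output samples to compute
x = np.zeros(N); x[0] = 1.0  # unit impulse, delta[n]

# Run y[n] = a*y[n-1] + x[n] recursively, assuming initial rest (y[-1] = 0).
y = np.zeros(N)
y[0] = x[0]
for n in range(1, N):
    y[n] = a * y[n - 1] + x[n]

# The impulse response obtained by inspection from the frequency response:
h = a ** np.arange(N)        # a**n * u[n] for n = 0, 1, ..., N-1
print(np.allclose(y, h))     # True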
OK, so that procedure follows very much the kind of procedure that we've carried out in continuous time. And this, of course, is discussed in more detail in the text. Well, let's look at that example then. Here we have the impulse response associated with the system described by that particular difference equation. And to the right of that, we have the associated frequency response.
And one of the things that we notice-- and this is drawn for a positive, between 0 and 1-- is that it is an approximation to a lowpass filter, because it tends to attenuate the high frequencies and retain and, in fact, amplify the low frequencies. Now if, instead, we picked a to be negative, between minus 1 and 0, then the impulse response in the time domain looks like this. And the corresponding frequency response looks like this. And that becomes an approximation to a highpass filter.
So, in fact, a first-order difference equation, as we see, has a frequency response, depending on the value of a, that either looks approximately like a lowpass filter for a positive or a highpass filter for a negative, very much like the first-order differential equation looked like a lowpass filter in the continuous-time case. And, in fact, what I'd like to illustrate is the filtering characteristics-- or an example of filtering-- using a first-order difference equation. And the example that I'll illustrate is a filtering of a sequence that in fact is filtered very often for very practical reasons, namely a sequence which represents the Dow Jones Industrial Average over a fairly long period.
And we'll process the Dow Jones Industrial Average first through a first-order difference equation where, if we begin with a equal to 0, then, referring to the frequency response that we have here, a equal to 0 would simply pass all frequencies. As a becomes positive, we start to retain mostly low frequencies, and the larger a gets, while still less than 1, the more it attenuates the high frequencies relative to the low frequencies.
So let's watch the filtering, first with a positive and we'll see it behave as a lowpass filter, and then with a negative and we'll see the difference equation behaving as a highpass filter. What we see here is the Dow Jones Industrial Average over roughly a five-year period from 1927 to 1932. And, in fact, that big dip in the middle is the famous stock market crash of 1929. And we can see that following that, in fact, the market continued a very long downward trend.
And what we now want to do is process this through a difference equation. Above the Dow Jones average we show the impulse response of the difference equation. Here we've chosen the parameter a equal to 0. And the impulse response will be displayed on an expanded scale in relation to the scale of the input and, for that matter, the scale of the output. Now with the impulse response shown here which is just an impulse, in fact, the output shown on the bottom trace is exactly identical to the input. And what we'll want to do now is increase, first, the parameter a, and the impulse response will begin to look like an exponential with a duration that's longer and longer as a moves from 0 to 1. Correspondingly we'll get more and more lowpass filtering as the coefficient a increases from 0 toward 1.
So now we are increasing the parameter a. We see that the bottom trace in relation to the middle trace in fact is looking more and more smoothed or lowpass-filtered. And here now we have a fair amount of smoothing, to the point where the stock market crash of 1929 is totally lost. And in fact I'm sure there are many people who wish that through filtering we could, in fact, have avoided the stock market crash altogether.
Now, let's decrease a from 1 back towards 0. And as we do that, we will be taking out the lowpass filtering. And when a finally reaches 0, the impulse response of the filter will again be an impulse, and so the output will be once again identical to the input. And that's where we are now.
All right, now we want to continue to decrease a so that it becomes negative, moving from 0 toward minus 1. And what we will see in that case is more and more highpass filtering on the output in relation to the input. And this will be particularly evident in, again, the region of high frequencies represented by sharp transitions, which, of course, the market crash of 1929 would represent. So here, now, a is decreasing toward minus 1. We see that the high frequencies, or rapid variations, are emphasized. And finally, let's move from minus 1 back towards 0, taking out the highpass filtering and ending up with a equal to 0, corresponding to an impulse response which is an impulse, in other words, an identity system. And let me stress once again that the time scale on which we displayed the impulse response is an expanded time scale in relation to the time scale on which we displayed the input and the output.
OK, so we see that, in fact, a first-order difference equation is a filter. And, in fact, it's a very important class of filters, and it's used very often to do approximate lowpass and highpass filtering.
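That demonstration can be sketched in a few lines, assuming the same first-order recursion as above. The input series here is made-up data standing in for the Dow Jones average, and the specific values of a are illustrative only.

```python
import numpy as np

def first_order_filter(x, a):
    """y[n] = a*y[n-1] + x[n], assuming initial rest (y[-1] = 0)."""
    y = np.zeros(len(x))
    prev = 0.0
    for n in range(len(x)):
        y[n] = a * prev + x[n]
        prev = y[n]
    return y

# Made-up "market index": a slow trend plus rapid day-to-day fluctuations.
rng = np.random.default_rng(0)
n = np.arange(1000)
x = 200.0 + 0.05 * n + 10.0 * rng.standard_normal(1000)

y_low  = first_order_filter(x, a=0.9)    # a near +1: smooths, approximate lowpass (note the low-frequency gain 1/(1-a))
y_high = first_order_filter(x, a=-0.9)   # a near -1: emphasizes rapid variations, approximate highpass
y_id   = first_order_filter(x, a=0.0)    # a = 0: identity, output equals input
print(np.allclose(y_id, x))              # True
```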
Now, in addition to the convolution property, another important property that we had in continuous time, and that we have in discrete time, is the modulation property. The modulation property tells us what happens in the frequency domain when you multiply signals in the time domain. In continuous time, the modulation property corresponded to the statement that if we multiply in the time domain, we convolve the Fourier transforms in the frequency domain. And in discrete time we have very much the same kind of relationship. The only real distinction between these is that in the discrete-time case, in carrying out the convolution, it's an integration only over a 2 pi interval. And what that corresponds to is what's referred to as a periodic convolution, as opposed to the continuous-time case, where what we have is an aperiodic convolution.
So, again, in the modulation property we have a convolution in the frequency domain in discrete time that is very much like the one in continuous time. The only real difference is that here we're convolving periodic functions. And so it's a periodic convolution, which involves an integration only over a 2 pi interval rather than an integration from minus infinity to plus infinity.
Well, let's take a look at an example of the modulation property, which will then lead to one particular, and very useful, application of the modulation property in discrete time. The example that I want to pick is one in which we consider modulating a signal with another signal, x1 of n as I've indicated here, which is minus 1 to the n. Essentially, modulating any signal with this corresponds to taking the original signal and alternating the algebraic signs as we go through it.
Now, in applying the modulation property, of course, what we need to do is develop the Fourier transform of this signal. This signal, which I can write either as minus 1 to the n or as e to the j pi n, since e to the j pi is equal to minus 1, is a periodic signal. And it's the periodic signal that I show here. And recall that to get the Fourier transform of a periodic signal, one way to do it is to generate the Fourier series coefficients for the periodic signal, and then identify the Fourier transform as an impulse train where the heights of the impulses are proportional, with a proportionality factor of 2 pi, to the Fourier series coefficients.
So let's first work out what the Fourier series is, and for this example, in fact, it's fairly easy. Here is the general synthesis equation for the Fourier series. And for our particular example, if we look back at the curve above, what we recognize is that the period is equal to 2, namely it repeats after 2 points. Then capital N is equal to 2, and so we can just write this out with the two terms. And the two terms involved are a0, the 0-th coefficient, that's with k equal to 0, and a1, with k equal to 1, where we've substituted in capital N equal to 2.
All right, well, we can do a little bit of algebra here, obviously crossing off the factors of 2. And what we recognize, if we compare this expression with the original signal, which is e to the j pi n, is that a0, the 0-th coefficient, is 0-- that's the DC term-- and the coefficient a1 is equal to 1. So we've done it simply by essentially inspecting the Fourier series synthesis equation.
OK, now, if we want to get the Fourier transform for this, we take those coefficients and essentially generate an impulse train where we choose as values for the impulses 2 pi times the Fourier series coefficients. So, the Fourier series coefficients are a0 equal to 0 and a1 equal to 1. So, notice that in the plot that I've shown here of the Fourier transform of x1 of n, we have the 0-th coefficient, which happens to be 0, and so I haven't indicated an impulse there. We have the coefficient a1, and the coefficient a1 occurs at a frequency which is omega 0, and omega 0 in fact is equal to pi because the signal is e to the j pi n. Well, what's this impulse over here? Well, that impulse corresponds to the Fourier series coefficient a sub minus 1.
And, of course, if we drew this out over a longer frequency axis, we would see lots of other impulses, because the Fourier transform periodically repeats or, equivalently, the Fourier series coefficients periodically repeat. So this is the coefficient a0 and this is the coefficient a1, each with a factor of 2 pi-- that is, 2 pi times a0 and 2 pi times a1. And then this is simply an indication that it's periodically repeated.
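Collecting that result: with period N = 2, the coefficients and the resulting impulse-train transform are as below, the sum over k expressing the periodic repetition just described.

```latex
% Fourier series of x_1[n] = (-1)^n = e^{j\pi n}, with period N = 2:
x_1[n] = \sum_{k=0}^{1} a_k\, e^{jk(2\pi/2)n} = a_0 + a_1\, e^{j\pi n},
\qquad a_0 = 0, \quad a_1 = 1

% Fourier transform: impulses of area 2\pi at odd multiples of \pi
X_1(e^{j\omega}) = 2\pi \sum_{k=-\infty}^{+\infty} \delta(\omega - \pi - 2\pi k)
```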
All right. Now, let's consider what happens if we take a signal and multiply it-- modulate it-- by minus 1 to the n. Well, in the frequency domain that corresponds to a convolution. Let's consider a signal x2 of n which has a Fourier transform as I've indicated here. Then the Fourier transform of the product of x1 of n and x2 of n is the convolution of these two spectra. And recall that convolving something with an impulse train, as this is, simply corresponds to taking that something and placing it at the position of each of the impulses. So, in fact, the result of the convolution of this with this is the spectrum that I indicate here, namely this spectrum shifted up to pi and of course to minus pi, and then of course not only to pi but to 3 pi and 5 pi, et cetera. And so this spectrum, finally, corresponds to the Fourier transform of minus 1 to the n times x2 of n, where x2 of n is the sequence whose spectrum was X2 of omega.
OK, now, this is in fact an important, useful, and interesting point. What it says is if I have a signal with a certain spectrum and if I modulate-- multiply-- that signal by minus 1 to the n, meaning that I alternate the signs, then it takes the low frequencies-- in effect, it shifts the spectrum by pi. So it takes the low frequencies and moves them up to high frequencies, and will incidentally take the high frequencies and move them to low frequencies.
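A numerical check of that statement, reusing the small DTFT evaluation from earlier with an arbitrary finite sequence: the transform of minus 1 to the n times x of n equals the transform of x of n evaluated at omega minus pi.

```python
import numpy as np

def dtft(x, omega):
    """Evaluate the DTFT of a finite sequence x assumed to start at n = 0."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.exp(-1j * w * n)) for w in omega])

x2 = np.array([1.0, 0.8, 0.64, 0.512, 0.4096])   # arbitrary finite sequence
n = np.arange(len(x2))
x_mod = ((-1.0) ** n) * x2                        # (-1)^n * x2[n]

omega = np.linspace(-np.pi, np.pi, 257)
# Modulation by (-1)^n = e^{j*pi*n} shifts the spectrum by pi:
print(np.allclose(dtft(x_mod, omega), dtft(x2, omega - np.pi)))   # True
```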
So, in fact, we in essence saw this when I talked about the example of the sequence a to the n times u of n. Let me draw your attention to the fact that when a is positive, we have this sequence, and its Fourier transform is as I show on the right. For a negative, the sequence is identical to the a positive case but with alternating signs. And the Fourier transform of that, as you can now see, and verify algebraically if you'd like, is identical to this spectrum simply shifted by pi. So it says, in fact, that going from a positive to a negative corresponds algebraically to multiplying the impulse response by minus 1 to the n. And in the frequency domain, the effect of that, essentially, is shifting the spectrum by pi. And we can interpret that in the context of the modulation property.
Now, it's interesting that what that says is the following: if we have a system which corresponds to a lowpass filter, as I indicate here, with an impulse response h of n-- and it can be any approximation to a lowpass filter, or even an ideal lowpass filter-- and we want to convert that to a highpass filter, we can do that by generating a new system whose impulse response is minus 1 to the n times the impulse response of the lowpass filter. And this modulation by minus 1 to the n will take the frequency response of this system and shift it by pi, so that what's going on here at low frequencies will now go on here at high frequencies.
This also says, incidentally, that if we look at an ideal lowpass filter and an ideal highpass filter, and for comparison we choose the cutoff frequencies, or the bandwidths of the filters, to be equal, then since this ideal highpass filter is this ideal lowpass filter with the frequency response shifted by pi, the modulation property tells us that in the time domain, what that corresponds to is an impulse response multiplied by minus 1 to the n.
So it says that the impulse response of the highpass filter, or equivalently the inverse Fourier transform of the highpass filter frequency response, is minus 1 to the n times the impulse response for the lowpass filter. That all follows from the modulation property.
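In equation form, the lowpass-to-highpass relationship just described is:

```latex
% Modulating the impulse response by (-1)^n shifts the frequency response by \pi:
h_{hp}[n] = (-1)^{n}\, h_{lp}[n]
\quad\Longleftrightarrow\quad
H_{hp}(e^{j\omega}) = H_{lp}\bigl(e^{j(\omega - \pi)}\bigr)
```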
Now there's another way, an interesting and useful way, that modulation can be used to implement or convert from lowpass filtering to highpass filtering. The modulation property tells us that multiplying in the time domain by a complex exponential corresponds to shifting in the frequency domain. And the example that we happened to pick says that if you multiply, or modulate, by minus 1 to the n, that takes low frequencies and shifts them to high frequencies.
What that tells us, as a practical and useful notion, is the following. Suppose we have a system that we know is a lowpass filter, and it's a good lowpass filter. How might we use it as a highpass filter? Well, one way to do it, instead of shifting its frequency response, is to take the original signal, shift its low frequencies to high frequencies and its high frequencies to low frequencies by multiplying the input signal, the original signal, by minus 1 to the n, process that with a lowpass filter where now what's sitting at the low frequencies were the high frequencies. And then unscramble it all at the output so that we put the frequencies back where they belong. And I summarize that here.
Let's suppose, for example, that this system is a lowpass filter, and so it lowpass-filters whatever comes into it. Down below, I indicate taking the input and first interchanging the high and low frequencies through modulation with minus 1 to the n. We then do the lowpass filtering, where what's sitting at the low frequencies here were the high frequencies of the original signal. And then, after the lowpass filtering, we move the frequencies back where they belong by again modulating with minus 1 to the n. And that, in fact, turns out to be a very useful notion for applying a fixed lowpass filter to do highpass filtering, and vice versa.
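Here is a sketch of that modulate, lowpass filter, modulate arrangement. The three-point moving average used as the "fixed lowpass filter" is just a stand-in for whatever lowpass system is available; the point is only the structure of the computation.

```python
import numpy as np

def lowpass(x):
    """Stand-in fixed lowpass filter: a simple 3-point weighted moving average."""
    h = np.array([0.25, 0.5, 0.25])
    return np.convolve(x, h, mode="same")

def highpass_via_lowpass(x):
    """Use the fixed lowpass filter as a highpass filter:
    modulate by (-1)^n, lowpass filter, then modulate by (-1)^n again."""
    n = np.arange(len(x))
    sign = (-1.0) ** n
    return sign * lowpass(sign * x)

rng = np.random.default_rng(1)
x = rng.standard_normal(256)      # arbitrary test signal
y = highpass_via_lowpass(x)       # the high-frequency content of x is retained
```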
OK, now, what we've seen and what we've talked about are the Fourier representations for discrete-time signals and, prior to that, continuous-time signals. And we've seen some very important similarities and differences. And what I'd like to do is conclude this lecture by summarizing those various relationships all in one package, and in fact drawing your attention to the similarities, differences, and comparisons between them.
Well, let's begin this summary by first looking at the continuous-time Fourier series. In the continuous-time Fourier series, we have a periodic time function expanded as a linear combination of harmonically-related complex exponentials. And there are an infinite number of these that are required to do the decomposition. And we saw an analysis equation which tells us how to get these Fourier series coefficients through an integration on the original time function.
And notice in this that what we have is a continuous periodic time function. What we end up with in the frequency domain is a sequence of Fourier series coefficients, which in fact is an infinite sequence-- namely, in general it requires all values of k. We had then generalized that to the continuous-time Fourier transform, and, in effect, in doing that, the synthesis equation of the Fourier series became an integral relationship in the Fourier transform. And we now have a continuous-time function which is no longer periodic-- this was for the aperiodic case-- represented as a linear combination of complex exponentials, infinitesimally close in frequency, with complex amplitudes given by X of omega d omega divided by 2 pi.
And we had of course the corresponding analysis equation that told us how to get X of omega. Here we have a continuous-time function which is aperiodic, and a continuous function of frequency which is aperiodic.
The conceptual strategy in the discrete-time case was very similar, with some differences in the resulting relationships because of some inherent differences between continuous time and discrete time. We began with the discrete-time Fourier series, corresponding to representing a periodic sequence through a set of complex exponentials, where now we only required a finite number of these, because there are only a finite number of distinct harmonically-related complex exponentials. That's an inherent property of discrete-time complex exponentials. And so we have a discrete, periodic time function.
And we ended up with a set of Fourier series coefficients, which of course are discrete, as Fourier series coefficients are, and which periodically repeat because the associated complex exponentials periodically repeat. We then used an argument similar to the continuous-time case for going from periodic time functions to aperiodic time functions. And we ended up with a relationship describing a representation for aperiodic discrete-time signals in which the synthesis equation went from a summation to an integration, since the frequencies are now infinitesimally close, involving frequencies only over a 2 pi interval, and for which the amplitude factor is X of omega d omega divided by 2 pi. And this term, X of omega, which is the Fourier transform, is given by this summation, and of course involves all of the values of x of n.
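For reference, here is a side-by-side summary of the four representations just described, written in a notation close to the course text's; T and N are the periods, omega_0 = 2 pi / T, and the angle-bracketed subscripts denote a sum or integral over any one period.

```latex
% Continuous-time Fourier series (periodic x(t); discrete, aperiodic coefficients a_k):
x(t) = \sum_{k=-\infty}^{+\infty} a_k\, e^{jk\omega_0 t},
\qquad a_k = \frac{1}{T}\int_{T} x(t)\, e^{-jk\omega_0 t}\, dt

% Continuous-time Fourier transform (aperiodic x(t); aperiodic X(\omega)):
x(t) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} X(\omega)\, e^{j\omega t}\, d\omega,
\qquad X(\omega) = \int_{-\infty}^{+\infty} x(t)\, e^{-j\omega t}\, dt

% Discrete-time Fourier series (periodic x[n]; discrete, periodic coefficients a_k):
x[n] = \sum_{k=\langle N\rangle} a_k\, e^{jk(2\pi/N)n},
\qquad a_k = \frac{1}{N}\sum_{n=\langle N\rangle} x[n]\, e^{-jk(2\pi/N)n}

% Discrete-time Fourier transform (aperiodic x[n]; continuous, periodic X(e^{j\omega})):
x[n] = \frac{1}{2\pi}\int_{2\pi} X(e^{j\omega})\, e^{j\omega n}\, d\omega,
\qquad X(e^{j\omega}) = \sum_{n=-\infty}^{+\infty} x[n]\, e^{-j\omega n}
```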
And so the important difference between the continuous-time and discrete-time case kind of arose, in part, out of the fact that discrete time is discrete time, continuous time is continuous time, and the fact that complex exponentials are periodic in discrete time. The harmonically-related ones periodically repeat whereas they don't in continuous time.
Now this, among other things, has an important consequence for duality. And let's go back again and look at this equation, this pair of equations. And clearly there is no duality between these two equations. This involves a summation, this involves an integration. And so, in fact, if we make reference to duality, there isn't duality in the continuous-time Fourier series.
However, for the continuous-time Fourier transform, we're talking about aperiodic time functions and aperiodic frequency functions. And, in fact, when we look at these two equations, we see very definitely a duality. In other words, the time function effectively is the Fourier transform of the Fourier transform. There's a little time reversal in there, but basically that's the result. And, in fact, we had exploited that duality property when we talked about the continuous-time Fourier transform.
With the discrete-time Fourier series, we have a duality indicated by the fact that we have a periodic time function and a sequence which is periodic in the frequency domain. And in fact, if you look at these two expressions, you see the duality very clearly. And so it's the discrete-time Fourier series that has a duality.
And finally the discrete-time Fourier transform loses the duality because of the fact, among other things, that in the time domain things are inherently discrete whereas in the frequency domain they're inherently continuous. So, in fact, here there is no duality.
OK, now, that says that there's a difference in duality between continuous time and discrete time. And there's one more very important piece to the duality relationships. And we can see that first algebraically by comparing the continuous-time Fourier series and the discrete-time Fourier transform. The continuous-time Fourier series in the time domain is a periodic continuous function; in the frequency domain it is an aperiodic sequence. In the discrete-time case, in the time domain we have an aperiodic sequence, and in the frequency domain we have a function of a continuous variable which we know is periodic. And so, in fact, we have, in the time domain here, an aperiodic sequence, and in the frequency domain a continuous periodic function.
And in fact, if you look at the relationship between these two, what we see is a duality between the continuous-time Fourier series and the discrete-time Fourier transform. One way of thinking about that-- and this is a little bit of a tongue twister which you might want to get straightened out slowly-- is that the Fourier transform in discrete time is a periodic function of frequency. That periodic function has a Fourier series representation. What is this Fourier series? What are the Fourier series coefficients of that periodic function? Well, in fact, except for an issue of time reversal, they are the original sequence for which that's the Fourier transform. And that is the duality that I'm trying to emphasize here.
OK, well, so what we see is that these four sets of relationships all tie together in a whole variety of ways. And we will be exploiting, as the discussion goes on, the interconnections and relationships that I've talked about. Also, as we've talked about the Fourier transform, both continuous time and discrete time, two important properties that we focused on, among many of the properties, are the convolution property and the modulation property. We've also shown that the convolution property leads to a very important concept, namely filtering. The modulation property leads to an important concept, namely modulation. We've also very briefly indicated how these properties and concepts have practical implications. In the next several lectures, we'll focus in more specifically, first on filtering, and then on modulation. And as we'll see, the filtering and modulation concepts really form the cornerstone of many, many signal processing ideas. Thank you.