Poles, ROCs and Z-Transforms

August 13, 2016

Why should all the poles be inside the unit circle for a discrete-time digital system to be stable? And why must we care about regions of convergence (ROC)? With only a small investment in time, it is possible to gain a very clear understanding of exactly what is going on; it is not complicated if learnt one step at a time, and there are not many steps.

Step 1: Formal Laurent Series

Despite the fancy name, Step 1 is straightforward. Digital systems operate on sequences of numbers: a sequence of numbers goes into a system and a sequence of numbers comes out. If x[n] denotes an input sequence, then an example of a discrete-time system is y[n] = x[n] + \frac12 x[n-1].

How can a specific input sequence be written down, and how can the output sequence be computed?

The direct approach is to specify each individual x[n], for example, x[n] = 0 for n < 0, x[0] = 1, x[1] = 2, x[2] = 3 and x[n] = 0 for n > 2. Direct substitution then determines the output: y[0] = x[0] + \frac12 x[-1] = 1 + \frac12 \cdot 0 = 1, y[1] = x[1] + \frac12 x[0] = 2 + \frac12 \cdot 1 = 2\frac12 and so forth.
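For readers who like to check things numerically, here is a minimal Python sketch of that direct substitution (the code and the helper name x_at are mine, purely for illustration):

```python
# Direct substitution for y[n] = x[n] + (1/2) x[n-1].
# Input: x[0] = 1, x[1] = 2, x[2] = 3, zero elsewhere.
x = [1, 2, 3]

def x_at(n):
    """Return x[n], treating indices outside the stored range as zero."""
    return x[n] if 0 <= n < len(x) else 0

y = [x_at(n) + 0.5 * x_at(n - 1) for n in range(4)]
print(y)  # [1.0, 2.5, 4.0, 1.5]
```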

The system y[n] = x[n] + \frac12 x[n-1] has a number of nice properties, two of which are linearity and time invariance. This is often expressed in textbooks by saying the system is LTI, where the L in LTI stands for Linear and the TI stands for Time Invariant. When first learning about discrete-time digital systems, usually attention is restricted to linear time-invariant systems, and indeed, it is for such systems that the Z-transform proves most useful.

Any LTI system has the property that the output is the convolution of the input with the impulse response. For example, the impulse response of y[n] = x[n] + \frac12 x[n-1] is found by injecting the input x[0] = 1 and x[n] = 0 for |n| \geq 1. This impulse as input produces the output y[0] = 1, y[1] = \frac12 and y[n] = 0 for n \notin \{0,1\}. It is common to denote the impulse response using the letter h, that is, h[0] = 1, h[1] = \frac12 and h[n] = 0 for n \notin \{0,1\}.

By the definition of convolution, the convolution of a sequence x[n] with a sequence h[n] is c[n] = \sum_{m=-\infty}^{\infty} h[m] x[n-m]. If h[0] = 1, h[1] = \frac12 and h[n] = 0 for n \notin \{0,1\} then c[n] = x[n] + \frac12 x[n-1] and this is equal to y[n]; we have verified in this particular case that the output of the original system is indeed given by convolving the input with the impulse response.
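As an optional numerical check, here is a direct implementation of that convolution sum in Python, restricted to finitely supported sequences starting at index 0 (the function name is mine):

```python
def convolve(h, x):
    """c[n] = sum_m h[m] * x[n-m] for sequences supported on 0..len-1;
    the result has length len(h) + len(x) - 1."""
    c = [0.0] * (len(h) + len(x) - 1)
    for m, hm in enumerate(h):
        for k, xk in enumerate(x):
            c[m + k] += hm * xk
    return c

h = [1, 0.5]           # impulse response of y[n] = x[n] + (1/2) x[n-1]
x = [1, 2, 3]          # the input used earlier
print(convolve(h, x))  # [1.0, 2.5, 4.0, 1.5], matching direct substitution
```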

Let’s be honest: writing h[0] = 1, h[1] = \frac12 and h[n] = 0 for n \notin \{0,1\} is tedious! An alternative is to use a formal Laurent series to represent a sequence. Here, “formal” means that we do not try to evaluate this series! A Laurent series just means instead of limiting ourselves to polynomials in a dummy variable t, we allow negative powers of t, and we allow an infinite number of non-zero terms. (The variable t has nothing to do with time; it will be replaced in Step 3 by z^{-1} where it will be referred to as the Z-transform. To emphasise, t is just a dummy variable, a placeholder.)

Precisely, the sequence h[0] = 1, h[1] = \frac12 and h[n] = 0 for n \notin \{0,1\} can be encoded as 1 + \frac12 t; isn’t that much more convenient to write? In general, an arbitrary sequence h[n] can be encoded as the formal series \sum_{n=-\infty}^{\infty} h[n] t^n. The sole purpose of the t^n term is as a placeholder for placing the coefficient h[n]. In other words, if we are given the formal series 2 t^{-5} + 3 + t^7 then we immediately know that that is our secret code for the sequence that consists of all zeros except for the -5th term which is 2, the 0th term which is 3 and the 7th term which is 1.

Being able to associate letters with encoded sequences is convenient, hence people will write x(t) = 2 t^{-5} + 3 + t^7 to mean x[-5] = 2, x[0] = 3, x[7] = 1 and x[n] = 0 for n \notin \{-5,0,7\}. Do not try to evaluate x(t) for some value of t, though; that would make no sense at this stage. (Mathematically, we are dealing with a commutative ring that involves an indeterminate t. If you are super-curious, see Formal Laurent series.)

Now, there is an added benefit! If the input to our system is x(t) = 1 + 2t + 3t^2 and the impulse response is h(t) = 1 + \frac12 t then there is a “simple” way to determine the output: just multiply these two series together! The multiplication rule for formal Laurent series is equivalent to the convolution rule for two sequences. You can check that y(t) = h(t)x(t) = (1+\frac12 t) (1 + 2t + 3t^2) = 1 + 2t + 3t^2 + \frac12 t + t^2 + \frac32 t^3 = 1 + 2\frac12 t + 4 t^2 + \frac32 t^3 (corresponding to y[0]=1, y[1]=2\frac12, y[2] = 4 etc) really is the output of our system if the input were x[n] = 0 for n < 0, x[0] = 1, x[1] = 2, x[2] = 3 and x[n] = 0 for n > 2.
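If numpy is available, the same coefficient arithmetic can be checked in one line, because multiplying the coefficient arrays as polynomials is exactly convolution (this sketch is mine, not part of the original argument):

```python
import numpy as np

h = np.array([1, 0.5])   # h(t) = 1 + (1/2) t
x = np.array([1, 2, 3])  # x(t) = 1 + 2t + 3t^2
y = np.convolve(h, x)    # coefficients of the product h(t) x(t)
print(y)                 # [1.  2.5 4.  1.5], i.e. y(t) = 1 + 2.5t + 4t^2 + 1.5t^3
```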

  • A formal Laurent series provides a convenient encoding of a discrete-time sequence.
  • Convolution of sequences corresponds to multiplication of formal Laurent series.
  • Do not think of formal Laurent series as functions of a variable. (They are elements of a commutative ring.) Just treat the variable as a placeholder.
    • In particular, there is no need (and it makes no sense) to talk about ROC (regions of convergence) if we restrict ourselves to formal Laurent series.

Step 2: Analytic Functions

This is where the fun begins. We were told not to treat a formal Laurent series as a function, but it is very tempting to do so… so let’s do just that! 🙂 The price we pay though is that we do need to start worrying about ROC (regions of convergence) for otherwise we could end up with incorrect answers.

To motivate further our desire to break the rules, consider the LTI system y[n] = \frac12 y[n-1] + x[n]. Such a system has “memory” or an “internal state” because the current output y[n] depends on a previous output y[n-1] and hence the system must somehow remember previous values of the output. We can assume that in the distant past the system has been “reset to zero” and hence the output will be zero up until the first time the input is non-zero. The impulse response is determined by setting x[0] to 1 and observing the output: h[0] = 1, h[1] = \frac12, h[2] = \frac14 and in general h[n] = \frac1{2^n} for n \geq 0. Encoding this using a formal Laurent series gives the alternative representation h(t) = \sum_{n=0}^\infty \frac1{2^n} t^n = 1 + \frac12 t + \frac14 t^2 + \cdots.
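A quick numerical sanity check, in Python, that simulating the recursion from rest really does produce h[n] = \frac1{2^n} (the code is mine):

```python
# Impulse response of y[n] = (1/2) y[n-1] + x[n], system initially at rest.
N = 8
h, y_prev = [], 0.0
for n in range(N):
    x_n = 1.0 if n == 0 else 0.0   # unit impulse input
    y_n = 0.5 * y_prev + x_n
    h.append(y_n)
    y_prev = y_n

print(h)                             # [1.0, 0.5, 0.25, 0.125, ...]
print([0.5 ** n for n in range(N)])  # identical: h[n] = (1/2)^n
```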

There is a more “efficient” representation of h(t) = \sum_{n=0}^\infty \frac1{2^n} t^n based on the Taylor series expansion \frac1{1-\alpha t} = 1 + \alpha t + \alpha^2 t^2 + \alpha^3 t^3 + \cdots. Indeed, if we were to break the rules and treat h(t) as a function, we might be tempted to write h(t) = \frac1{1 - \frac12 t} instead of h(t) = \sum_{n=0}^\infty \frac1{2^n} t^n. It turns out that, with some care, it is mathematically legitimate to do this. (Furthermore, Step 3 will explain why it is beneficial to go to this extra trouble.)

We know we can encode a sequence using a formal Laurent series, and we can reverse this operation to recover the sequence from the formal Laurent series. In Step 2 then, we just have to consider when we can encode a formal Laurent series as some type (what type?) of function, meaning in particular that it is possible to determine uniquely the formal Laurent series given the function.

Power series (and Laurent series) are studied in complex analysis: recall that every power series has a (possibly zero) radius of convergence, and within that radius of convergence, a power series can be evaluated. Furthermore, within the radius of convergence, a power series (with real or complex coefficients) defines a complex analytic function. The basic idea is that a formal Laurent series might (depending on how quickly the coefficients die out to zero) represent an analytic function on a part of the complex plane.

If the formal Laurent series only contains non-negative powers of t then it is a power series f(t) = \sum_{n=0}^\infty \alpha_n t^n and from complex analysis we know that there exists an R, the radius of convergence (which is possibly zero or possibly infinite), such that the sum converges if |t| < R and the sum diverges if |t| > R. In particular, f(t) is an analytic function on the open disk \{ t \in \mathbb{C} \mid |t| < R\}.

If the formal Laurent series only contains non-positive powers, that is, f(t) = \sum_{n=-\infty}^0 \alpha_n t^n, then we can consider it to be a power series in t^{-1} and apply the above result. Since f(t) = \sum_{n=0}^\infty \alpha_{-n} \left(\frac1t\right)^n, there exists a \tilde R (possibly zero or possibly infinite) such that f(t) is an analytic function on \{ t \in \mathbb{C} \mid |t| > \tilde R\}. [From above, the condition would be |t^{-1}| < R, which is equivalent to |t| > R^{-1}, hence defining \tilde R = R^{-1} verifies the claim.]

In the general case, a formal Laurent series decomposes as \sum_{n=-\infty}^{-1} \alpha_n t^n + \sum_{n=0}^\infty \alpha_n t^n. Both sums must converge if it is to define an analytic function, hence in the general case, a formal Laurent series defines an analytic function on a domain of the form \{ t \in \mathbb{C} \mid \tilde R < |t| < R\}.

The encoding of a sequence as an analytic function is therefore straightforward in principle: given a formal Laurent series f(t) = \sum_{n=-\infty}^\infty \alpha_n t^n, determine the constants \tilde R and R, and provided the region \{ t \in \mathbb{C} \mid \tilde R < |t| < R\} is non-empty, we can use the analytic function f(t) to encode the sequence \alpha_n. We must specify the ROC \{ t \in \mathbb{C} \mid \tilde R < |t| < R\} together with f(t) whenever we wish to define a sequence in this way; without knowing the ROC, we do not know the domain of definition of f(t), that is, we would not know for what values of t does f(t) describe the sequence we want. It is possible for a function f(t) to describe different sequences depending on the ROC! An example of this is now given.

If f(t) = \sum_{n=0}^\infty t^n then we have seen that f(t) = \frac1{1-t}. But wait! This is not entirely true. If |t| < 1 then certainly \sum_{n=0}^\infty t^n = \frac1{1-t}. Yet if |t| > 1 then \sum_{n=0}^\infty t^n diverges and certainly does not equal \frac1{1-t}. The function t \mapsto \frac1{1-t} for |t| > 1 does not describe the series \sum_{n=0}^\infty t^n. It looks like a perfectly good function though, so what series does it describe?

The region |t| > 1 is unbounded. (In fact, it is an open disc centred at the point at infinity.) This motivates us to replace t by \tau = t^{-1} so that the domain becomes |\tau| < 1 and we can attempt looking for a power series in \tau. Precisely, \frac1{1-t} = \frac1{1-\tau^{-1}} = \frac{-\tau}{1-\tau} = -\tau \sum_{n=0}^\infty \tau^n = -\sum_{n=1}^\infty \tau^n = \sum_{n=-\infty}^{-1} (-1) t^n. Therefore, the single function t \mapsto \frac1{1-t} actually encodes two different series depending on whether we treat it as a function on |t|<1 or a function on |t|>1.
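Those comfortable with a computer algebra system can watch both expansions fall out automatically. The following sketch assumes sympy is installed; the exact printed form of the output may differ slightly between versions:

```python
from sympy import symbols, series, oo

t = symbols('t')
f = 1 / (1 - t)

# Power series around the origin: valid on |t| < 1.
print(series(f, t, 0, 5))   # 1 + t + t**2 + t**3 + t**4 + O(t**5)

# Series around the point at infinity: valid on |t| > 1,
# in powers of 1/t with every coefficient equal to -1.
print(series(f, t, oo, 5))  # -1/t - 1/t**2 - 1/t**3 - ...
```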

Readers remembering their complex analysis will not find this bizarre because a holomorphic (i.e., complex differentiable) function generally requires more than one power series to represent it. A power series about a point is only valid up until a pole is encountered, after which another point must be chosen and a power series around that point used to continue the description of the function. When we change points, the coefficients of the power series will generally change. In the above example, the first series \sum_{n=0}^\infty t^n is a power series around the origin while the second series \sum_{n=-\infty}^{-1} (-1) t^n is a power series around the point at infinity and therefore naturally has different coefficients. (Thankfully there are only these two possibilities to worry about: a power series about a different point c \in \mathbb{C} would look like \sum_{n=0}^\infty \alpha_n (t-c)^n and is not of the form we described in Step 1. Only when c=0 or c = \infty does the power series match up with the formal Laurent series in Step 1.)

While it might seem that introducing analytic functions is an unnecessary complication, it actually makes certain calculations simpler! Such a phenomenon happened in Step 1: we discovered convolution of sequences became multiplication of Laurent series (and usually multiplication is more convenient than convolution). In Step 3 we will see how the impulse response of y[n] = \frac12 y[n-1] + x[n] can be written down immediately.

Step 3: The Z-transform

Define y(t) = \sum_{n=-\infty}^\infty y[n] t^n and x(t) = \sum_{n=-\infty}^\infty x[n] t^n. Clearly, the constraint y[n] = \frac12 y[n-1] + x[n] implies a relationship between x(t) and y(t), but how can we determine this relationship?

Note that y[n] = \frac12 y[n-1] + x[n] is actually an infinite number of constraints, one for each n. The first step is to think of these as a single constraint on the whole sequences \{\cdots,x[-2],x[-1],x[0],x[1],x[2],\cdots\} and \{\cdots,y[-2],y[-1],y[0],y[1],y[2],\cdots\}. We can do this either intuitively or rigorously, arriving at the same answer either way.

Intuitively, think of a table (my lack of WordPress skills prevents me from illustrating this nicely). The top row looks like \cdots \mid y[-1] \mid y[0] \mid y[1] \mid \cdots. The next row looks like \cdots \mid \frac12 y[-2] \mid \frac12 y[-1] \mid \frac12 y[0] \mid \cdots. The last row looks like \cdots \mid x[-1] \mid x[0] \mid x[1] \mid \cdots. Stack these three rows on top of each other so they line up properly: the purpose of the table is to line everything up correctly! Then another way of expressing y[n] = \frac12 y[n-1] + x[n] is by saying that the first row of the table is equal to the second row of the table plus the third row of the table, where rows are to be added elementwise.

Rigorously, what has just happened is that we are treating a sequence as a vector in an infinite-dimensional vector space: just like ( y[-1], y[0], y[1] ) is a vector in \mathbb{R}^3, we can think of ( \cdots, y[-1], y[0], y[1], \cdots ) as a vector in \mathbb{R}^\infty. Each of the three rows of the table described above is simply a vector.

To be able to write the table compactly in vector form, we need some way of going from the vector ( \cdots, y[-1], y[0], y[1], \cdots ) to the shifted-one-place-to-the-right version of it, namely ( \cdots, y[-2], y[-1], y[0], \cdots ). In linear algebra, we know that a matrix can be used to map one vector to another vector provided the operation is linear, and in fact, shifting a vector is indeed a linear operation. In abstract terms, there exists a linear operator S \colon \mathbb{R}^\infty \rightarrow \mathbb{R}^\infty that shifts a sequence one place to the right: S(\ ( \cdots, y[-1], y[0], y[1], \cdots ) \ ) = ( \cdots, y[-2], y[-1], y[0], \cdots ).

Letting \mathbf{x} and \mathbf{y} denote the vectors ( \cdots, x[-1], x[0], x[1], \cdots ) and ( \cdots, y[-1], y[0], y[1], \cdots ) respectively, we can rigorously write our system as \mathbf{y} = \frac12 S \mathbf{y} + \mathbf{x}. This is precisely the same as what we did when we said the first row of the table is equal to the second row of the table plus the third row of the table.

Our natural instinct is to collect like terms: if I denotes the identity operator (think of a matrix with ones on the diagonal), so that I\mathbf{y} = \mathbf{y}, then \mathbf{y} = \frac12 S \mathbf{y} + \mathbf{x} is equivalent to \left( I - \frac12 S \right) \mathbf{y} = \mathbf{x}, that is, \mathbf{y} =  \left( I - \frac12 S \right)^{-1} \mathbf{x}. This equation tells us the output \mathbf{y} as a function of the input \mathbf{x} — but how useful is it? (Does the inverse \left( I - \frac12 S \right)^{-1} exist, and even if it does, how can we evaluate it?)
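One way to build intuition is to truncate everything to a finite window, where S becomes an ordinary matrix with ones just below the diagonal and the inverse can be computed numerically. The following sketch, assuming numpy, is mine; it is only a finite-dimensional caricature of the infinite-dimensional operator:

```python
import numpy as np

N = 8
S = np.eye(N, k=-1)   # truncated shift operator: (S y)[n] = y[n-1]
I = np.eye(N)

x = np.zeros(N)
x[0] = 1.0            # unit impulse at n = 0

y = np.linalg.solve(I - 0.5 * S, x)   # y = (I - S/2)^{-1} x
print(y)              # [1, 0.5, 0.25, ...]: the impulse response (1/2)^n
```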

While in some cases the expression \mathbf{y} = \left( I - \frac12 S \right)^{-1} \mathbf{x} might actually be useful, often it is easier to use series rather than vectors to represent sequences: we will see that the linear operator S is equivalent to multiplication by t, which is very convenient to work with!

Precisely, since y(t) = \sum_{n=-\infty}^\infty y[n] t^n and \mathbf{y} = ( \cdots, y[-1], y[0], y[1], \cdots ) represent exactly the same sequence, it should be clear that the equation \mathbf{y} = \frac12 S \mathbf{y} + \mathbf{x} is equivalent to the equation y(t) = \frac12 t\,y(t) + x(t). Indeed, returning to the table, y(t) corresponds to the first row, x(t) corresponds to the last row, and \frac12 t\,y(t) = \frac12 t \sum_{n=-\infty}^\infty y[n] t^n = \sum_{n=-\infty}^\infty \frac12 y[n-1] t^n corresponds to the second row.

Now, from Step 2, we know that we can think of x(t) and y(t) as analytic functions (provided we are careful about the ROC). Therefore, we feel entitled to continue manipulating y(t) = \frac12 t\,y(t) + x(t) to obtain y(t) = \frac1{1-\frac12t} x(t). Since by definition the impulse response h(t) satisfies y(t) = h(t) x(t), we seem to have immediately found the impulse response h(t) = \frac1{1-\frac12t} of the original system; at the very least, it agrees with the longer calculation performed at the start of Step 2.

Remark: Actually, the only way to justify rigorously that the above manipulation is valid is to check that the answer we have found really is the correct answer. Indeed, to be allowed to perform the manipulations we must assume the existence of a domain on which both x(t) and y(t) are analytic. If we assume y(t) is “nice” and then, under that assumption, prove that y(t) is “nice”, that does not mean y(t) really is nice. It is like saying “if it is raining then we can show it really is raining, therefore it must be raining right now!”.

A rigorous justification would look something like the following. First we must make some assumption about the input sequence, for otherwise we have no idea what ROC to use. If we are interested in inputs that are uniformly bounded and which are zero up until time 0 then we can safely assume that x(t) is analytic on |t| < 1. Since 1-\frac12t is non-zero whenever |t| < 1, we know h(t) = \frac1{1-\frac12t} is analytic on |t| < 1. Therefore y(t) = h(t) x(t) will be analytic on |t| < 1. This means that y(t) can be expanded as a power series in a neighbourhood of the origin, and the coefficients of that power series are what we believe the output of the system will be. To check this really is the output of the system, it suffices to show y(t) = \frac12 t\,y(t) + x(t) for |t| < 1. This is straightforward: \frac12 t\, y(t) + x(t) = \left(\frac12 t\, h(t) + 1\right) x(t) = \left( \frac{\frac12 t}{1-\frac12t} + \frac{1-\frac12 t}{1-\frac12 t}\right) x(t) = \frac1{1-\frac12t} x(t) = h(t)x(t) = y(t) as required, where every manipulation can be seen to be valid for |t| < 1 (there are no divisions by zero or other bad things occurring).

The remark above shows it is (straightforward but) tedious to verify rigorously that we are allowed to perform the manipulations we want. It is much simpler to go the other way and define a system directly in terms of h(t) and its ROC. Then, provided the ROC of x(t) overlaps with the ROC of h(t), the output y(t) is given by h(t)x(t) on the overlap, which can be correctly expanded as a Laurent series with regard to the ROC, and the output sequence read off.

All that remains is to introduce the Z-transform and explain why engineers treat z^{-1} as a “time shift operator”.

The Z-transform is simply doing the same as what we have been doing, but using z^{-1} instead of t. Why z^{-1} rather than z? Just convention (and perhaps a bit of convenience too, e.g., it leads to stable systems being those with all poles inside the unit circle, rather than outside). In fact, sometimes z is used instead of z^{-1}, hence you should always check what conventions an author or lecturer has decided to adopt.

The rule that engineers are taught is that when you “take the Z-transform” of y[n] = \frac12 y[n-1] + x[n] then you replace y[n] by Y(z), x[n] by X(z) and y[n-1] by z^{-1} Y(z). The reason this rule works was justified at great length above: recall that as an intermediate mental step we can think of the input and output as vectors, and this led to thinking of them instead as series, because multiplication by t = z^{-1} will then shift the sequence one place to the right. Thus, Y(z) = \frac12 z^{-1} Y(z) + X(z) is asserting that the sequence y[n] is equal to the sequence x[n] plus a half times the sequence y[n] shifted to the right by one place, which is equivalent to the original description y[n] = \frac12 y[n-1] + x[n].

Step 4: Poles and Zeros

A straightforward but nonetheless rich class of LTI systems can be written in the form y[n] = a_1 y[n-1] + a_2 y[n-2] + \cdots + a_p y[n-p] + b_0 x[n] + b_1 x[n-1] + \cdots + b_q x[n-q]. Importantly, this class is “closed” in that if you connect two such systems together (the output of one connects to the input of the other) then the resulting system can again be written in the same way (but generally with larger values of p,q). Another important feature of this class of systems is that they can be readily implemented in hardware.
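This class is precisely what standard filtering routines implement. As a sketch, assuming scipy is available, scipy.signal.lfilter realises such a difference equation; note that lfilter's convention moves the feedback terms to the left-hand side, so the a-coefficients carry flipped signs relative to the equation above:

```python
import numpy as np
from scipy.signal import lfilter

# y[n] = (1/2) y[n-1] + x[n] in lfilter's convention:
# a[0] y[n] + a[1] y[n-1] + ... = b[0] x[n] + b[1] x[n-1] + ...
b = [1.0]         # feed-forward (x) coefficients
a = [1.0, -0.5]   # feedback (y) coefficients, signs flipped

impulse = np.zeros(8)
impulse[0] = 1.0
print(lfilter(b, a, impulse))   # [1. 0.5 0.25 ...]
```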

Applying the Z-transform to such a system shows that the transfer function, that is, the impulse response in the Z-domain, is a rational function. Note that the product of two rational functions is again a rational function, demonstrating that this class of systems is closed under composition, as stated above. Most of the time, it suffices to work with rational transfer functions.

If H(z) = \frac{f(z)}{g(z)} is rational (and in reduced form — no common factors) then the zeros of H(z) are the solutions of the polynomial equation f(z)=0 while the poles are the solutions of the polynomial equation g(z)=0. [More generally, if H(z) is analytic, then the zeros are the solutions of H(z)=0 and the poles are the solutions of \frac1{H(z)} = 0.] Note that a pole of H(z) is a zero of its inverse: if we invert a system, poles become zeros and zeros become poles.
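Numerically, the poles and zeros of a rational transfer function are just polynomial roots. Here is a sketch using scipy.signal.tf2zpk, with an example H(z) chosen by me:

```python
from scipy.signal import tf2zpk

# H(z) = (1 + 0.5 z^{-1}) / (1 - 0.5 z^{-1}) = (z + 0.5) / (z - 0.5).
b = [1.0, 0.5]    # numerator coefficients
a = [1.0, -0.5]   # denominator coefficients

zeros, poles, gain = tf2zpk(b, a)
print(zeros, poles, gain)   # [-0.5] [0.5] 1.0
```

Inverting this system swaps b and a, and with them the zeros and poles, as remarked above.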

Poles are important because poles are what determine the regions of convergence and hence they determine when our manipulations in Step 3 are valid. This manifests itself in poles having a physical meaning: as we will see, the closer a pole gets to the unit circle, the less stable a system becomes.

Real-world systems are causal: the output cannot depend on future input. The impulse response h[n] of a causal system is zero for n < 0. Therefore, the formal Laurent series (Step 1) representation of h[n] has no negative powers of t. Its region of convergence will therefore be of the form |t| < R for some R > 0. (If R=0 then the system would blow up!) Since z = t^{-1}, the ROC of a causal transfer function H(z) will therefore be of the form |z| > \tilde R for some \tilde R < \infty. If H(z) is rational then it has only a finite number of poles, and it suffices to choose \tilde R to be the largest of the magnitudes of the poles.

Let’s look at an unstable system: y[n] = 2 y[n-1] + x[n]. This system is clearly unstable because its impulse response is readily seen to be 1, 2, 4, 8, 16, 32, \cdots. We put in a bounded signal (x[0]=1 and x[n] = 0 for |n| \geq 1) yet obtain an unbounded output (y[n] = 2^n for n \geq 0). Taking the Z-transform gives Y(z) = 2 z^{-1} Y(z) + X(z), so that H(z) = Y(z)/X(z) = \frac1{1-2z^{-1}}. For |z|>2, this gives us the correct series representation \frac1{1-2z^{-1}} = \sum_{n=0}^\infty 2^n z^{-n}.
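Simulating the recursion makes the blow-up plain (this little Python check is mine):

```python
# y[n] = 2 y[n-1] + x[n]: a single pole at z = 2, outside the unit circle.
y_prev, out = 0.0, []
for n in range(8):
    x_n = 1.0 if n == 0 else 0.0   # bounded input: a unit impulse
    y_n = 2.0 * y_prev + x_n
    out.append(y_n)
    y_prev = y_n
print(out)   # [1.0, 2.0, 4.0, 8.0, ...]: unbounded output
```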

If we put a signal that starts at time zero (i.e., x[n] = 0 for n < 0) into a causal system we will get an output even if the system is unstable. This is because the input X(z) will have a ROC of the form |z| > R_X and H(z) has a ROC of the form |z| > R_H, so Y(z) = H(z)X(z) will have a ROC |z| > \max\{R_X,R_H\}.

If we put a signal that started at time -\infty into our system, then there might be no solution! (Intuitively, it means the system will have to have blown up before reaching time 0.) For example, X(z) = \frac1{1-z} with ROC |z| < 1 represents the signal x[n] = 1 for n \leq 0 and x[n]=0 for n > 0. We cannot form the product Y(z) = H(z) X(z) because there is no value of z for which both H(z) and X(z) are valid: one’s ROC is |z|>2 while the other’s is |z| < 1.

There are many different definitions of a sequence being bounded. Three examples are: 1) there exists an M such that, for all n, |x[n]| < M; 2) \sum_{n=-\infty}^\infty | x[n] | < \infty; and 3) \sum_{n=-\infty}^{\infty} | x[n] |^2 < \infty. Out of these three, the one that is easiest to check given only X(z) is the second: if the ROC of X(z) includes |z|=1 then, by definition (see a complex analysis textbook), \sum_{n=-\infty}^\infty x[n] z^{-n} is absolutely convergent for |z|=1, meaning \sum_{n=-\infty}^\infty |x[n] z^{-n}| < \infty, hence \sum_{n=-\infty}^\infty |x[n]| < \infty because |z|=1 implies |z^{-n}| = 1 for all n. This type of boundedness is called “bounded in l^1”. For the reverse direction, recall that the radius of convergence R is such that a power series in t will converge for all |t| < R and diverge for all |t| > R. Therefore, if the closure of the largest possible ROC of X(z) does not contain the unit circle then x[n] must be unbounded in l^1. (The boundary case is inconclusive: the power series x(t) = \sum_{n=1}^\infty \frac1{n^2} t^n has a ROC |t| < 1 yet x[n] = \frac1{n^2} is bounded. On the other hand, x(t) = \sum_{n=0}^\infty t^n has a ROC |t| < 1 and x[n] = 1 is unbounded.)

A sequence x[n] is bounded in l^1 if the largest possible ROC of X(z) includes the unit circle |z|=1. A sequence x[n] is unbounded in l^1 if the closure of the largest possible ROC does not include the unit circle.

If the ROC of the transfer function H(z) includes the unit circle, then an l^1-bounded input will produce an l^1-bounded output. Indeed, the ROC of both H(z) and X(z) will include the unit circle, therefore, the ROC of Y(z) will include the unit circle and y[n] will be l^1-bounded.

If all the poles of a causal H(z) are within the unit circle then the ROC will include the unit circle, hence a bounded input will produce a bounded output. Engineers therefore call a system stable if all the poles of H(z) are inside the unit circle.

Note that it follows from earlier remarks that if H(z) has a pole outside the unit circle then the impulse response will be unbounded. If H(z) has a pole on the unit circle then the impulse response will resonate at the corresponding frequency indefinitely. For example, if H(z) = \frac1{1-az^{-1}} where |a|=1 then h[n] = a^n for n \geq 0. This does not die away because |h[n]| = |a^n| = 1 for all n \geq 0. By writing a = e^{\jmath \omega} we get h[n] = e^{\jmath \omega n} and therefore we think of this as a resonance at frequency \omega. Furthermore, if a = r e^{\jmath \omega} where |r| < 1, then h[n] = a^n = r^n e^{\jmath \omega n} and we see that there is a damped resonance at frequency \omega. As the pole gets closer to the unit circle (that is, r \rightarrow 1) the impulse response takes longer to die out. Engineers care about pole placement!
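The effect of moving a pole towards the unit circle is easy to see numerically. In this sketch, assuming numpy, the envelope of h[n] = a^n with a = r e^{\jmath\omega} decays like r^n, so it dies out ever more slowly as r \rightarrow 1:

```python
import numpy as np

omega = np.pi / 4            # resonant frequency (arbitrary choice)
for r in (0.5, 0.9, 0.99):   # pole radius approaching the unit circle
    a = r * np.exp(1j * omega)
    h = a ** np.arange(50)   # damped resonance h[n] = a^n
    print(r, abs(h[-1]))     # |h[49]| = r^49: ~2e-15, ~6e-3, ~0.6
```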

Epilogue

  • Although often engineers can get away with just following recipes, it is safer in the long run and easier to remember the recipes if you have spent a bit more time to learn the underlying rationale for when and why the rules work.
  • Linear algebra and complex analysis are considered core subjects of an undergraduate mathematics degree because they have such broad applicability: here we saw them play integral roles in discrete-time signal processing.
    • This is because the “mathematical behaviour” of Laurent series matches the “physical behaviour” of discrete-time linear time-invariant systems.
  • There is a close relationship between the Z-transform and the Fourier transform: just make the substitution z = e^{\jmath \omega}. This is only allowed if the ROC contains the unit circle. The concept of a pole does not explicitly appear in the Fourier domain because, rather than permit z to be anywhere in the complex plane, the substitution z = e^{\jmath \omega} means we are only looking at the behaviour of H(z) on the unit circle. (A small numerical check of this substitution is given after this list.)
    • When it comes to system design, understanding where the poles and zeros are can be more productive than trying to understand the whole frequency spectrum.
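As a final sketch, assuming scipy is available: scipy.signal.freqz evaluates H(z) at points z = e^{\jmath\omega} on the unit circle, which is exactly the substitution described above.

```python
import numpy as np
from scipy.signal import freqz

# H(z) = 1 / (1 - 0.5 z^{-1}), the stable system from earlier.
b, a = [1.0], [1.0, -0.5]

w, H = freqz(b, a, worN=4)                      # H(e^{j*omega}) at 4 frequencies
H_direct = 1.0 / (1.0 - 0.5 * np.exp(-1j * w))  # the substitution done by hand
print(np.allclose(H, H_direct))                 # True
```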