Continuous Time System
Techniques in Discrete and Continuous Robust Systems
Ahmad A. Mohammad , J.A. De Abreu-Garcia , in Control and Dynamic Systems, 1996
3 Theoretical Aspects of the LBCT
This is best summarized in the following theorem:
Theorem: For any given stable, minimal balanced DTS (Adb, Bdb, Cdb, Dd, Σ, T), with distinct second order modes, there exists a unique, minimal, stable, and balanced CTS model (Acb, Bcb, Ccb, Dc = Dd, Σ), where
and Acb is the unique solution of the CT Lyapunov equations
Moreover, the CTS approximation of the DTS model has the following properties:
1. The CTS model is minimal and stable if the DTS model is minimal and stable.
2. There exists a solution for Eqs. (84)–(85), for a fixed choice of T, B, and C.
3. The solution of 2) is unique.
4. The H2 norm of the DTS is equal to the H2 norm of the CTS (notice that we only deal with the strictly proper part).
5. The initial error in the step response is identically zero.
Proof:
(1) Guaranteed by the Lyapunov equations.
(2, 3) Proved in Section 2.
(4) A direct consequence of matching the Hankel singular values of the DT and CT systems.
(5) First, it is noted that both systems share the same feedthrough term D. Moreover, from the initial value theorem, the initial response of the strictly proper part is identically zero.
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/S0090526796800673
Continuous-Time Systems
Luis F. Chaparro , in Signals and Systems using MATLAB, 2011
Publisher Summary
A continuous-time system is a system in which the input and output signals are continuous-time signals. This chapter connects signals with systems, especially in the study of linear time-invariant dynamic systems. Simple examples of systems, ranging from the vocal system to simple circuits, illustrate the use of the linear time-invariant (LTI) model and point to its practical application. At the same time, modulators show that more complicated systems need to be explored to be able to communicate wirelessly. Although a system is viewed as a mathematical transformation of an input signal (or signals) into an output signal (or signals), it is important to understand that such a transformation results from an idealized model of the physical device. A systems approach to the theory of differential equations and some features of transforms are also discussed. The general characteristics attributed to systems, such as static or dynamic, lumped- or distributed-parameter, and passive or active, are also discussed in the chapter. The Laplace transform allows transient as well as steady-state analysis, converts the solution of differential equations into an algebraic problem, and is very significant in the area of classic control. The chapter also introduces the concept of the transfer function, which connects with the impulse response and the convolution integral. The analysis of systems with continuous-time signals by means of transforms is presented. When developing a mathematical model for a continuous-time system it is important to weigh the accuracy of the model against its simplicity and practicality.
URL:
https://www.sciencedirect.com/science/article/pii/B9780123747167000053
REVIEWS
Wing-Kuen Ling , in Nonlinear Digital Filters, 2007
Controllability and observability
For a continuous time system, assume that x(0) = 0. ∀x1, if ∃t1 > 0 and u(t) such that x(t1) = x1, then the continuous time system is said to be reachable. Similarly, for a discrete time system, assume that x(0) = 0. ∀x1, if ∃n1 > 0 and u(n) such that x(n1) = x1, then the discrete time system is said to be reachable. For a continuous time system, if ∀x0, x1, ∃t1 > 0 and u(t) such that x(0) = x0 and x(t1) = x1, then the continuous time system is said to be controllable. Similarly, for a discrete time system, if ∀x0, x1, ∃n1 > 0 and u(n) such that x(0) = x0 and x(n1) = x1, then the discrete time system is said to be controllable. For LTI systems, the set of reachable states is R([B AB … A^(n−1)B]), where R(A) is defined as the range of A, that is, R(A) = {y : y = Ax}. Also, an LTI system is controllable if and only if R([B AB … A^(n−1)B]) = R^n or, in other words, rank([B AB … A^(n−1)B]) = n.
For a continuous time system, ∀x1, if ∃t1 > 0 such that x1 can be determined from y(t) and u(t) for t > t1, then the continuous time system is said to be observable. Similarly, for a discrete time system, ∀x1, if ∃n1 > 0 such that x1 can be determined from y(n) and u(n) for n > n1, then the discrete time system is said to be observable. For LTI systems, the set of unobservable states is N([C; CA; …; CA^(n−1)]),
where N(A) is defined as the null space, or kernel, of A, that is, N(A) ≡ {x : Ax = 0}. Also, an LTI system is observable if and only if N([C; CA; …; CA^(n−1)]) = {0} or, in other words, rank([C; CA; …; CA^(n−1)]) = n.
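The rank conditions above can be checked numerically. The following sketch builds the controllability and observability matrices for a two-state LTI system; the matrices A, B, C are an illustrative example, not from the chapter:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    cols = [B]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.hstack(cols)

def obsv(A, C):
    """Observability matrix [C; CA; ...; C A^(n-1)]."""
    n = A.shape[0]
    rows = [C]
    for _ in range(n - 1):
        rows.append(rows[-1] @ A)
    return np.vstack(rows)

# Illustrative two-state system
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

assert np.linalg.matrix_rank(ctrb(A, B)) == 2   # controllable: rank equals n
assert np.linalg.matrix_rank(obsv(A, C)) == 2   # observable: rank equals n
```

For large or nearly uncontrollable systems the rank test is numerically delicate; inspecting the singular values of these matrices is more informative.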
URL:
https://www.sciencedirect.com/science/article/pii/B9780123725363500028
Continuous-Time Systems
Luis F. Chaparro , Aydin Akan , in Signals and Systems Using MATLAB (Third Edition), 2019
2.3.2 Time-Invariance
A continuous-time system is time-invariant if, whenever an input x(t) produces an output y(t), a shifted input x(t − τ) (delayed or advanced) produces the original output equally shifted in time, y(t − τ). Thus

(2.9) if x(t) → y(t), then x(t − τ) → y(t − τ).
That is, the system does not age—its parameters are constant.
A system that satisfies both linearity and time-invariance is called Linear Time-Invariant, or LTI.
Remarks
1. It should be clear that linearity and time-invariance are independent of each other. Thus, one can have linear time-varying or non-linear time-invariant systems.
2. Although most actual systems are, according to the above definitions, non-linear and time-varying, linear models are used to approximate (around an operating point) the non-linear behavior, and time-invariant models are used to approximate (in short segments) the system's time-varying behavior. For instance, in speech synthesis the vocal system is typically modeled as a linear time-invariant system for intervals of about 20 ms, attempting to approximate the continuous variation in shape of the different parts of the vocal system (mouth, cheeks, nose, etc.). A better model for such a system is clearly a linear time-varying model.
3. In many cases time-invariance can be determined by identifying, if possible, the input and the output, and letting the rest represent the parameters of the system. If these parameters change with time, the system is time-varying. For instance, if the input x(t) and the output y(t) of a system are related by the equation y(t) = f(t)x(t), the parameter of the system is the function f(t), and if it is not constant the system is time-varying. Thus the system y(t) = Ax(t), where A is a constant, is time-invariant, as can be easily verified. But the AM modulation system given by y(t) = x(t)cos(Ω0t) is time-varying, since the function cos(Ω0t) is not constant.
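The remark above can be tested numerically on sampled signals: a constant-gain system commutes with a delay, while a cos(t)-gain system does not. The signals, gains, and delay below are illustrative choices:

```python
import numpy as np

t = np.linspace(0, 10, 1001)
x = np.sin(2 * np.pi * 0.5 * t)          # test input
shift = 100                               # delay of 100 samples (1 s)

def delayed(sig, k):
    """Delay a sampled signal by k samples, zero-padding the front."""
    return np.concatenate([np.zeros(k), sig[:-k]])

# y(t) = A x(t), A constant: shifting the input shifts the output -> time-invariant
A = 3.0
assert np.allclose(A * delayed(x, shift), delayed(A * x, shift))

# y(t) = cos(t) x(t): the gain itself changes with time -> time-varying
assert not np.allclose(np.cos(t) * delayed(x, shift), delayed(np.cos(t) * x, shift))
```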
The Beginnings of Radio
The names of Nikola Tesla (1856–1943) and Reginald Fessenden (1866–1932) are linked to the invention of amplitude modulation and radio [65,78,7]. Radio was initially called "wireless telegraphy" and then "wireless" [26].
Tesla was a mechanical as well as an electrical engineer, but mostly an inventor. He has been credited with significant contributions to electricity and magnetism in the late 19th and early 20th centuries. His work is the basis of the alternating-current (AC) power system and the induction motor. His "Tesla coils" for wireless communications were capable of transmitting and receiving radio signals. Although Tesla submitted a patent application for his basic radio before Guglielmo Marconi, it was Marconi who was initially given the patent for the invention of radio (1904). The Supreme Court reversed the decision in 1943, in favor of Tesla [6].
Fessenden has been called the "father of radio broadcasting." His early work on radio led to demonstrations in December 1906 of the capability of point-to-point wireless telephony, and what appear to be the first audio radio broadcasts of entertainment and music ever made to an audience (in this case shipboard radio operators in the Atlantic). Fessenden was a professor of electrical engineering at Purdue University and became, in 1893, the first chairman of the electrical engineering department of the University of Pittsburgh.
AM Communication System
Amplitude modulation (AM) communication systems arose from the need to send an acoustic signal, a "message," over the airwaves using a reasonably sized antenna to radiate it. The size of the antenna depends inversely on the highest frequency present in the message, and voice and music have relatively low frequencies. A voice signal typically has frequencies in the range of 100 Hz to about 5 kHz (the frequencies needed to make a telephone conversation intelligible), while music typically displays frequencies up to about 22 kHz. The transmission of such low-frequency signals with a practical antenna is impossible. To make the transmission possible, modulation was introduced, i.e., multiplying the message x(t) by a periodic signal such as a cosine cos(Ωct), the carrier, with a frequency Ωc much larger than those in the acoustic signal. Amplitude modulation provided the larger frequencies needed to reduce the size of the antenna. Thus y(t) = x(t)cos(Ωct) is the signal to be transmitted, and the effect of this multiplication is to change the frequency content of the input.
The AM system is clearly linear but time-varying. Indeed, if the input is x(t − τ), the message delayed τ seconds, the output would be x(t − τ)cos(Ωct), which is not y(t − τ) = x(t − τ)cos(Ωc(t − τ)) as a time-invariant system would give. Fig. 2.4 illustrates the AM transmitter and receiver. The carrier continuously changes independent of the input, and as such the system is time-varying.
FM Communication System
In comparison with an AM system, a frequency modulation (FM) system is non-linear and time-varying. An FM modulated signal is given by

y(t) = cos(Ωct + ∫−∞^t x(τ)dτ),

where x(t) is the input message.
To show the FM system is non-linear, assume we scale the message to γx(t), for some constant γ; the corresponding output is given by

cos(Ωct + γ∫−∞^t x(τ)dτ),

which is not the previous output scaled, i.e., γcos(Ωct + ∫−∞^t x(τ)dτ); thus FM is a non-linear system. Likewise, if the message is delayed or advanced, the output will not be equally delayed or advanced, so the FM system is not time-invariant.
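The scaling argument can be reproduced numerically. This sketch approximates the FM integral by a cumulative sum; the carrier, message, and scale factor γ are illustrative:

```python
import numpy as np

t = np.linspace(0, 1, 2001)
dt = t[1] - t[0]
Oc = 2 * np.pi * 50             # carrier frequency (rad/s), illustrative
x = np.sin(2 * np.pi * 5 * t)   # message, illustrative

def fm(msg):
    """FM modulator: cos(Oc*t + running integral of the message)."""
    phase = np.cumsum(msg) * dt
    return np.cos(Oc * t + phase)

# Scaling the message does NOT scale the output: FM is non-linear
gamma = 2.0
assert not np.allclose(fm(gamma * x), gamma * fm(x))
```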
Vocal System
A remarkable system that we all have is the vocal system (see Fig. 2.5). The air pushed out from the lungs in this system is directed by the trachea through the vocal cords making them vibrate and create resonances similar to those from a musical wind instrument. The generated sounds are then muffled by the mouth and the nasal cavities resulting in an acoustic signal carrying a message. Given the length of the typical vocal system, on average for adult males about 17 cm and 14 cm for adult females, it is modeled as a distributed system and represented by partial differential equations. Due to the complexity of this model, it is the speech signal along with the understanding of the speech production that is used to obtain models of the vocal system. Speech processing is one of the most fascinating areas of electrical engineering.
A typical linear time-invariant model for speech production considers segments of speech of about 20 ms, and for each a low-order LTI system is developed. The input is either a periodic pulse for the generation of voiced sounds (e.g., vowels) or a noise-like signal for unvoiced sounds (e.g., the /sh/ sound). Processing these inputs gives speech-like signals. A linear time-varying model would take into consideration the variations of the vocal system with time and it would be thus more appropriate.
Example 2.4
Consider constant linear capacitors and inductors, represented by the ordinary differential equations

iC(t) = C dvc(t)/dt,    vL(t) = L diL(t)/dt,

with initial conditions vc(0) and iL(0). Under what conditions are these time-invariant systems?
Solution: Given the duality of the capacitor and the inductor, we only need to consider one of these. Solving the ordinary differential equation for the capacitor we obtain (with zero initial voltage, vc(0) = 0):

vc(t) = (1/C)∫0^t i(τ)dτ, t ≥ 0.

Suppose then we delay the input current by λ s. The corresponding output is given by

(2.10) (1/C)∫0^t i(τ − λ)dτ = (1/C)∫−λ^0 i(σ)dσ + (1/C)∫0^(t−λ) i(σ)dσ

by changing the integration variable to σ = τ − λ. For the above equation to equal the voltage across the capacitor delayed λ s, given by

vc(t − λ) = (1/C)∫0^(t−λ) i(σ)dσ,

we need that i(t) = 0 for t < 0, so that the first integral in the right expression in Equation (2.10) is zero. Thus, the system is time-invariant if the input current i(t) = 0 for t < 0. Putting this together with the condition on the initial voltage, we can say that for the capacitor to be a linear and time-invariant system it should not be initially energized, i.e., the input i(t) = 0 for t < 0 and no initial voltage across the capacitor, vc(0) = 0. Similarly, using duality, for the inductor to be linear and time-invariant, the input voltage across the inductor must be zero for t < 0 (to guarantee time-invariance) and the initial current in the inductor must be zero (to guarantee linearity). □
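The conclusion of Example 2.4 can be verified with a sampled approximation of the capacitor integral: with zero initial voltage and a current that is zero for t < 0, delaying the current simply delays the voltage. The current waveform and delay below are illustrative:

```python
import numpy as np

C = 1.0
t = np.linspace(0, 10, 1001)
dt = t[1] - t[0]

def cap_voltage(i):
    """v(t) = (1/C) * integral of i from 0 to t (cumulative sum approximation)."""
    return np.cumsum(i) * dt / C

i = np.where(t >= 1.0, np.exp(-(t - 1.0)), 0.0)   # current, zero before t = 1 s
k = 200                                            # delay the current by 2 s
i_delayed = np.concatenate([np.zeros(k), i[:-k]])

v = cap_voltage(i)
v_from_delayed_input = cap_voltage(i_delayed)
v_shifted = np.concatenate([np.zeros(k), v[:-k]])  # original voltage, delayed 2 s

# Delayed input current -> identically delayed capacitor voltage
assert np.allclose(v_from_delayed_input, v_shifted)
```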
Example 2.5
Consider the RLC circuit in Fig. 2.6, consisting of a series connection of a resistor R, an inductor L, and a capacitor C. The switch has been open for a very long time and is closed at t = 0, so that there is no initial energy stored in either the inductor or the capacitor and the voltage applied to the elements is zero for t < 0. Obtain an equation connecting the input vs(t) and the output, the current i(t).
Solution: Because of the presence of the capacitor and the inductor, both capable of storing energy, this circuit is represented by a second-order ordinary differential equation with constant coefficients. According to Kirchhoff's voltage law:

vs(t) = Ri(t) + L di(t)/dt + (1/C)∫0^t i(τ)dτ.

To get rid of the integral we find the derivative of vs(t) with respect to t:

dvs(t)/dt = R di(t)/dt + L d²i(t)/dt² + (1/C)i(t),

a second-order ordinary differential equation, with input the voltage source vs(t) and output the current i(t). Because the circuit is not energized for t < 0, the circuit represented by the above ordinary differential equation is linear and time-invariant. □
Remark
An RLC circuit is represented by an ordinary differential equation of order equal to the number of independent inductors and capacitors in the circuit. If two or more capacitors (two or more inductors) are connected in parallel (in series), sharing the same initial voltage across them (same initial current) we can convert them into an equivalent capacitor with the same initial voltage (equivalent inductor with the same current).
A system represented by a linear ordinary differential equation of any order N, having constant coefficients, and with input x(t) and output y(t):

(2.11) aN d^N y(t)/dt^N + … + a1 dy(t)/dt + a0 y(t) = bM d^M x(t)/dt^M + … + b0 x(t),

is linear time-invariant if the system is not initially energized (i.e., the initial conditions are zero and the input is zero for t < 0). If one or more of the coefficients ak, bk are functions of time, the system is time-varying.
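As a numerical illustration of such an LTI ordinary differential equation, the series RLC equation of Example 2.5 can be simulated with SciPy's `lsim`; the element values and the source waveform are illustrative choices, not the chapter's:

```python
import numpy as np
from scipy.signal import lti, lsim

# Series RLC driven by vs(t): L i'' + R i' + i/C = vs'(t),
# so I(s)/Vs(s) = s / (L s^2 + R s + 1/C).  Illustrative element values.
R, L, C = 1.0, 0.5, 0.25
sys = lti([1.0, 0.0], [L, R, 1.0 / C])

t = np.linspace(0, 10, 2001)
vs = np.where(t >= 0.5, 1.0, 0.0)     # switch closes at t = 0.5 s

_, i_out, _ = lsim(sys, U=vs, T=t)

assert np.allclose(i_out[t < 0.5], 0.0, atol=1e-9)  # no response before the switch closes
assert abs(i_out[-1]) < 1e-2                        # capacitor blocks DC: current dies out
```

Since H(0) = 0, the capacitor blocks DC and the step-driven current decays to zero, as the final assertion checks.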
URL:
https://www.sciencedirect.com/science/article/pii/B9780128142042000120
The Z-transform
Luis F. Chaparro , Aydin Akan , in Signals and Systems Using MATLAB (Third Edition), 2019
10.6 State Variable Representation
Modern control theory uses the state-variable representation of systems, whether continuous- or discrete-time. In this section we introduce the discrete-time state-variable representation of systems. In many respects, it is very similar to the state-variable representation of continuous-time systems.
State variables are the memory of a system. In the discrete-time, just like in the continuous-time, knowing the state of a system at a present index n provides the necessary information from the past that together with present and future inputs allows us to calculate the present and future outputs of the system. The advantage of a state-variable representation over a transfer function is the inclusion of initial conditions in the analysis and the ability to use it in multiple-input and multiple-output systems.
Assume a discrete-time system is represented by a difference equation (which could have been obtained from an ordinary differential equation representing a continuous-time system), where x[n] is the input and y[n] the output:
(10.42)
where n ≥ 0 and the initial conditions of the system are the past output values y[−1], …, y[−N]. As in the continuous-time case, the state-variable representation of a discrete-time system is not unique. To obtain a state-variable representation from the difference equation in (10.42) let
(10.43)
We then obtain from the difference equation the state-variable equations:
(10.44)
and the output equation
These state and output equations can then be written in a matrix form
(10.45)
by appropriately defining the matrices A and B, the vectors c and d, as well as the state and input vectors.
Fig. 10.13 displays the block diagrams for a delay, a constant multiplier, and an adder, which are used in obtaining block diagrams for discrete-time systems. Different from the continuous-time case, the representation of discrete-time systems does not require the equivalent of integrators; instead it uses delays.
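The state and output recursions in (10.44)–(10.45) can be simulated directly, sample by sample, and compared against SciPy's `dlsim`; the matrices below are an illustrative stable second-order example, not the chapter's:

```python
import numpy as np
from scipy.signal import dlti, dlsim

# Illustrative second-order discrete-time system
A = np.array([[0.0, 1.0], [-0.5, 1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

u = np.ones((20, 1))            # step input
x = np.zeros((2, 1))            # zero initial state
y_rec = []
for n in range(20):             # y[n] = C x[n] + D u[n];  x[n+1] = A x[n] + B u[n]
    y_rec.append((C @ x + D * u[n, 0]).item())
    x = A @ x + B * u[n, 0]
y_rec = np.array(y_rec)

# SciPy's dlsim implements the same recursion
_, y_ref, _ = dlsim(dlti(A, B, C, D, dt=1.0), u)
assert np.allclose(y_rec, y_ref.ravel())
```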
Example 10.27
A continuous-time system is represented by the ordinary differential equation
To convert this ordinary differential equation into a difference equation we approximate the derivatives by
Find the difference equation when and , and find the corresponding state and output equations.
Solution: Replacing the approximations of the derivatives for and letting we obtain the difference equation
which can be realized as indicated by the block diagram in Fig. 10.14. Because the second-order difference equation is realized using two unit delays, this realization is called minimal.
Letting the outputs of the delays be the state variables
we then have the following matrix equation for the state:
The output equation is then
The state variables are not unique. Indeed we can use an invertible transformation matrix F to define a new set of state variables
with a matrix representation for the new state variables and the output given by
Example 10.28
A discrete-time system is represented by the difference equation
where is the input and the output. Obtain a state-variable representation for it.
Solution: Notice that this difference equation has as well as in the input. The transfer function corresponding to the difference equation is
i.e., it is not a "constant-numerator" transfer function. A direct realization of the system in this case will not be minimal. Indeed, a block diagram obtained from the difference equation (see Fig. 10.15) shows that three delays are needed to represent this second-order system. It is important to realize that this representation, despite being non-minimal, is a valid representation. This is different from the analogous situation in the continuous-time representation, where the input and its derivatives are present. Differentiators in the continuous-time representation are deemed not acceptable, while delays in the discrete-time representation are. □
Solution of the State and Output Equations
The solution of the state-variable equations
can be obtained recursively:
where A^0 = I, the identity matrix. The complete solution is then obtained,
(10.46)
By definition of the state variables in Equation (10.43) the initial conditions of the state variables coincide with the initial conditions of the system,
(10.47)
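The recursive solution and the closed-form sum in (10.46) can be cross-checked numerically; the system matrices, initial state, and input below are illustrative:

```python
import numpy as np

# Illustrative stable system, initial state, and input
A = np.array([[0.5, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
x0 = np.array([[1.0], [-1.0]])
N = 10
u = np.sin(np.arange(N))

# Recursion: x[n+1] = A x[n] + B u[n], starting from x[0] = x0
x = x0.copy()
for n in range(N):
    x = A @ x + B * u[n]

# Closed form as in (10.46): x[N] = A^N x0 + sum_{m=0}^{N-1} A^(N-1-m) B u[m]
x_cf = np.linalg.matrix_power(A, N) @ x0
for m in range(N):
    x_cf = x_cf + np.linalg.matrix_power(A, N - 1 - m) @ (B * u[m])

assert np.allclose(x, x_cf)
```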
Using the Z-transform we can obtain a closed-form solution to the state and output equations. Indeed, if the state and output equations are in their matrix form
(10.48)
calling the Z-transforms of the state variables Vi(z), for i = 1, …, N; of the input X(z); and of the output Y(z), we have the following matrix expression for the Z-transforms of the state equations in (10.48):

using the Z-transform property Z{v[n + 1]} = z(V(z) − v[0]), where v[0] is the vector of initial conditions of the state variables and I the unit matrix. Assuming that the inverse of (zI − A) exists, i.e., det(zI − A) ≠ 0, we can solve for V(z):
(10.49)
by expressing the inverse of a matrix in terms of its adjoint and determinant. We can then obtain the Z-transform of the output as
(10.50)
If the initial conditions are zero and the input is x[n], with Z-transform X(z), then we find that the transfer function H(z) = Y(z)/X(z) is given by
(10.51)
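The transfer-function formula in (10.51) can be verified numerically by evaluating C(zI − A)^(−1)B + D at a test point and comparing with the transfer function returned by SciPy's `ss2tf`; the state-space matrices are an illustrative example:

```python
import numpy as np
from scipy.signal import ss2tf

# Illustrative state-space model (scalar input and output)
A = np.array([[0.0, 1.0], [-0.5, 1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)

# H(z) = C (zI - A)^(-1) B + D, evaluated at an arbitrary test point
z = 1.7 + 0.3j
H_direct = (C @ np.linalg.inv(z * np.eye(2) - A) @ B + D).item()
H_tf = np.polyval(num.ravel(), z) / np.polyval(den, z)
assert np.isclose(H_direct, H_tf)
```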
Example 10.29
Consider the state-variable representation of the system in Example 10.27 with matrices:
Determine the transfer function of the system.
Solution: Instead of finding the inverse we can use Cramer's rule to find the Z-transforms of the state variables. Indeed, writing the state equations with zero initial conditions
according to Cramer's rule we have
and the Z-transform of the output is obtained after replacing the Z-transforms of the state variables:
Example 10.30
The transfer function of a LTI discrete-time system is
Obtain a minimal state-variable realization (i.e., a realization that only uses two delays, corresponding to the second-order system). Determine the initial conditions of the state variables in terms of initial values of the output.
Solution: The given transfer function is not a "constant-numerator" transfer function, and its numerator indicates the system has delayed inputs. Letting x[n] be the input and y[n] the output, with Z-transforms X(z) and Y(z), respectively, we factor H(z) as follows:
so that we have the following equations:
The corresponding block diagram in Fig. 10.16 is a minimal realization as it only uses two delays. As shown in the figure, the state variables are defined as
The state and output equations are in matrix form
Expressing the state variables in terms of the output:
we then have the initial conditions of the state variables in terms of y[−1] and y[−2]. The reason for these initial conditions is that in the difference equation
the input is delayed two samples, and so the initial conditions should be given by y[−1] and y[−2]. □
Example 10.31
Consider the state-variable representation of a system having the matrices
Determine the zero-input response of the system for arbitrary initial conditions .
Solution: The zero-input response in the time-domain is
requiring one to compute the matrix A^n. To do so, consider
which can be verified by pre-multiplying the above equation by (I − Az^(−1)), giving
or that the infinite summation Σ A^n z^(−n) is the inverse of (I − Az^(−1)). In positive powers of z, the Z-transform of A^n u[n] is z(zI − A)^(−1).
Now, using the fact that for a matrix the inverse can be written as its adjoint divided by its determinant,
provided that the determinant of the matrix det(zI − A) ≠ 0, we then have
We then need to determine the inverse Z-transform of the four entries to find A^n. If we let
then
which is used in finding the zero-input response
Canonical Realizations
Just like in the continuous case, discrete-time state-variable realizations have different canonical forms. In this section we will illustrate the process for obtaining a parallel realization to demonstrate the similarity.
The state-variable parallel realization is obtained by realizing each of the partial fraction expansion components of a transfer function. Consider the case of simple poles of the system, so that the transfer function is
so that
The state variable for or would be so that
For the whole system we then have
and the output is
or in matrix form
In general, whenever the transfer function is factored in terms of first- and second-order systems, with real coefficients, a parallel realization is obtained by doing a partial fraction expansion in terms of first- and second-order components and realizing each of these components. The following example illustrates the procedure.
Example 10.32
Obtain a parallel state-variable realization from the transfer function
Show a block realization of the corresponding state and output equations.
Solution: If we obtain a partial fraction expansion of the form
where
and then
so that and . This allows us to write the output as
or for two difference equations
for which we need to obtain state variables. Minimal realizations can be obtained as follows. The first system is a "constant-numerator" system, in terms of negative powers of z, and a minimal realization is obtained directly. The transfer function of the second difference equation can be written
from which we obtain the equations
Notice that the realization of these equations only requires two delays. If we let
we get
so that
The following script shows how to obtain the state-variable representation for the two components of H(z) using the function tf2ss. The second part of the script shows that by using the function ss2tf we can get back the two components of H(z). Notice that these are the same functions used in the continuous case. Fig. 10.17 shows the block diagram of the parallel realization. □
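The MATLAB script itself is not reproduced in this excerpt; the following Python sketch performs the analogous computation with SciPy's `residuez`, `tf2ss`, and `ss2tf` (the transfer function below is an illustrative one with simple real poles, not the chapter's):

```python
import numpy as np
from scipy.signal import residuez, tf2ss, ss2tf

# Illustrative H(z) in negative powers of z, with simple real poles
b = [1.0, 0.5]                   # numerator 1 + 0.5 z^-1
a = [1.0, -0.75, 0.125]          # denominator (1 - 0.5 z^-1)(1 - 0.25 z^-1)

r, p, k = residuez(b, a)         # partial fractions: H(z) = sum_i r_i / (1 - p_i z^-1)

# State-space realization of each first-order parallel branch
branches = [tf2ss([ri.real], [1.0, -pi.real]) for ri, pi in zip(r, p)]

# ss2tf recovers each branch's transfer function
for (A, B, C, D), ri, pi in zip(branches, r, p):
    num, den = ss2tf(A, B, C, D)
    assert np.allclose(den, [1.0, -pi.real])

# The parallel branches must add back up to H(z); check at a test point z^-1 = 0.9
zinv = 0.9
H = np.polyval(b[::-1], zinv) / np.polyval(a[::-1], zinv)
H_par = sum(ri / (1 - pi * zinv) for ri, pi in zip(r, p))
assert np.isclose(H, H_par)
```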
URL:
https://www.sciencedirect.com/science/article/pii/B9780128142042000211
Mathematical Preliminaries
Erdal Kayacan , Mojtaba Ahmadieh Khanesar , in Fuzzy Neural Networks for Real Time Control Applications, 2016
1.4 Stability Analysis
To prove the stability of a nonlinear ordinary differential equation, Lyapunov stability theory is the most powerful and widely applicable method. To investigate the stability of a system using this method, a positive, energy-like function of the states, called a Lyapunov function, is considered along the trajectories of the differential equation. A Lyapunov function decreases along the trajectories of the ODE.
Let the ordinary differential equation of the system be of the following form:
(1.19)
where x is the state vector of the system. The following theorem holds:
Theorem 1.1
(Stability of Continuous Time Systems [4]) Let x = 0 be an equilibrium point and D be a domain containing x = 0. Let V : D → R be a continuously differentiable function such that:

(1.20) V(0) = 0 and V(x) > 0 in D − {0}

and

(1.21) V̇(x) ≤ 0 in D,

then x = 0 is stable. Moreover, if:

(1.22) V̇(x) < 0 in D − {0},

then x = 0 is asymptotically stable.

It is also possible to investigate the stability of a discrete time difference equation using Lyapunov theory. Let the discrete time difference equation of a system be of the following form:
(1.23)
where x is the state vector of the system. The following theorem holds:
Theorem 1.2
(Stability of Discrete Time Systems [5]) Let x = 0 be an equilibrium point and D be a domain containing x = 0. Let V : D → R be a continuous function such that

(1.24) V(0) = 0, V(x) > 0 in D − {0}, and ΔV(x) = V(f(x)) − V(x) ≤ 0 in D,

then x = 0 is stable. Moreover, if:

(1.25) ΔV(x) < 0 in D − {0},

then x = 0 is asymptotically stable.
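For a linear discrete-time system x[k+1] = Ax[k], a quadratic Lyapunov function V(x) = xᵀPx satisfying Theorem 1.2 can be found by solving the discrete Lyapunov equation; the matrix A below is an illustrative stable example:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.2], [-0.1, 0.7]])   # eigenvalues 0.6 +/- 0.1j, inside the unit circle
Q = np.eye(2)
P = solve_discrete_lyapunov(A.T, Q)       # solves A^T P A - P = -Q

# V(x) = x^T P x is a valid Lyapunov function: positive definite ...
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)

# ... and strictly decreasing along every nonzero trajectory x[k+1] = A x[k]
x = np.array([1.0, -2.0])
for _ in range(5):
    x_next = A @ x
    assert x_next @ P @ x_next < x @ P @ x
    x = x_next
```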
URL:
https://www.sciencedirect.com/science/article/pii/B9780128026878000013
22nd European Symposium on Computer Aided Process Engineering
Nina P.G. Salau , ... Argimiro R. Secchi , in Computer Aided Chemical Engineering, 2012
2 Formulation and Solution of the Estimation Problem
Consider the following nonlinear dynamic and continuous-time system with discrete-time measurements to be used in the state estimation formulations:
(1)
where u denotes the deterministic inputs, x denotes the states, and y denotes the measurements. The process-noise vector and the measurement-noise vector, v_k, are assumed to be white Gaussian random processes with zero mean and covariances Q and R_k, respectively. The Hybrid Extended Kalman Filter formulation uses a continuous nonlinear model for state estimation, linearized models of the nonlinear system for state covariance estimation, and discrete measurements. This is often referred to as the continuous-discrete extended Kalman filter [5]. Here, the system is linearized at each time step to obtain the local state-space matrices as below:
(2)
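A minimal sketch of one predict-update cycle of such a continuous-discrete EKF, with an illustrative scalar model, Euler integration of the state and covariance between measurements, and invented noise levels (none of these choices come from the paper):

```python
import numpy as np

# Continuous-discrete EKF sketch for x' = f(x) + w, y_k = h(x_k) + v_k.
# The model f, h and the noise levels Q, R are illustrative.
def f(x):  return np.array([-0.5 * x[0] ** 3])     # nonlinear drift
def F(x):  return np.array([[-1.5 * x[0] ** 2]])   # Jacobian df/dx (local linearization)
def h(x):  return np.array([x[0]])                 # measure the state directly
H = np.array([[1.0]])                              # Jacobian dh/dx

Q, R = 0.01, 0.04                                  # process / measurement noise covariances
dt, steps = 0.01, 100
x_hat, P = np.array([1.0]), np.array([[1.0]])

# Predict: propagate estimate and linearized covariance over the sampling interval
for _ in range(steps):
    A = F(x_hat)                                   # local state-space matrix
    x_hat = x_hat + dt * f(x_hat)
    P = P + dt * (A @ P + P @ A.T + Q * np.eye(1))

# Update with a discrete measurement y_k
y_k = 0.6
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x_hat = x_hat + K @ (y_k - h(x_hat))
P = (np.eye(1) - K @ H) @ P

assert 0.0 < P[0, 0] < 1.0    # covariance stays positive and shrinks from its prior
```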
URL:
https://www.sciencedirect.com/science/article/pii/B9780444595201501135
Techniques in Discrete-Time Stochastic Control Systems
Gianni Ferretti , ... Riccardo Scattolini , in Control and Dynamic Systems, 1995
C OTHER ALGORITHMS
The identification of the parameters of the continuous-time system is dealt with directly by Zhao et al. [29], considering the following model in the continuous-time domain:
(15)
Performing a multiple integration on both sides of (15) and approximating the continuous-time integration as a function of the sampled measurements through the linear integral filter defined in [29, 30], yields:
where
l is the length factor of the linear integral filter; the coefficients p_ji are defined [30] according to the adopted formula of numerical integration (trapezoidal, Simpson, etc.); and τ* = τ/h = d′ + ε. The estimation of the time delay τ = τ*h and of the parameters α_i, β_i is then performed by minimizing, via Newton's algorithm, the following LS performance criterion:
Some bias-eliminating techniques are also proposed by Zhao et al. in [29], in order to deal with noisy measurements.
Finally, a rather complex algorithm, based on a Bayesian approach and on the estimation of a set of different models, each one related to a different value of the delay, has been proposed by Juricic [31].
URL:
https://www.sciencedirect.com/science/article/pii/S0090526705800076
From the Ground Up!
Luis F. Chaparro , Aydin Akan , in Signals and Systems Using MATLAB (Third Edition), 2019
0.5.4 The Phasor Connection
The fundamental property of a circuit made up of constant-valued resistors, capacitors, and inductors is that its steady-state response to a sinusoid is also a sinusoid of the same frequency [23,21,14]. The circuit only changes the magnitude and phase of the input sinusoid, in a way that depends on the frequency of that sinusoid. This is due to the linear and time-invariant nature of the circuit. As we will see in Chapters 3, 4, 5, 10 and 11, this behavior can be generalized to more complex continuous-time as well as discrete-time systems.
To illustrate the connection of phasors with dynamic systems consider the RC circuit in Fig. 0.7. If the input to the circuit is a sinusoidal voltage source and the voltage across the capacitor is the output of interest, the circuit can easily be represented by a first-order ordinary differential equation
Assume that the steady-state response of this circuit (i.e., the response as t → ∞) is also a sinusoid
of the same frequency as the input, but with amplitude C and phase ψ to be determined. Since this response must satisfy the ordinary differential equation, we have
Comparing the two sides of the above equation gives
and for a steady-state response
Comparing the steady-state response with the input sinusoid, we see that they both have the same frequency, but the amplitude and phase of the input are changed by the circuit depending on the frequency Ω0 of the source. Since at each frequency the circuit responds differently, obtaining the frequency response of the circuit is useful not only in analysis but also in the design of circuits.
The above sinusoidal steady-state response can also be obtained using phasors. Expressing the steady-state response of the circuit as
where is the corresponding phasor for we find that
Replacing , , obtained above, and
in the ordinary differential equation we obtain
so that
and the sinusoidal steady-state response is
which coincides with the response obtained above. The ratio of the output phasor to the input phasor
gives the response of the circuit at the frequency Ω0. If the frequency of the input is a generic Ω, changing Ω0 above for Ω gives the frequency response for all possible frequencies.
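The phasor computation can be cross-checked against a time-domain simulation: after the transient dies out, the capacitor voltage matches |H(jΩ0)|cos(Ω0t + ∠H(jΩ0)). The values of R, C, and Ω0 below are illustrative choices:

```python
import numpy as np
from scipy.signal import lti, lsim

R, C = 1.0, 1.0
sys = lti([1.0], [R * C, 1.0])           # RC circuit: H(s) = 1 / (RC s + 1)

Omega0 = 2.0
t = np.linspace(0, 30, 30001)
vi = np.cos(Omega0 * t)                  # sinusoidal voltage source
_, vc, _ = lsim(sys, U=vi, T=t)

# Phasor prediction: H(jOmega0) scales the amplitude and shifts the phase
H = 1.0 / (1.0 + 1j * Omega0 * R * C)
vc_phasor = np.abs(H) * np.cos(Omega0 * t + np.angle(H))

# After the transient e^(-t/RC) dies out, simulation and phasor solution agree
assert np.allclose(vc[-2000:], vc_phasor[-2000:], atol=1e-3)
```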
The concepts of linearity and time-invariance will be used in both continuous as well as discrete-time systems, along with the Fourier representation of signals in terms of sinusoids or complex exponentials, to simplify the analysis and to allow the design of systems. Thus, transform methods such as Laplace and the Z-transform will be used to solve differential and difference equations in an algebraic setup. Fourier representations will provide the frequency perspective. This is a general approach for both continuous and discrete-time signals and systems. The introduction of the concept of transfer function will provide tools for the analysis as well as the design of linear time-invariant systems. The design of analog and discrete filters is the most important application of these concepts. We will look into this topic in Chapters 5, 7 and 12.
URL:
https://www.sciencedirect.com/science/article/pii/B9780128142042000090
Signals and Systems
Rashid Ansari , Lucia Valbonesi , in The Electrical Engineering Handbook, 2005
Invertibility
An invertible system is one in which an arbitrary input to the system can be uniquely inferred from the corresponding output of the system. The cascade of the system and its inverse, if it exists, is an identity system (i.e., the output of the cascade is identical to the input).
All properties so far were defined for discrete-time systems. Equivalent properties can be defined for continuous-time systems by requiring output to satisfy the following conditions, with notation analogous to that in the discrete-time case:
- Linearity: T(ax1(t) + bx2(t)) = ay1(t) + by2(t).
- Shift invariance: T(x(t − t0)) = y(t − t0).
- Causality: y(t) is a function of x(τ) for −∞ < τ ≤ t.
- BIBO stability: |x(t)| ≤ BI for all t ⇒ |y(t)| ≤ BO for all t.
URL:
https://www.sciencedirect.com/science/article/pii/B978012170960050061X
Source: https://www.sciencedirect.com/topics/computer-science/continuous-time-system