COMMUNICATION SYSTEMS
By the Same Author

SIGNALS, SYSTEMS AND COMMUNICATION (1965)
COMMUNICATION SYSTEMS

B. P. LATHI
Professor of Electrical Engineering
Bradley University

John Wiley & Sons, Inc.
New York · London · Sydney
Copyright © 1968 by John Wiley & Sons, Inc.

All Rights Reserved. Reproduction or translation of any part of this work beyond that permitted by Sections 107 or 108 of the 1976 United States Copyright Act without the permission of the copyright owner is unlawful. Requests for permission or further information should be addressed to the Permissions Department, John Wiley & Sons, Inc.

ISBN 0 471 51832 1

Library of Congress Catalog Card Number: 68-11008

Printed in the United States of America
Preface
The purpose of this book is to introduce the student to communication systems and the broad principles of modern communication theory at an early stage in the undergraduate curriculum. It begins with the study of specific communication systems and gradually develops the underlying role of the signal-to-noise ratio and the bandwidth in limiting the rate of information transmission. Since the book is intended for an introductory course, it was necessary to ignore many of the finer points regarding the power density spectra of random processes. The student is introduced to the concept of the power density spectrum of nonrandom signals. This concept is then extended to random signals without any formal development. A rigorous treatment of random processes is deemed unnecessarily distracting in such an introductory course, for it would defeat its very purpose. After completing this course, a student can then fruitfully undertake a rigorous course in communication theory using statistical concepts.

Throughout the book, the stress is on a physical appreciation of the concepts rather than mathematical manipulation. In this respect the book closely follows the philosophy of my earlier book, Signals, Systems and Communication. Wherever possible, the concepts and results are interpreted intuitively. The basic concepts of information theory are not introduced as axioms but are developed heuristically.
Communication Systems can be used for a semester or a quarter by judiciously choosing the topics. Any of the following four combinations of chapters will form a well-balanced first course in communication systems.
1-2-3-4-5-6-7-8-9
Other combinations will no doubt prove suitable in some cases. Chapter 1 (Signal Analysis) is essentially a review. The Fourier series is introduced as a representation of a signal in orthogonal signal space. This is done because of the growing importance of the geometrical representation of signals in communication theory. This aspect, however, is not essential for the material covered in this book. Thus the student may skip the first 30 pages (Sections 1.1 through 1.3). The book is self-contained and there are no prerequisites whatsoever. No knowledge of probability theory is assumed on the part of the students. The modicum of probability theory that is required in Chapter 9 (on digital communication) is developed in that chapter.

I would like to thank Mr. Ivar Larson for assisting me in proofreading, Professors J. L. Jones and R. B. Marxheimer for helpful suggestions, and Professor Philip Weinberg, the department head, for making available to me the time to complete this book. I am also pleased to acknowledge the assistance of Mrs. Evelyn Kahrs for typing the manuscript.

B. P. Lathi
Peoria, Illinois
January, 1968
Contents

1 SIGNAL ANALYSIS
1.1 Analogy between Vectors and Signals 3
1.2 Some Examples of Orthogonal Functions 21
1.3 Representation of a Periodic Function by the Fourier Series over the Entire Interval (−∞ < t < ∞) 29
1.4 The Complex Fourier Spectrum 30
1.5 Representation of an Arbitrary Function over the Entire Interval (−∞, ∞): The Fourier Transform 36
1.6 Some Remarks about the Continuous Spectrum Function 40
1.7 Time-Domain and Frequency-Domain Representation of a Signal 42
1.8 Existence of the Fourier Transform 43
1.9 Fourier Transforms of Some Useful Functions 44
1.10 Singularity Functions 46
1.11 Fourier Transforms Involving Impulse Functions 52
1.12 Some Properties of the Fourier Transform 63
1.13 Some Convolution Relationships 82
1.14 Graphical Interpretation of Convolution 83
1.15 Convolution of a Function with a Unit Impulse Function 86
1.16 The Sampling Theorem 89

2 TRANSMISSION OF SIGNALS AND POWER DENSITY SPECTRA 111
2.1 Signal Transmission through Linear Systems 111
2.2 The Filter Characteristic of Linear Systems 113
2.3 Distortionless Transmission 115
2.4 Ideal Filters 117
2.5 Causality and Physical Realizability: The Paley-Wiener Criterion 120
2.6 Relationship between the Bandwidth and the Rise Time 122
2.7 The Energy Density Spectrum 125
2.8 The Power Density Spectrum 130

3 COMMUNICATION SYSTEMS: AMPLITUDE MODULATION 148
3.1 Frequency Division Multiplexing and Time Division Multiplexing 149
3.2 Amplitude Modulation: Suppressed Carrier Systems (AM-SC) 150
3.3 Amplitude Modulation with Large Carrier Power (AM) 167
3.4 Single Sideband Transmission (SSB) 178
3.5 Effects of Frequency and Phase Errors in Synchronous Detection 186
3.6 Carrier Reinsertion Techniques of Detecting Suppressed Carrier Signals 191
3.7 Comparison of Various AM Systems 195
3.8 Vestigial Sideband Transmission 196
3.9 Frequency Division Multiplexing 200

4 COMMUNICATION SYSTEMS: ANGLE MODULATION 210
4.1 Narrowband FM 214
4.2 Wideband FM 216
4.3 Multiple Frequency Modulation 223
4.4 Square Wave Modulation 225
4.5 Linear and Nonlinear Modulation 228
4.6 Some Remarks on Phase Modulation 229
4.7 Power Contents of the Carrier and the Sidebands in Angle-Modulated Carriers 230
4.8 Noise-Reduction Characteristics of Angle Modulation 231
4.9 Generation of FM Signals 232
4.10 Demodulation of FM Signals 236

5 COMMUNICATION SYSTEMS: PULSE MODULATION 241
5.1 Pulse-Amplitude Modulation 241
5.2 Other Forms of Pulse Modulation 251
5.3 Time Division Multiplexing 254
5.4 Bandwidth Required for Transmission of PAM Signals 256
5.5 Comparison of Frequency Division Multiplexed and Time Division Multiplexed Systems 259

6 NOISE 265
6.1 Shot Noise 265
6.2 Thermal Noise 274
6.3 Noise Calculations: Single Noise Source 279
6.4 Multiple Noise Sources: Superposition of Power Spectra 281
6.5 Equivalent Noise Bandwidth 287
6.6 Noise Figure of an Amplifier 288
6.7 Experimental Determination of a Noise Figure 298
6.8 Power Density and Available Power Density 300
6.9 Effective Noise Temperature 303
6.10 Noise Figure in Terms of Available Gain 303
6.11 Cascaded Stages 306
6.12 The Cascode Amplifier
Appendix. Proof of the Generalized Nyquist Theorem 311

7 PERFORMANCE OF COMMUNICATION SYSTEMS 318
7.1 Bandpass Noise Representation 318
7.2 Noise Calculations in Communication Systems 325
7.3 Noise in Amplitude-Modulated Systems 326
7.4 Noise in Angle-Modulated Systems 335
7.5 Noise in Pulse-Modulated Systems 349
7.6 Comparison of Coded and Uncoded Systems 362
Appendix A. Justification for Calculating Output Signal and Noise Power Individually in FM 363
Appendix B. Signal-to-Noise Ratio in Time Division Multiplexed PAM Systems 365

8 INTRODUCTION TO INFORMATION TRANSMISSION 372
8.1 Measure of Information 372
8.2 Channel Capacity 378
8.3 Transmission of Continuous Signals 381
8.4 Exchange of Bandwidth for Signal-to-Noise Ratio 383
8.5 Efficiency of PCM Systems 387
Appendix. Information for Nonequiprobable Messages 390

9 ELEMENTS OF DIGITAL COMMUNICATION 393
9.1 Detection of Binary Signals: The Matched Filter 394
9.2 Decision Threshold in a Matched Filter 400
9.3 Amplitude Shift Keying (ASK) 409
9.4 Phase Shift Keying (PSK) 411
9.5 Frequency Shift Keying (FSK) 414
9.6 Some Comments on Matched Filter Detection 419
Appendix A. Schwarz Inequality 421

BIBLIOGRAPHY 425

INDEX 427
Chapter 1

Signal Analysis
There are numerous ways of communicating. Two people may communicate with each other through speech, gestures, or graphical symbols. In the past, communication over a long distance was accomplished by such means as drumbeats, smoke signals, carrier pigeons, and light beams. More recently, these modes of long-distance communication have been virtually superseded by communication by electrical signals. This is because electrical signals can be transmitted over a much longer distance (theoretically, any distance in the universe) and with a very high speed (about 3 × 10⁸ meters per second). In this book, we are concerned strictly with the latter mode, that is, communication by electrical signals.

The engineer is chiefly concerned with efficient communication. This involves the problem of transmitting messages as fast as possible with the least error. We shall treat these aspects quantitatively throughout this book. It is, however, illuminating to discuss qualitatively the factors that limit the rate of communication. For convenience, we shall consider the transmission of symbols (such as the alphanumeric symbols of the English language) by certain electrical waveforms. In the process of transmission, these waveforms are contaminated by omnipresent noise signals, which are generated by numerous natural and man-made events. Man-made events such as faulty contact switches, the turning on and off of electrical equipment, ignition radiation, and
fluorescent lighting continuously radiate random noise signals. Natural phenomena such as lightning, electrical storms, the sun's radiation, and intergalactic radiation are the sources of noise signals. Fluctuation noise such as thermal noise in resistors and shot noise in active devices is also an important source of noise in all electrical systems. When the message-bearing signals are transmitted over a channel, they are corrupted with random noise signals and may consequently become unidentifiable at the receiver. To avoid this difficulty, it is necessary to increase the power of the message-bearing waveforms. A certain ratio of signal power to noise power must be maintained. This ratio, S/N, is an important parameter in evaluating the performance of a system.

We shall now consider increasing the speed of transmission by compressing the waveforms in time scale so that we can transmit more messages during a given period. When the signals are compressed, their variations are rapid; that is, they wiggle faster. This naturally increases their frequencies. Hence compressing a signal gives rise to the problem of transmitting signals of higher frequencies. This necessitates an increased bandwidth of the channel over which the messages are transmitted. Thus the rate of communication can be increased by increasing the channel bandwidth.

In general, therefore, for faster and more accurate communication, it is desirable to increase S/N, the signal-to-noise power ratio, and the channel bandwidth. These conclusions are arrived at by qualitative reasoning and are hardly surprising. What is surprising, however, is that the bandwidth and the signal-to-noise ratio can be exchanged. We shall show later that to maintain a given rate of communication with a given accuracy, we can exchange the S/N ratio for the bandwidth, and vice versa. One may reduce the bandwidth if he is willing to increase the S/N ratio. On the other hand, a small S/N ratio may be adequate if the bandwidth of the channel is increased correspondingly. This is expressed by the Shannon-Hartley law,

C = B log(1 + S/N)

where C is the channel capacity or the rate of message transmission (to be discussed later), and B is the bandwidth of the channel (in Hz). For a given C, we may increase B and reduce S/N, and vice versa. In order to study communication systems we must be familiar with various ways of representing signals. We shall devote this chapter to signal analysis.
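The bandwidth-S/N exchange implied by the Shannon-Hartley law is easy to tabulate numerically. The following Python sketch is a modern illustration with assumed values (base-2 logarithms are used so that C comes out in bits per second); it computes C and then the S/N required to carry the same capacity when the bandwidth is halved:

```python
import math

def capacity(bandwidth_hz, snr):
    # Shannon-Hartley law: C = B log2(1 + S/N), in bits per second.
    return bandwidth_hz * math.log2(1 + snr)

def snr_required(capacity_bps, bandwidth_hz):
    # Invert the law: S/N = 2^(C/B) - 1.
    return 2 ** (capacity_bps / bandwidth_hz) - 1

C = capacity(3000, 1000)               # a 3 kHz channel at S/N = 1000 (30 db)
half_band_snr = snr_required(C, 1500)  # S/N needed if B is halved
```

Halving the bandwidth roughly squares the required (1 + S/N): the 30 db channel above needs about 60 db of S/N to carry the same capacity in half the band.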
1.1 ANALOGY BETWEEN VECTORS AND SIGNALS
A problem is better understood or better remembered if it can be associated with some familiar phenomenon. Therefore we always search for analogies when studying a new problem. In the study of abstract problems, similarities are very helpful, particularly if the problem can be shown to be analogous to some concrete phenomenon. It is then easy to gain some insight into the new problem from the knowledge of the corresponding phenomenon. Fortunately, there is a perfect analogy between vectors and signals which leads to a better understanding of signal analysis. We shall now briefly review the properties of vectors.
Vectors

A vector is specified by magnitude and direction. We shall denote all vectors by boldface type and their magnitudes by lightface type; for example, A is a certain vector with magnitude A. Consider two vectors V1 and V2, as shown in Fig. 1.1. Let the component of V1 along V2 be given by C12V2. How do we interpret physically the component of one vector along the other vector? Geometrically, the component of a vector V1 along the vector V2 is obtained by drawing a perpendicular from the end of V1 on the vector V2, as shown in Fig. 1.1. The vector V1 can now be expressed in terms of vector V2:

V1 = C12V2 + Ve   (1.1a)

However, this is not the only way of expressing vector V1 in terms of vector V2. Figure 1.2 illustrates two of the infinite alternate possibilities. Thus, in Fig. 1.2a,
V1 = C1V2 + Ve1   (1.1b)

and in Fig. 1.2b,

V1 = C2V2 + Ve2   (1.1c)
In each representation, V1 is represented in terms of V2 plus another vector, which will be called the error vector. If we are asked to approximate the vector V1 by a vector in the direction of V2, then Ve represents the error in this approximation. For example, in Fig. 1.1, if we approximate V1 by C12V2, then the error in the approximation is Ve. If V1 is
Figure 1.2
approximated by C1V2, as in Fig. 1.2a, then the error is given by Ve1, and so on. What is so unique about the representation in Fig. 1.1? It is immediately evident from the geometry of these figures that the error vector is smallest in Fig. 1.1. We can now formulate a quantitative definition of a component of a vector along another vector. The component of a vector V1 along the vector V2 is given by C12V2, where C12 is chosen such that the error vector is minimum.

Let us now interpret physically the component of one vector along another. It is clear that the larger the component of a vector along the other vector, the more closely do the two vectors resemble each other in their directions, and the smaller is the error vector. If the component of a vector V1 along V2 is C12V2, then the magnitude of C12 is an indication of the similarity of the two vectors. If C12 is zero, then the vector has no component along the other vector, and hence the two vectors are mutually perpendicular. Such vectors are known as orthogonal vectors. Orthogonal vectors are thus independent vectors. If the vectors are orthogonal, then the parameter C12 is zero. For convenience, we define the dot product of two vectors A and B as
A · B = AB cos θ

where θ is the angle between vectors A and B. It follows from the definition that

A · B = B · A

According to this notation,

the component of A along B = A cos θ = (A · B)/B

and

the component of B along A = B cos θ = (A · B)/A
Similarly, the component of V1 along V2 is

C12V2 = V1 cos θ = (V1 · V2)/V2

Therefore

C12 = (V1 · V2)/(V2 · V2)   (1.2)

Note that if V1 and V2 are orthogonal, then

V1 · V2 = 0   and   C12 = 0   (1.3)
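Equations 1.2 and 1.3 can be verified directly with a small numerical sketch (Python; the example vectors are hypothetical):

```python
def dot(a, b):
    # Dot product of two vectors given as coordinate lists.
    return sum(x * y for x, y in zip(a, b))

def c12(v1, v2):
    # Eq. 1.2: coefficient of the component of v1 along v2.
    return dot(v1, v2) / dot(v2, v2)

v1 = [3.0, 4.0]
v2 = [1.0, 0.0]
coeff = c12(v1, v2)          # component of v1 along the x-axis -> 3.0
orth = c12([0.0, 5.0], v2)   # perpendicular vectors -> 0 (Eq. 1.3)
```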
Signals

The concept of vector comparison and orthogonality can be extended to signals.* Let us consider two signals, f1(t) and f2(t). Suppose we want to approximate f1(t) in terms of f2(t) over a certain interval (t1 < t < t2) as follows:

f1(t) ≈ C12 f2(t)   for t1 < t < t2   (1.4)
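By analogy with Eq. 1.2, one expects the best coefficient to be C12 = ∫ f1 f2 dt / ∫ f2² dt taken over the interval. The sketch below (Python; the square-wave example and the integral form of the coefficient are supplied here for illustration, not quoted from the text) evaluates this coefficient numerically:

```python
import math

def c12_signal(f1, f2, t1, t2, n=100000):
    # Integral analog of Eq. 1.2: C12 = ∫ f1 f2 dt / ∫ f2^2 dt over (t1, t2),
    # evaluated here by simple midpoint-rule numerical integration.
    dt = (t2 - t1) / n
    num = den = 0.0
    for k in range(n):
        t = t1 + (k + 0.5) * dt
        num += f1(t) * f2(t) * dt
        den += f2(t) ** 2 * dt
    return num / den

def square(t):
    # One period of a square wave: +1 on (0, pi), -1 on (pi, 2*pi).
    return 1.0 if t < math.pi else -1.0

coeff = c12_signal(square, math.sin, 0.0, 2.0 * math.pi)
```

The coefficient comes out close to 4/π, the amplitude of the fundamental sine component of a square wave, which foreshadows the Fourier series introduced later in this chapter.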
Similarly, it can be seen from Eq. 2.24c and Fig. 7.3 that Sns(ω), the power density spectrum of ns(t), is identical to Snc(ω):

Sns(ω) = Snc(ω)   (7.8a)

* This result is true only if n(t) is a random signal. If n(t) is not a random signal, there is a possibility of additional spectrum around ω = 0.
If A + f(t) ≫ nc(t) and ns(t), the resultant E(t) in this case can be approximated by A + f(t) + nc(t)
Figure 7.9

as shown in Fig. 7.9. Thus
E(t) ≈ A + f(t) + nc(t)   and   ψ(t) ≈ 0
We come to the same conclusion analytically. If A + f(t) ≫ nc(t) and ns(t), then Eq. 7.28a can be approximated as

E(t) = √{[A + f(t) + nc(t)]² + ns²(t)}
     ≈ [A + f(t)]√{1 + 2nc(t)/[A + f(t)]}
     ≈ [A + f(t)]{1 + nc(t)/[A + f(t)]}
     = A + f(t) + nc(t)

It is evident from this equation that the useful signal in the output is f(t) and the noise is nc(t). Hence
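The chain of approximations above can be spot-checked numerically. The sketch below (Python; the instantaneous values of A + f(t), nc(t), and ns(t) are hypothetical) compares the exact envelope with the approximation A + f(t) + nc(t):

```python
import math

def exact_envelope(a_plus_f, nc, ns):
    # Exact envelope of the in-phase term A + f(t) + nc(t) and the
    # quadrature term ns(t).
    return math.sqrt((a_plus_f + nc) ** 2 + ns ** 2)

a_plus_f = 10.0       # A + f(t) at some instant (hypothetical value)
nc, ns = 0.1, -0.08   # small quadrature noise samples (hypothetical)

exact = exact_envelope(a_plus_f, nc, ns)
approx = a_plus_f + nc                 # the small-noise approximation
rel_err = abs(exact - approx) / exact  # tiny when A + f(t) dominates
```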
So = f²(t)   (7.29a)

and

No = nc²(t) = n²(t)   (7.29b)
Using Eqs. 7.25a, 7.25b, 7.29a, and 7.29b, we get

(So/No)/(Si/Ni) = 2f²(t)/[A² + f²(t)]   (7.30)

The improvement ratio increases as A is reduced. But for the envelope detector, A cannot be reduced below |f(t)|max:
A ≥ |f(t)|max

* This statement should be interpreted with some caution. Since the signals nc(t) and ns(t) are random signals with some amplitude distribution, there will be instances when nc(t) and ns(t) will be greater than A + f(t). However, if A + f(t) is much larger, such instances will be rare. A correct statement would be: A + f(t) ≫ nc(t) and ns(t) most of the time.
It can be easily seen that the output signal-to-noise power ratio in AM is maximum for the highest possible degree of modulation (100% modulation). For the special case when f(t) is a sinusoidal signal, the amplitude of f(t) is A for 100% modulation. Hence

f²(t) = A²/2

and

(So/No)/(Si/Ni) = 2/3

Thus the maximum improvement in the signal-to-noise power ratio that can be achieved in this case is 2/3.

If synchronous detection is used for the demodulation of AM with large carrier, the results are identical to those obtained for the envelope detector. This can be easily seen from the fact that Si and Ni, the input signal and noise powers, are identical in both cases:

Si/Ni = [A² + f²(t)]/[2n²(t)]

The synchronous detector multiplies the incoming signal fi(t) by cos ωct. Hence eo(t), the output, is given by

eo(t) = fi(t) cos ωct

Substituting Eq. 7.26 for fi(t) and eliminating the terms with spectra at 2ωc, we get the final output eo(t),

eo(t) = ½[A + f(t) + nc(t)]   (7.31)

The output contains the useful signal ½f(t) and the noise ½nc(t). Hence

So = ¼f²(t)

and

No = ¼nc²(t) = ¼n²(t)

Thus

So/No = f²(t)/n²(t)   (7.32)

and

(So/No)/(Si/Ni) = 2f²(t)/[A² + f²(t)]   (7.33)
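Equation 7.33 is easy to evaluate for particular modulation depths. The sketch below (Python; the amplitudes are hypothetical) reproduces the 2/3 figure for 100% sinusoidal modulation and shows that a smaller modulation depth gives a smaller improvement ratio:

```python
def am_improvement(mean_sq_f, carrier_amp):
    # Eq. 7.33: (So/No)/(Si/Ni) = 2 f^2(t) / (A^2 + f^2(t)),
    # where mean_sq_f is the mean-square message power f^2(t).
    return 2 * mean_sq_f / (carrier_amp ** 2 + mean_sq_f)

A = 1.0
ratio_full_mod = am_improvement(A ** 2 / 2, A)          # 100% tone modulation
ratio_half_mod = am_improvement((0.5 * A) ** 2 / 2, A)  # 50% tone modulation
```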
It is therefore obvious that for AM, when the noise is small compared to the signal, the performance of the envelope detector is identical to that of the synchronous detector. Note that in deriving Eq. 7.33 we made no assumption regarding the relative magnitudes of the signal and noise. Hence Eq. 7.33 is valid for all noise conditions for synchronous demodulation.

b. Large Noise Case. Next we consider the performance of the envelope detector in AM with large noise, n(t) ≫ [A + f(t)]. This implies that nc(t) and ns(t) ≫ [A + f(t)]. Under these conditions, Eq. 7.28a becomes
E(t) = √{R²(t) + [A + f(t)]² + 2R(t)[A + f(t)] cos θ(t)}   (7.34)
where R(t) and θ(t) are the envelope and the phase of n(t), as given in Eqs. 7.13a and 7.13b:

R(t) = √[nc²(t) + ns²(t)]

θ(t) = −tan⁻¹[ns(t)/nc(t)]
Since R(t) ≫ |A + f(t)|, Eq. 7.34 may be further approximated as

E(t) ≈ R(t)√{1 + 2[A + f(t)] cos θ(t)/R(t)}
     ≈ R(t){1 + [A + f(t)] cos θ(t)/R(t)}
     = R(t) + [A + f(t)] cos θ(t)   (7.35)
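Equations 7.34 and 7.35 can likewise be checked with sample values. The sketch below (Python; the noise samples are hypothetical and chosen large compared with A + f(t)) verifies the identity of Eq. 7.34 and the accuracy of the approximation of Eq. 7.35:

```python
import math

def exact_envelope(a_plus_f, nc, ns):
    # Exact envelope from the quadrature components of signal plus noise.
    return math.sqrt((a_plus_f + nc) ** 2 + ns ** 2)

a_plus_f = 1.0            # small signal A + f(t) (hypothetical)
nc, ns = 30.0, -40.0      # large quadrature noise samples (hypothetical)
R = math.hypot(nc, ns)    # R(t) = sqrt(nc^2 + ns^2) = 50
cos_theta = nc / R        # cos of the noise phase theta(t)

exact = exact_envelope(a_plus_f, nc, ns)
approx = R + a_plus_f * cos_theta      # Eq. 7.35 approximation
rel_err = abs(exact - approx) / exact  # small when R >> A + f(t)
```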
A glance at Eq. 7.35 shows that the output contains no term proportional to f(t). The signal f(t) cos θ(t) represents f(t) multiplied by a time-varying function (actually a noise) cos θ(t) and is of no use in recovering f(t). Thus the output contains no useful signal. It is evident from this discussion that for a large noise, the signal is completely mutilated by the envelope detector. This behavior accounts for the so-called threshold effect in envelope detectors. By threshold we mean the value of the input signal-to-noise ratio below which the output signal-to-noise ratio deteriorates much more rapidly than the input signal-to-noise ratio. The threshold effect starts appearing in the region where the carrier power to noise power ratio approaches unity.
It should be stressed that the threshold effect is a property of envelope detectors. We observed no such effect for synchronous detectors. The output signal of the synchronous detector is given by Eq. 7.31:

eo(t) = ½[A + f(t) + nc(t)]

In deriving this equation, we placed no restrictions on the signal or noise magnitudes. Hence it is true under all noise conditions. The output eo(t) always contains a term ½f(t), and hence the threshold effect does not appear. The S/N improvement ratio in Eq. 7.33 holds under all noise conditions. We have also seen that for DSB-SC and SSB-SC (which use synchronous detectors) there were no threshold effects. We conclude that for AM with small noise, the performance of the envelope detector is almost equal to that of the synchronous detector. But for large noise, the envelope detector shows the threshold effect and proves inferior to the synchronous detector.
7.4 NOISE IN ANGLE-MODULATED SYSTEMS

1. Frequency Modulation
A schematic diagram of the modulator and demodulator for FM is shown in Fig. 7.10. The first filter at the receiver filters out the noise that lies outside the band (ωc ± Δω) over which the useful signal exists. If Δω is the carrier frequency deviation, then obviously the passband of this filter is, according to Eq. 4.27, (ωc − Δω, ωc + Δω). The output of the demodulator eo(t) contains the message signal and noise of bandwidth Δω. Since the message signal has a bandwidth ωm, we can remove the noise outside the signal band by a low-pass filter with cutoff frequency ωm (Fig. 7.10).
To calculate the output signal power and the output noise power, we shall assume that each can be calculated independently of the other. Thus to calculate the output signal power, the noise over the channel will be assumed to be zero, and to calculate the output noise power, the message signal f(t) will be assumed to be zero. The justification for this procedure is given in Appendix A of this chapter.

Consider first the signal without noise. The FM carrier is given by

fc(t) = A cos[ωct + kf ∫ f(t) dt]

We observed in Section 4.7 that for FM, the carrier power with or without modulation is the same and is given by A²/2. Thus

Si = A²/2   (7.36)
The output of the demodulator is proportional to the instantaneous frequency ωi. If the constant of proportionality is α, then the output signal is

so(t) = αωi = α (d/dt)[ωct + kf ∫ f(t) dt] = αωc + αkf f(t)

The useful signal is αkf f(t), and

So = α²kf² f²(t)   (7.37)
To compute Ni and No, we observe that the bandwidth of the signal at the demodulator input is 2Δω, where Δω is the maximum deviation of the carrier frequency (see Eq. 4.27). Thus

Ni = (1/π) ∫ from ωc−Δω to ωc+Δω of Sn(ω) dω   (7.38)

where Sn(ω) is the power density spectrum of n(t). If the noise is white with power density spectrum of magnitude 𝒩/2, then

Ni = (1/π) ∫ from ωc−Δω to ωc+Δω of (𝒩/2) dω = 𝒩Δω/π
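The closed form Ni = 𝒩Δω/π can be confirmed by carrying out the integration of Eq. 7.38 numerically. A minimal sketch (Python; the density and deviation values are hypothetical):

```python
import math

def ni_white(noise_density, delta_omega, steps=10000):
    # Eq. 7.38 with Sn(w) = N/2 constant over (wc - dw, wc + dw):
    # Ni = (1/pi) * integral of N/2 across a band of width 2*dw,
    # evaluated here by a Riemann sum.
    step = 2 * delta_omega / steps
    integral = sum((noise_density / 2) * step for _ in range(steps))
    return integral / math.pi

density = 2e-6                  # noise power density N (hypothetical)
deviation = 2 * math.pi * 75e3  # carrier deviation in rad/s (hypothetical)
ni = ni_white(density, deviation)
closed_form = density * deviation / math.pi   # Ni = N * delta_omega / pi
```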
If the channel noise is white,

Sn(ω) = 𝒩/2

and

Sno(ω) = α²ω²𝒩/2 for |ω| ≤ ωm, and zero otherwise   (7.45)
in the
range
(e.1e)
is given by
Probability (r
>
o)
n@) a*
(e.20)
Probability (r
o even if the signal is absent. The probability that, r ) a when the signal is absent is given by the shaded area in Fig. 9.7a. It is evident that by using a as the threshold, we commit an error (called false alarm) with probability equal to the shaded area in X'ig. 9.7a. On the other hand, even if the signal is present, the output p(r)
Figure 9.8
amplitude r can fall below a. In this instance our decision is "no signal present," even if the signal is actually present. This type of error is called a false dismissal error, and its probability is given by the shaded area in Fig. 9.7b. Thus for a given threshold a, we commit two different kinds of errors, the false alarm and the false dismissal. If the signal s(t) is equally likely to be present and absent, then, on the average, half the time s(t) will be present and the remaining half time s(t) will be absent. When s(t) is present, we may commit a false dismissal type of error, and when s(t) is absent, we may commit a false alarm type of error. Hence the error probability in the decision will be given by the mean of the two shaded areas in Figs. 9.7a and 9.7b. This is half the sum of the two areas. From Fig. 9.8, it is obvious that the sum of the areas is minimum if we choose

a = E/2   (9.25)

Hence the optimum threshold is given by Eq. 9.25.
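The two error mechanisms and the optimum threshold a = E/2 can be illustrated by simulation. The sketch below (Python) models the sampled matched-filter output as Gaussian with mean 0 (signal absent) or E (signal present) and variance 𝒩E/2, consistent with the density used in Eq. 9.26; the numeric values of E and 𝒩 are hypothetical:

```python
import random

random.seed(1)

E = 1.0          # signal energy (hypothetical value)
N = 0.5          # noise power density (hypothetical value)
sigma = (N * E / 2) ** 0.5   # std dev of the sampled filter output noise
a = E / 2        # optimum decision threshold, Eq. 9.25

trials = 200000
# False alarm: signal absent (mean 0) but the output exceeds the threshold.
p_fa = sum(random.gauss(0.0, sigma) > a for _ in range(trials)) / trials
# False dismissal: signal present (mean E) but the output falls below it.
p_fd = sum(random.gauss(E, sigma) < a for _ in range(trials)) / trials
# With a = E/2 the two error probabilities come out (nearly) equal.
```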
Error Probability

We have seen that when the signal s(t) is equally likely to be present and absent, the probability of error in the decision is given by half the sum of the areas in Figs. 9.7a and 9.7b. Also, the optimum decision threshold is a = E/2. Hence the two areas are identical. Therefore the error probability P(e) is given by either of the areas. We shall here use the area in Fig. 9.7a:

P(e) = ∫ from a to ∞ of p(r) dr = (1/√(π𝒩E)) ∫ from E/2 to ∞ of e^(−r²/𝒩E) dr   (9.26)

The integral on the right-hand side of Eq. 9.26 cannot be evaluated in a closed form. It is, however, extensively tabulated in standard tables under the probability integral or error function erf (x).
We define the error function erf (x) as*

erf (x) = (1/√(2π)) ∫ from −∞ to x of e^(−u²/2) du   (9.27)

* At present there exist in the literature several definitions of erf which are essentially equivalent, with minor differences.
Figure 9.9
and the complementary error function erfc (x) as

erfc (x) = (1/√(2π)) ∫ from x to ∞ of e^(−u²/2) du   (9.28)

It is obvious from these definitions that

erf (x) + erfc (x) = 1   (9.29)

A useful approximation for erfc (x) is given by

erfc (x) ≈ (1 − 1/x²) e^(−x²/2)/(x√(2π))   for x > 2   (9.30)

The error in this approximation is about 10% for x = 2 and is less than 1% for x > 3.
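The approximation of Eq. 9.30 can be compared against the exact tail integral. Note that the erfc of Eqs. 9.28-9.30 is the Gaussian tail probability, which differs from the erfc convention used by Python's math module; the sketch below converts between the two:

```python
import math

def erfc_book(x):
    # The text's erfc (Eq. 9.28) is the Gaussian tail integral; Python's
    # math.erfc uses a different convention, related by a change of variable.
    return 0.5 * math.erfc(x / math.sqrt(2))

def erfc_approx(x):
    # Eq. 9.30: valid for x > 2; error ~10% at x = 2, under 1% beyond x = 3.
    return (1 - 1 / x ** 2) * math.exp(-x ** 2 / 2) / (x * math.sqrt(2 * math.pi))

exact4 = erfc_book(4.0)
approx4 = erfc_approx(4.0)
rel_err4 = abs(approx4 - exact4) / exact4   # about 1% at x = 4
```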
Using definition 9.28, we can express Eq. 9.26 as

P(e) = erfc (a√(2/𝒩E))   (9.31)

But since a = E/2,

P(e) = erfc (√(E/2𝒩))   (9.32)

Figure 9.9 shows the error probability P(e) as a function of E/𝒩.

How do we interpret the probability of error? The probability of an event implies the likelihood of the event, or the relative frequency of the event. Thus if we have made N decisions (N → ∞), then Ne, the total number of wrong decisions, is given by

P(e) = Ne/N

and

Ne = P(e)N

Thus if P(e) = 10⁻², on the average one in 100 decisions will be in error.
Example 9.1 (Binary PCM)

For binary PCM (discussed in Chapter 7), s(t) is a rectangular pulse of height A and width T. The impulse response of the matched filter is given by

h(t) = s(T − t)

Note that s(T − t) is s(t) folded about the vertical axis and shifted to the right by T seconds. This is identical to s(t). Hence

h(t) = s(t)

This filter can be realized by the arrangement shown in Fig. 9.10c.
The energy E of s(t) is given by

E = A²T

We are also given that

A = Kσn

where σn is the root mean square value of the noise signal:

σn² = n²(t)

Since the pulse duration is T, there are 1/T pulses per second. To transmit 1/T pulses per second, the bandwidth B required for transmission is 1/2T:

B = 1/2T

If 𝒩/2 is the power density spectrum of the noise, then

σn² = 𝒩B = 𝒩/2T

and

𝒩 = 2Tσn²

Obviously,

E/𝒩 = A²T/(2Tσn²) = A²/(2σn²) = K²σn²/(2σn²) = K²/2   (9.33)

For a value of K = 10,

E/𝒩 = 50

and the probability of error P(e) is given by

P(e) = erfc (√(E/2𝒩)) = erfc (5)   (9.34)

Use of Eq. 9.30 yields

P(e) ≈ 0.284 × 10⁻⁶   (9.35)
This result can also be read off directly from Fig. 9.9. For E/𝒩 = 50, 10 log₁₀(E/𝒩) = 16.9 db. This yields P(e) ≈ 0.284 × 10⁻⁶. Thus if the pulse amplitude is made 10 times the root mean square value of the noise (K = 10), the error probability is of the order of 10⁻⁶, which is acceptable in most practical cases.

In this discussion, we have assumed an idealized rectangular pulse for s(t). However, because of the finite channel bandwidth, this pulse will become trapezoidal (see Section 2.6) in the process of transmission. Hence the matched filter impulse response should also be trapezoidal to match the received signal waveform. This should be kept in mind in our future discussion, where idealized rectangular pulses are used for s(t).
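The numbers of Example 9.1 can be reproduced directly. The sketch below (Python) evaluates Eq. 9.32 for K = 10, again converting the text's erfc (the Gaussian tail integral) to the convention used by Python's math module:

```python
import math

def erfc_book(x):
    # The text's erfc (Eq. 9.28) is the Gaussian tail probability; Python's
    # math.erfc uses a different convention, related by a change of variable.
    return 0.5 * math.erfc(x / math.sqrt(2))

K = 10                 # pulse amplitude is K times the rms noise
e_over_n = K ** 2 / 2  # Eq. 9.33: E/N = K^2/2 = 50
p_error = erfc_book(math.sqrt(e_over_n / 2))  # Eq. 9.32: erfc(sqrt(E/2N)) = erfc(5)
# p_error comes out near 2.9e-7, matching the text's 0.284e-6 estimate
```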
9.3 AMPLITUDE SHIFT KEYING (ASK)

The binary PCM of Example 9.1 can be transmitted over wires easily. But when the transmission is through space via radiation, we must amplitude-modulate the binary PCM of Example 9.1. The amplitude modulation shifts the low-frequency spectrum of binary PCM to a high frequency (at the carrier frequency). This scheme is known as amplitude shift keying (ASK). One of the binary symbols is transmitted by a sinusoidal pulse s(t).
|∫ from −∞ to ∞ of F1(ω)F2(ω) dω|² ≤ ∫ from −∞ to ∞ of |F1(ω)|² dω × ∫ from −∞ to ∞ of |F2(ω)|² dω   (A9.4)

Q.E.D.
Note that the inequality of A9.4 becomes an equality only under a special condition. From Eq. A9.1 it can be seen that this is possible only if

F1(ω) = kF2*(ω)

where k is an arbitrary constant.
PROBLEMS
1. In a binary transmission, one of the messages is represented by a rectangular pulse s(t) shown in Fig. P-9.1a. The other message is transmitted by the absence of the pulse. The matched filter impulse response is h(t) = s(T − t) = s(t). Calculate the signal-to-noise power ratio so²(t)/no²(t) at t = T. Assume white noise with a power density 𝒩/2.
Figure P-9.1
It is decided to use a simple R-C filter (Fig. P-9.1b) instead of a matched filter at the receiver. Calculate the maximum signal-to-noise power ratio [so²(t)/no²(t)] that can be attained by this type of filter and compare it with that obtained by the corresponding matched filter. [Hint: Observe that so(t) is maximum at t = T. The signal-to-noise ratio is a function of the time constant RC. Find the value of RC which yields the maximum signal-to-noise ratio.]
2. Calculate the transfer function of the matched filter for a Gaussian signal pulse given by

s(t) = (1/σ√(2π)) e^(−t²/2σ²)

The noise on the channel is white noise with power density spectrum 𝒩/2. Calculate the maximum S/N ratio achieved by this filter.
3. Show that so(t), the output of the matched filter to the input signal s(t), is symmetrical about t = T.
4. Two messages are transmitted by mark and space using a single binary pulse shown in Fig. P-9.4. (a) Design the optimum receiver if the channel noise is white noise of power density 𝒩/2 (𝒩 = 10⁻⁴). (b) Find the error probability of the optimum receiver, assuming that the probability of s(t) being present is 0.5.
5. If the messages in Problem 4 are transmitted by two binary pulses as shown in Fig. P-9.5, design the optimum receiver and find the error probability of the receiver. Compare this scheme with the one in Problem 4.

Figure P-9.5
6. A Gaussian signal has a zero mean and a mean square value of σ1². Find the probability of observing the signal amplitude above 10σ1.

7. If two messages are transmitted by waveforms s1(t) and s2(t) shown in Fig. P-9.7, design the optimum receiver for a white channel noise.

Figure P-9.7
Calculate the error probability of the optimum receiver. Compare this scheme with the one using only a single triangular pulse (as in Problem 4) or two triangular pulses (as in Problem 5). How does this scheme compare with FSK?

8. In the text, the matched filter was obtained for the case of white noise. Proceeding along the same lines, obtain the matched filter for a colored noise (nonuniform power density) with a given power density Sn(ω). [Hint: In the Schwarz inequality, Eq. 9.7a, let

F1(ω) = S(ω)f1(ω)

where S(ω) is obtained by factorizing Sn(ω) = S(ω)S(−ω), and S(ω) has all its poles and zeros in the LHP of the complex frequency plane.]
Bibliography
Chapters 1, 2
Bracewell, R. M., The Fourier Transform and Its Applications, McGraw-Hill, New York, 1965.
Craig, E. J., Laplace and Fourier Transforms for Electrical Engineers, Holt, Rinehart, and Winston, New York, 1964.
Javid, M. and E. Brenner, Analysis, Transmission and Filtering of Signals, McGraw-Hill, New York, 1963.
Lathi, B. P., Signals, Systems, and Communication, John Wiley and Sons, New York, 1965.
Marshall, J. L., Signal Theory, International Textbook Co., Scranton, Pa.
Papoulis, A., The Fourier Integral and Its Applications, McGraw-Hill, New York, 1962.
Chapters 3, 4, 5, 6, 7
Bennett, W. R. and J. R. Davey, Data Transmission, McGraw-Hill, New York, 1965.
Black, H. S., Modulation Theory, D. Van Nostrand Co., Princeton, N.J., 1953.
Downing, J. J., Modulation Systems and Noise, Prentice-Hall, Englewood Cliffs, N.J., 1964.
Freeman, J. J., Principles of Noise, John Wiley and Sons, New York, 1958.
Hancock, J., Principles of Communication Theory, McGraw-Hill, New York, 1961.
Panter, P. F., Modulation, Noise and Spectral Analysis, McGraw-Hill, New York, 1965.
Rowe, H. E., Signals and Noise in Communication Systems, D. Van Nostrand Co., Princeton, N.J., 1965.
Schwartz, M., Information Transmission, Modulation and Noise, McGraw-Hill, New York, 1959.
Chapters 8, 9
Abramson, N., Information Theory and Coding, McGraw-Hill, New York, 1963.
Harman, W. W., Principles of the Statistical Theory of Communication, McGraw-Hill, New York, 1963.
Lathi, B. P., An Introduction to Random Signals and Communication Theory, International Textbook Co., 1968.
Reza, F. M., An Introduction to Information Theory, McGraw-Hill, New York, 1961.
Schwartz, M., W. R. Bennett and S. Stein, Communication Systems and Techniques, McGraw-Hill, New York, 1966.
Wozencraft, J. M. and I. M. Jacobs, Principles of Communication Engineering, John Wiley and Sons, New York, 1965.
Index

Abramson, N., 426
Amplitude modulation, with large carrier, 167; suppressed carrier, 150
Amplitude shift keying (ASK), 409
Analog (continuous) data communication, 393
Analogy between signals and vectors, 3
Angle modulation, 210; noise reduction characteristics of, 231, 335
Armstrong, E. H., 231
Atwater, H. A., 236
Available power density, 300, 301; of R-L-C network, 302
Available power gain, 303, 305
Balanced modulator, 162
Bandwidth of a system, 117; relationship to rise time, 122
Bennett, W. R., 425
Bessel functions, 219, 230
Bipolar PCM, 361
Black, H. S., 89, 425
Boltzmann's constant, 275
Bracewell, R. M., 425
Brenner, E., 425
Carrier frequency deviation, 217
Carrier reinsertion techniques of detecting suppressed carrier signals, 191
Carson, J., 231
Cascaded amplifier, 311
Causality condition, 120
Causal signal, 120
Channel capacity, 2, 378
Chopper amplifier, 163
Coded (communication) systems, 353
Coded systems and uncoded systems, comparison of, 362
Coherent detection, 153, 419
Communication systems, amplitude modulation, 148; angle modulation, 210; pulse modulation, 241
Comparison of AM systems, 195
Comparison of frequency division multiplexed and time division multiplexed systems, 259
Complementary error function, 407
Continuous (analog) data communication, 393
Convergence in the mean, 16
Convolution integral, 80; graphical interpretation of, 83
Convolution relationships, 82
Convolution theorem, 80; frequency, 81; time, 80
Correspondence between time domain and frequency domain, 63, 81
Craig, E. J., 425
Davenport, W. B., 268
Davey, J. R., 425
Decision threshold in matched filters, 400
Demodulation, AM signals, 172; FM signals, 236; SSB signals, 184, 185; suppressed carrier signals, 162
Detection of binary signals, 319
Digital communication, 393
Dirac, P. A. M., 49
Distortionless transmission, 115
Downing, J., 425
Effective noise temperature, 303
Efficiency of PCM, 387
Elias, P., 380
Emde, F., 123, 219
Energy density spectrum, 125; interpretation of, 127
Energy signals, 126
Envelope detector, 175
Equivalent noise bandwidth, 287
Error function, 405
Error probability in matched filter detection, 407
Eternal exponential function, Fourier transform of, 58
Exchange of bandwidth for signal-to-noise ratio, 231, 383; ideal law for, 386
Fading, 195; selective, 195
False alarm type error, 404
False dismissal type error, 405
Filters, ideal, 117; realizable, 120
FM signal generation, 232; direct, 233; indirect, 232; diode reactance method, 234; reactance-tube method, 234; saturable reactor method, 234
FM signals, demodulation of, 236
Fourier series, exponential, 26; generalized, 17; Legendre, 21; trigonometric, 23
Fourier spectrum, see Frequency spectrum
Fourier transform, 36; existence of, 43; properties of, 63
Freeman, J. J., 275, 425
Frequency conversion, 157
Frequency converters, 157
Frequency differentiation property, 79
Frequency discriminator, 236
Frequency division multiplexing, 148, 200, 259
Frequency domain representation, 31, 42
Frequency mixers, 157
Frequency mixing, 157
Frequency modulation (FM), 212; multiple frequency, 223; square wave, 225
Frequency-shifting property, 73
Frequency shift keying (FSK), 414
Frequency spectrum, complex, 30; continuous, 40; discrete, 31; line, 31; magnitude, 31; phase, 31
Frequency translation techniques, 155
Frequency translation theorem, 73
Gate function, periodic, 20, 33; transform of, 60
Gaussian distribution, 401
Generalized functions, 49
Generalized Nyquist theorem, 277, 311
Generalized thermal noise relationship, 277, 311
Gibbs phenomenon, 18
Graphical evaluation of a component of a signal, 8
Graphical evaluation of convolution, 83
Guard band, 256
Guard time, 256
Hancock, J. C., 425
Hanson, G. H., 297
Harman, W. W., 426
Harris, W. A., 271
Hilbert transform, 110, 184
Homodyne detection, 153
Ideal filters, 117
Impulse function, definition of, 49; Fourier transform of, 52; sampling property of, 51; as a sequence of exponential pulse, 49; as a sequence of Gaussian pulse, 49; as a sequence of sampling function, 49; as a sequence of sampling square function, 51; as a sequence of triangular pulse, 49
Impulse train function, Fourier transform of, 61
Incoherent detection, 420
Independent random signals, 281
Information content of nonequiprobable messages, 390
Information measure, 372; from engineering point of view, 373; from intuitive point of view, 372
Instantaneous frequency, 212
Instantaneous sampling, 245; recovering the signal from, 248
Intermediate frequency (IF), 201
Jacobi polynomials, 21
Jacobs, I. M., 420, 426
Jahnke, E., 123, 219
Javid, M., 425
Johnson, J. B., 276
Kaplan, W., 21
Lathi, B. P., 112, 268, 275, 420, 425, 426
Legendre Fourier series, 21
Lighthill, M. J., 49
Linearity property of Fourier transform, 70
Linearization of frequency modulation, 228
Linear modulation, 228
Linear systems, filter characteristic of, 113; transfer function of, 112; transmission of signals through, 111
McWhorter, M., 311
Marshall, J. L., 425
Mason, S. J., 20
Matched filter, 394
Mean square error evaluation, 15
Modulation index, 177, 219
Modulation theorem, 73; for power signals, 134
Multiple frequency FM modulation, 223
n-Dimensional space, 11
Narrowband FM, 214
Natural sampling, 242
Nielsen, E. G., 297, 298
Noise, flicker, 274; Johnson, 276; partition, 274; shot, 265; thermal, 274; white, 276
Noise calculations, cascaded amplifier, 306; linear bilateral networks, 277; multiple sources, 281; single source, 279
Noise figure, 288, 290; average, 290; in cascaded amplifier, 306; in common base transistor amplifiers, 295; in common emitter transistor amplifiers, 298; experimental determination of, 298; integrated, 290; spectral, 290
Noise in communication systems, in AM, 326, 331; in angle modulation, 335; in DSB-SC, 326; in FM, 335; in PAM, 349; in PCM (binary), 354; in PCM (s-ary), 359; in PM, 347; in PPM, 350; in SSB-SC, 328
Nonlinear modulation, 228
North, D. O., 271
Nyquist, H., 276
Nyquist generalized theorem in noise calculations, 277, 311
Nyquist interval, 91
Nyquist rate of sampling, 255
Orthogonality in complex functions, 20
Orthogonal signals, 6; closed or a complete set of, 16
Paired echo distortion, 144
Paley, R. E. A. C., 121
Paley-Wiener criterion, 93, 120
PAM signals, bandwidth requirement of, 256; sampling rate, 255; transmission of, 250
Panter, P. F., 234, 426
Papoulis, A., 51, 425
Parseval's theorem, 17, 127
PCM, 251; efficiency of, 387; noise in, 354, 359
Periodic function, Fourier transform of, 59
Pettit, J., 311
Phase modulation (PM), 212; some remarks on, 229
Phase shift keying (PSK), 411
Phase-shift method of generating SSB, 180
Plancherel's theorem, 127
Power content of sidebands and carrier, in AM, 176; in FM, 230
Power density spectrum, 130; interpretation of, 140; of a periodic signal, 137
Power signals, 126, 130
Probability density function, 402
Pulse amplitude modulation (PAM), 241; noise in, 349
Pulse code modulation, 251; noise in, 354
Pulse duration modulation (PDM), 251
Pulse position modulation (PPM), 251
Pulse signals, 126
Pulse width modulation (PWM), 251
Quadrature multiplexing, 196, 203
Quantization, 354
Quantization noise, 358
Rack, A. J., 271
Reactance tube circuit, 234
Rectifier detector, 172
Rectifier modulator, 171
Reza, F. M., 426
Ring modulator, 159
Rise time, 122, 125; relationship to bandwidth, 122
Rodrigues' formula, 21
Root, W. L., 268
Rowe, H. E., 426
Sampling, instantaneous, 245; natural, 242
Sampling function, 33, 34
Sampling (sifting) property, 51
Sampling theorem, 89; frequency domain, 94; time domain, 89; uniform, 89
Scaling property, 70; significance of, 71
Schwartz, L., 49
Schwartz, M., 222, 254, 426
Schwarz inequality, 397, 421
Selective fading, 195
Shannon-Hartley law, 2, 380
Shot noise, 265; in diodes, 265; in multielectrode tubes, 272; power density spectrum of, 268; in transistors, 295
Sine integral, 123
Single sideband signals (SSB), 178; demodulation of, 184; generation of, 179
Singularity functions, 46
Space charge limited operation of a diode, 269
Spangenberg, K. R., 272
Spectral density function, 40
Square wave FM modulation, 225
Stein, S., 426
Superheterodyne receiver, 202
Symmetry property of a Fourier transform, 69
Synchronous detection, 153, 420; effects of frequency and phase errors in, 186
Temperature limited operation of a diode, 269
Temple, G., 49
Thompson, B. J., 271
Threshold effect, in AM, 334; in FM, 342
Threshold improvement through preemphasis, 343
Threshold of detection, 400
Time-autocorrelation function, 146
Time-differentiation property, 76
Time division multiplexing, 148, 254, 259
Time domain representation, 16, 31, 42
Time integration property, 76
Time-shifting property, 75
Transit time, 267
Trigonometric Fourier series, 23
Tuller, W. G., 353
Uncoded (communication) systems, 353
Uncorrelated random signals, 284
Unipolar PCM, 361
Van der Ziel, A., 86, 295, 297
Vestigial sidebands, 186, 196
Viterbi, A., 353
Watson, G. N., 230
Wideband FM, 216; bandwidth of, 217
Wiener, N., 121
Wozencraft, J. M., 420, 426
Zimmerman, H., 20