English. Pages: 950 [1000]. Year: 1983.
Table of contents:
Computer history and concepts. Computer structures. Number systems and codes. Boolean algebra and logic networks. Sequential networks. The arithmetic-logic unit. The memory element. Software. Input, output, and secondary storage devices. Timesharing systems. Assembly and system level programming. Survey of high-level programming languages. BASIC. COBOL. FORTRAN. Pascal. PL/I. Hardware and software documentation. Databases and file-system organization. Computer graphics. Artificial intelligence and robotics. Character printers. Graphics plotters. Survey of microprocessor technology. Microcomputers and programming. Subroutines, interrupts, and arithmetic. Interfacing concepts. Microcomputer operating systems. Audio output: speech and music. Voice recognition. Index.
APPLICATIONS
CONCEPTS
HARDWARE
SOFTWARE

Overview by Adam Osborne
Foreword by Thomas C. Bartee

McGraw-Hill
THE McGRAW-HILL COMPUTER HANDBOOK
Harry Helms
992 pages, 475 illustrations

Here is the all-inclusive and highly definitive handbook the computer world has been waiting for!

Whether your interest is professional, personal, business, or academic, The McGraw-Hill Computer Handbook is a standard reference that will be essential in answering virtually any question that may arise in the use of today's computers. Written by a staff of world-renowned experts, this working tool offers comprehensive, authoritative, and practical information and techniques on mainframe computer, minicomputer, and microcomputer hardware, software, theory, and applications.

Clearly written and extensively illustrated for quick comprehension, this book has a special feature: it assumes no prior knowledge of computer science; thus, nonspecialists and enthusiasts can benefit from the knowledge as well as any professional.

Another key feature is that it is organized for easy reference. This relevant anthology begins with the elementary concepts applicable to all computer systems, large or small. It then examines the basic components of all computer systems and moves on to specific systems.

By gathering the broad spectrum of computer science information into one place, The McGraw-Hill Computer Handbook will prove invaluable to both the experienced user and the beginner. Here is a sampling of topics that are detailed in this text:

* basic computer theory
* computer structures

(continued on back flap)
The McGraw-Hill Computer Handbook

The McGraw-Hill Computer Handbook

Editor in Chief: Harry Helms
Overview by Adam Osborne
Foreword by Thomas C. Bartee

McGraw-Hill Book Company
New York  St. Louis  San Francisco  Auckland  Bogota  Hamburg  Johannesburg  London  Madrid  Mexico  Montreal  New Delhi  Panama  Paris  Sao Paulo  Singapore  Sydney  Tokyo  Toronto
Library of Congress Cataloging in Publication Data
Main entry under title:

The McGraw-Hill computer handbook.

Includes index.
1. Computers — Handbooks, manuals, etc. 2. Programming (Electronic computers) — Handbooks, manuals, etc. 3. Programming languages (Electronic computers) — Handbooks, manuals, etc. I. Helms, Harry L. II. McGraw-Hill Book Company.
QA76.M37  1983  001.64  83-1044
ISBN 0-07-027972-1

Copyright © 1983 McGraw-Hill, Inc. All rights reserved. Printed in the United States of America. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a data base or retrieval system, without the prior written permission of the publisher.

34567890 KGP/KGP 8987654

ISBN 0-07-027972-1

The editors for this book were Patricia Allen-Browne and Margaret Lamb, the production supervisor was Teresa F. Leaden, and the designer was Mark E. Safran. It was set in Times Roman by University Graphics, Inc. Printed and bound by The Kingsport Press.
Contents

Contributors
Overview
Foreword

1. Computer History and Concepts
2. Computer Structures
3. Number Systems and Codes
4. Boolean Algebra and Logic Networks
5. Sequential Networks
6. The Arithmetic-Logic Unit
7. The Memory Element
8. Software
9. Input, Output, and Secondary Storage Devices
10. Timesharing Systems
11. Assembly and System Level Programming
12. Survey of High-Level Programming Languages
13. BASIC
14. COBOL
15. FORTRAN
16. Pascal
17. PL/I
18. Hardware and Software Documentation
19. Databases and File-System Organization
20. Computer Graphics
21. Artificial Intelligence and Robotics
22. Character Printers
23. Graphics Plotters
24. Survey of Microprocessor Technology
25. Microcomputers and Programming
26. Subroutines, Interrupts, and Arithmetic
27. Interfacing Concepts
28. Microcomputer Operating Systems
29. Audio Output: Speech and Music
30. Voice Recognition

Glossary
Index follows Glossary
Contributors

Bartee, Thomas C.    Harvard University
Conroy, Thomas F.    International Business Machines Corp.
Erickson, Jonathan    Radio Shack Technical Publications
Gault, James W.    North Carolina State University
Gear, C. William    University of Illinois
Givone, Donald D.    State University of New York at Buffalo
Hamacher, V. Carl    University of Toronto
Hellerman, Herbert    State University of New York at Binghamton
Helms, Harry L.    Technical Writer and Consultant
Hohenstein, C. Louis    Hohenstein and Associates
House, Charles H.    Hewlett-Packard
Kohavi, Zvi    University of Utah
Koperda, Frank    International Business Machines Corp.
Miastkowski, Stan    Rising Star Industries, Inc.
Newman, William M.    Queen Mary College, London
Pimmel, Russell L.    University of Missouri
Roesser, Robert P.    University of Detroit
Sproull, Robert F.    Sutherland, Sproull and Associates
Stout, David F.    Dataface
Tucker, Allen, Jr.    Georgetown University
Vranesic, Zvonko G.    University of Toronto
Wiatrowski, Claude A.    Mountain Automation Corp.
Wiederhold, Gio    Stanford University
Zaky, Safwat G.    University of Toronto
Overview

TRENDS IN THE MICROCOMPUTER INDUSTRY

Without a doubt, IBM's Personal Computer strategy cast the shape of the microcomputer industry in 1982, and probably for the rest of the decade. It was not simply sales volume or market share that made IBM's Personal Computer such a formidable factor in 1982. Rather, it was the marketing strategy IBM adopted. It is a strategy all other microcomputer industry participants who wish to survive will have to adopt. Prior to the IBM Personal Computer, industry leaders (such as Apple, Commodore, and Radio Shack) all strove to build unique hardware which, whenever possible, would run programs that could not be run on any competing machine. Furthermore, these companies vigorously fought competitors attempting to build look-alike microcomputers.
That was the old minicomputer industry philosophy, adopted by too many microcomputer manufacturers. Well in advance of IBM's entry, the ultimate fallacy of this philosophy was evident. The reason was CP/M, an operating system standard on none of the three leading personal computers (Apple, Commodore, and Radio Shack). Nevertheless, CP/M not only survived but thrived. CP/M was kept alive by more than the flock of secondary microcomputer manufacturers. A large number of Radio Shack and Apple microcomputers were also running CP/M, even though it required additional expense for unauthorized software and additional hardware for the Apple. Was there a message in the extremes to which customers would go to thwart the intent of microcomputer manufacturers and run CP/M? Indeed there was, and it was that de facto industry standards are overwhelmingly desirable to most customers. That message was not lost on IBM. Few messages of validity are; that is why IBM has grown to be so successful.
And so when IBM introduced its Personal Computer, it went about making its entry a new de facto standard. Any hardware manufacturer who wanted to could build a product compatible with the IBM Personal Computer. Software vendors were given every encouragement to adopt the IBM standard. The first independent microcomputer magazine dedicated to the IBM Personal Computer grew to be one of the most successful magazines in the business within six months of its first publication. People bought the magazine and advertised in it in unprecedented volumes.

There are already several microcomputers built by companies other than IBM that are compatible with the IBM Personal Computer. Already there are probably more software companies devoting themselves to the IBM standard than any other, with the possible exception of CP/M.
Within a short time, I predict there will be two de facto industry standards for microcomputers: CP/M running on the Z80 for 8-bit systems and IBM (MS-DOS and CP/M-86) running on the 8086 or 8088 for 16-bit systems. Companies not supporting one or both of these standards have a tortuous, uphill fight ahead of them. One can well argue that 8-bit microprocessors are generally obsolete and that the 8086 and 8088 are among the least powerful 16-bit microprocessors. But what has that to do with anything? If these microprocessors are adequate for the tasks they are being asked to perform, then any theoretical inadequacies will not be perceived by the end user. And even if the de facto standards are not the best in whatever area they have become standard, what does it matter providing their shortcomings are not apparent to the user?

The difference between the microcomputer and minicomputer industries is that microcomputers are rapidly becoming consumer products. It will be far more difficult for microcomputer industry managers to impose their will on a mass market of consumer buyers than it was for minicomputer manufacturers to manipulate a relatively small customer base (which was commonly done in the early 1970s).

This message has not been learned by many present participants in the microcomputer industry. But this message will determine more than anything else who will be the survivors when the inevitable industry shakeout occurs.

Adam Osborne
President, Osborne Computer Corporation
1983
Foreword

Since the 1950s the digital computer has progressed from a "miraculous" but expensive, rarely seen, and overheated mass of vacuum tubes, wires, and magnetic cores to a familiar, generally compact machine built from hundreds of thousands of minuscule semiconductor devices packaged in small plastic containers.
As a result, computers are everywhere. They run our cash registers, check out our groceries, ignite our spark plugs, and manage the family bank account. Moreover, because of the amazing progress in the semiconductor devices which form the basis for computers, both computers and the applications now being developed are only fragments of what will soon be in existence.
The Computer Handbook which follows presents material from the basic areas of technology for digital computers. The Handbook progresses from hardware through software to such diverse topics as artificial intelligence, robotics, and voice recognition. Microprocessors, the newest addition to computer hardware, are also discussed in some detail.
Computers are beginning to touch the lives of everyone. If you are ill, the hospital will be full of computers (there could be more computers than patients in a modern hospital; medical instrument designers make considerable use of microprocessors). Schools have been using computers for bookkeeping for years and now use computers in many courses outside computer science. Even elementary schools are beginning to use computers in basic math and science courses. Games designed around microprocessors are sold everywhere. New ovens, dishwashers, and temperature control systems all use microprocessors.

The wide range of applications brings up some basic philosophical questions about what the future will bring. System developers can now produce computers which speak simple phrases and sentences reasonably well (bank computers probably give the teller your balance over the telephone in spoken form, and some of the newer cars produce spoken English phrases concerning car operation). Computers also listen well but have to work hard to unscramble what is said unless the form of communication is carefully specified. This is, however, a hot research area, and some details are in this book. The speech recognition problem previously described is a part of a larger problem called pattern recognition. If computers can become good at finding patterns, they can scan x-rays, check fingerprints, and perform many other useful functions (they already sort mail). The human brain is, however, very good at finding patterns even in the presence of irrelevant data (noise), and research in this area faces many challenges if computers are to become competitive.
If, however, computers can become good at recognizing speech and returning answers verbally, it might be possible even to enter programs for computers and data verbally. This would make it possible literally to tell the computer what to do and have the computer respond, provided the directions were clear, contained no contradictions, etc. While verbal communication might facilitate the programming of a computer, there would still be the problem of what language to use. There are many proponents of English, but English need not be precise and lacks the rigidity now required by the computer and its translators. Certainly much has been done in this area, and the steady march from machinelike assemblers to today's high-level programming languages testifies to the need for and emphasis on this area.
Much more is needed, however, and the sections of this Handbook fairly represent the state of this art and point to the direction of future work.
Robotics also presents an outstanding area for future development. Factories now use many robotic devices, and research labs are beginning to fill with robots in many strange and wonderful forms. Waving aside the possibility and desirability of robots for maids, waitresses, waiters, ticket sales persons, and other functions already exploited by television and movies, there are many medical operations and precision production operations which can and will be performed by computer-guided robots (often because they are better than humans). We often complain that others do not understand us, and at present computers do not understand us; for a while we will have to be content with computers which will simply follow our directions.
Computer memories are making substantial gains on human memories, however. Largely due to the ingenuity of memory device designers, the memory capacity of large machines now competes with our own, but the different organization of the brain seems to give it advantages for creative thought. Artificial intelligence delves into this area. In some areas formerly relegated to "human" thought, computers do quite well, however. For example, in such straightforward mathematical systems as Euclid's geometry, computers already perform better than might be expected; in a recent test a computer proved all the theorems in a high school test in minutes.

I think that to be really comfortable with computers it is necessary to have some knowledge of both hardware and software. In order to make computers more widely used, there is a tendency to make consumer-oriented personal computers appear to be "friendlier" than they really are. This limits their flexibility and presents users with a mystery element which needs to be and can be dissolved by a little knowledge of actual computer principles. A handbook such as this can be very helpful to users in dissolving some of the mystery. At the same time, such a handbook can open new doors in exploration and serve as a continuing reference.

Thomas C. Bartee
Harvard University
1983
The McGraw-Hill Computer Handbook

1
Computer History and Concepts
Herbert Hellerman

1-1 INTRODUCTION
1-2 HISTORICAL PERSPECTIVE
1-3 A CLASSIFICATION OF AUTOMATIC COMPUTERS
1-4 THE NATURE OF A COMPUTER SYSTEM
1-5 PRINCIPLES OF HARDWARE ORGANIZATION
1-6 CONVENTIONS ON USE OF STORAGE
1-7 ELEMENTS OF PROGRAMMING
1-8 PRINCIPLES OF THE SPACE-TIME RELATIONSHIP
1-1 INTRODUCTION

The modern general-purpose digital computer system, which is the subject of this book, is the most versatile and complex creation of mankind. Its versatility follows from its applicability to a very wide range of problems, limited only by human ability to give definite directions for solving a problem. A program gives such directions in the form of a precise, highly stylized sequence of statements detailing a problem-solution procedure. A computer system's job is to reliably and rapidly execute programs. Present speeds are indicated by the rates of arithmetic operations such as addition, subtraction, and comparison, which lie in the range of about 100,000 to 10,000,000 instructions per second, depending on the size and cost of the machine. In only a few hours, a modern large computer can do more information processing than was done by all of mankind before the electronic age, which began about 1950! It is no wonder that this tremendous amplification of human information-processing capability is precipitating a new revolution.

Adapted from Digital Computer Systems: Principles, 2d ed., by Herbert Hellerman. Copyright © 1973. Used by permission of McGraw-Hill, Inc. All rights reserved.
To most people, the words "computer" and "computer system" are probably synonymous and refer to the physical equipment, such as the central processing unit, console, tapes, disks, card reader, and printers visible to anyone visiting a computer room. Although these devices are essential, they make up only the visible "tip of the iceberg." As soon as we start to use a modern computer system, we are confronted not by the machine directly but by sets of rules called programming languages in which we must express whatever it is we want to do. The central importance of programming language is indicated by the fact that even the physical computer may be understood as a hardware interpreter of one particular language called the machine language. Machine languages are designed for machine efficiency, which is somewhat dichotomous with human convenience. Most users are shielded from the inconveniences of the machine by one or more languages designed for good man-machine communication. The versatility of the computer is illustrated by the fact that it can execute translator programs (called generically compilers or interpreters) to transform programs from user-oriented languages into machine-language form.

It should be clear from the discussion thus far that a computer system consists of a computer machine, which is a collection of physical equipment, and also programs, including those that translate user programs from any of several languages into machine language. Most of this book is devoted to examining in some detail theories and practices in the two great themes of computer systems: equipment (hardware) and programming (software). It is appropriate to begin, in the next section, by establishing a historical perspective.
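The translate-then-execute idea can be sketched in miniature. The toy "compiler" below is a modern illustration, not anything from the Handbook: it turns a user-oriented arithmetic expression (written in reverse Polish order for simplicity) into a sequence of machine-like stack instructions, and a tiny "hardware interpreter" then executes them. All names here are invented for the example.

```python
# Toy illustration of translation: a user-oriented expression becomes
# machine-like stack instructions, which a small "machine" executes.

def compile_expr(tokens):
    """Translate tokens in reverse Polish order into stack instructions."""
    program = []
    for tok in tokens:
        if tok.isdigit():
            program.append(("PUSH", int(tok)))   # load a constant
        else:
            program.append(("OP", tok))          # arithmetic operator
    return program

def run(program):
    """Act as the 'hardware interpreter' of this toy machine language."""
    stack = []
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    for kind, arg in program:
        if kind == "PUSH":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[arg](a, b))
    return stack.pop()

# "2 3 + 4 *" is the user-level form of (2 + 3) * 4
machine_code = compile_expr("2 3 + 4 *".split())
print(run(machine_code))  # 20
```

A real compiler differs in scale, not in kind: it maps a human-convenient notation onto instructions the hardware interprets directly.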
1-2 HISTORICAL PERSPECTIVE

Mechanical aids to counting and calculating were known in antiquity. One of many ancient devices, the abacus, survives today as a simple practical tool in many parts of the world, especially the East, for business and even scientific calculations. (A form of the abacus was probably used by the ancient Egyptians, and it was known in China as early as the sixth century B.C.) In the hands of a skilled operator, the abacus can be a powerful adjunct to hand calculations. There are several forms of abacus; they all depend upon a positional notation for representing numbers and an arrangement of movable beads, or similar simple objects, to represent each digit. By moving beads, numbers are entered, added, and subtracted to produce an updated result. Multiplication and division are done by sequences of additions and subtractions.

Although the need to mechanize the arithmetic operations received most of the attention in early devices, storage of intermediate results was at least as important. Most devices, like the abacus, stored only the simple current result.
Other storage was usually of the same type as used for any written material, e.g., clay tablets and later paper. As long as the speed of operations was modest and the use of storage also slow, there was little impetus to seek mechanization of the control of sequences of operations. Yet forerunners of such control did appear in somewhat different contexts, e.g., the Jacquard loom exhibited in 1801 used perforated (punched) cards to control patterns for weaving.
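The abacus arithmetic described above translates directly into digit manipulation. The sketch below is a modern illustration under the obvious reading of the text (not material from the Handbook): a number is stored as one decimal digit per column, addition carries between columns just as beads overflow a rod, and multiplication is a sequence of additions.

```python
# An abacus-like register: one decimal digit per column,
# least significant digit first, with carries between columns.

def add(a, b):
    """Column-by-column addition with carry, as on an abacus."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return result

def multiply(a, n):
    """Multiplication performed as a sequence of additions."""
    total = [0]
    for _ in range(n):
        total = add(total, a)
    return total

digits = [3, 2, 1]            # the number 123, least significant digit first
print(multiply(digits, 4))    # [2, 9, 4], that is, 492
```

Note that, like the abacus itself, this register holds only the current result; intermediate values would need separate storage.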
Charles Babbage (1792-1871) was probably the first to conceive of the essence of the general-purpose computer. Although he was very versatile, accomplished both as a mathematician and as an engineer, his lifework was his computing machines. It is worth noting that Babbage was first stimulated in this direction because of the unreliability of manual computation, not by its slow speed. In particular, he found several errors in certain astronomy tables. In determining the causes, he became convinced that error-free tables could be produced only by a machine that would accept a description of the computation by a human being but, once set up, would compute the tables and print them all without human intervention. Babbage's culminating idea, which he proposed in great detail, was his Analytic Engine, which would have been the first general-purpose computer. It was not completed because he was unable to obtain sufficient financial support.
As Western industrial civilization developed, the need for mechanized computation grew. As the 1890 census approached in the United States, it became clear that if new processes were not developed, the reduction of the data from one census would not be complete before it was time for the next one. Dr. Herman Hollerith applied punched cards and simple machines for processing them in the 1890 census. Thereafter, punched-card machines gained wide acceptance in business and government.

The first third of the twentieth century saw the gradual development and use of many calculating devices. A highly significant contribution was made by the mathematician Alan Turing in 1937, when he published a clear and profound theory of the nature of a general-purpose computing scheme. His results were expressed in terms of a hypothetical "machine" of remarkable simplicity, which he indicated had all the necessary attributes of a general-purpose computer. Although Turing's machine was only a theoretical construct and was never seriously considered as economically feasible (it would be intolerably slow), it drew the attention of several talented people to the feasibility of a general-purpose computer.
World War II gave great stimulus to improvement and invention of computing devices and the technologies necessary to them. Howard Aiken and an IBM team completed the Harvard Mark I electric computer (using relay logic) in 1944. J. P. Eckert and J. W. Mauchly developed ENIAC, an electronic computer using vacuum tubes, in 1946. Both these machines were developed with scientific calculations in mind. The first generation of computer technology began to be mass-produced with the appearance of the UNIVAC I in 1951. The term "first generation" is associated with the use of vacuum tubes as the major component of logical circuitry, but it included a large variety of memory devices such as mercury delay lines, storage tubes, drums, and magnetic cores, to name a few.

The second generation of hardware featured the transistor (invented in 1948) in place of the vacuum tube. The solid-state transistor is far more efficient than the vacuum tube, partly because it requires no energy for heating a source of electrons. Just as important, the transistor, unlike the vacuum tube, has almost unlimited life and reliability and can be manufactured at much lower cost. Second-generation equipment, which appeared about 1960, saw the widespread installation and use of general-purpose computers. The third and fourth generations of computer technology (about 1964 and 1970) mark the increasing use of integrated fabrication techniques, moving to the goal of manufacturing most of a computer in one automatic continuous process without manual intervention.
Hardware developments were roughly paralleled by progress in programming, which is, however, more difficult to document. An early important development, usually credited to Grace Hopper, is the symbolic machine language, which relieves the programmer of many exceedingly tedious and error-prone tasks. Another milestone was FORTRAN (about 1955), the first widely used high-level language, which included many elements of algebraic notation, like indexed variables and mathematical expressions of arbitrary extent. Since FORTRAN was developed by IBM, whose machines were most numerous, FORTRAN quickly became pervasive and, after several versions, remains today a very widely used language. Other languages were invented to satisfy the needs of different classes of computer use. Among the most important are COBOL, for business-oriented data processing; ALGOL, the first widely accepted language in the international community, particularly among mathematicians and scientists; and PL/I, developed by IBM and introduced in 1965 as a single language capable of satisfying the needs of scientific, commercial, and system programming. Along with the introduction and improvement of computer languages, there was a corresponding development of programming technology, i.e., the methods of producing the compiler and interpreter translators and other aids for the programmer. A very significant idea that has undergone intensive development is the operating system, which is a collection of programs responsible for monitoring and allocating all system resources in response to user requests in a way that reflects certain efficiency objectives. By 1966 or so, almost all medium to large computers ran under an operating system. Jobs were typically submitted by users as decks of punched cards, either to the computer room or by remote-job-entry (RJE) terminals, i.e., card reader and printer equipment connected by telephone lines to the computer. In either case, once a job was received by the computer, the operating system made almost all the scheduling decisions. A large computer could run several hundred or even thousands of jobs per 24-hour day with only one or two professional operators in the machine room.

The 1960s saw a great intensification of the symbiosis of the computer and the telephone system (teleprocessing). Much of this was RJE and routine non-general-purpose use, such as airline reservation systems. Considerable success was also achieved in bringing the generality and excitement of a general-purpose computer system to individual people through the use of time-sharing systems. Here, an appropriate operating-system program interleaves the requests of several human users who may be remotely located and communicating over telephone lines using such devices as a teletype or typewriter terminal. Because of high computer speed relative to human "think" time, a single system could comfortably service 50 to 100 (or more) users, with each having the "feel" of his own private computer. The time-sharing system, by bringing people closest to the computer, seems to have very great potential for amplifying human creativity.
1-3 A CLASSIFICATION OF AUTOMATIC COMPUTERS

Automatic computers may be broadly classified as analog or digital (Fig. 1-1). Analog computers make use of the analogy between the values assumed by some physical quantity, such as shaft rotation, distance, or electric voltage, and a variable in the problem of interest. Digital computers in principle manipulate numbers directly. In a sense all computers have an analog quality, since a physical representation must be used for the abstraction that is a number. In the digital computer, the analogy is minimal, while the analog computer exploits it to a very great extent.

FIG. 1-1 A classification of computers:

Automatic computers
  Analog
    Operations only: slide rule, planimeter
    Problem setup: differential analyzer, network analyzer, field analogs
  Digital
    Operations only: abacus, adding machines, desk calculators, card sorters
    Problem setup: plugboard accounting machines, digital differential analyzers
    General purpose: any precisely described procedure
    Special purpose: radar, navigation, fire control
Both analog and digital computers include a subclass of rather simple machines that mechanize only specific simple operations. For example, the slide rule is an analog computer that represents numbers as distances on a logarithmic scale. Multiplication, division, finding roots of numbers, and other operations are done by adding and subtracting lengths. Examples of operation-only machines of the digital type include adding machines and desk calculators. A second class, more sophisticated than operation-only machines, may be termed problem-setup machines. In addition to performing arithmetic operations, they can accept a description of a procedure to link operations in sequence to solve a problem. The specification of the procedure may be built into the machine's controls, as in certain special-purpose machines, or a plugboard arrangement may be supplied for specifying the desired sequence of operations. The main idea is that the problem-solution procedure is entered in one distinct operation, and thereafter the entire execution of the work on the problem is automatic.
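The slide-rule principle mentioned above, multiplying by adding lengths on a logarithmic scale, can be sketched in a few lines of Python. This is a modern illustration added for clarity; the function name is ours, not anything from the handbook:

```python
import math

def slide_rule_multiply(a, b):
    """Model a slide rule: represent each factor as a length
    proportional to its logarithm, add the two lengths, and read
    the product back off the logarithmic scale."""
    length_a = math.log10(a)   # distance along the fixed scale
    length_b = math.log10(b)   # distance along the sliding scale
    return 10 ** (length_a + length_b)

print(slide_rule_multiply(3, 7))   # very close to 21
```

Division works the same way with the lengths subtracted, which is why a slide rule needs no mechanism beyond sliding one scale along another.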
The electronic differential analyzer that emerged in the late 1940s is the most general form of analog computer. It is constructed from a few types of carefully engineered precision circuits (integrators, summing amplifiers, precision potentiometers, and capacitors), each capable of a single operation. The problem is usually set up on the machine by plugboard. Since there is usually no provision for storing results internally, the output is generally sent directly to a curve plotter. Precision, limited by drift and noise, is typically no higher than 1 part in 1000 of full scale. Compared with general-purpose digital computers, analog computers suffer from lack of generality of the problems that can be handled, low precision, difficulty in performing complex operations (including multiplication and division at high speed), inability to store large amounts of information effectively, and equipment requirements that must grow directly with problem size. However, for the jobs to which it is suited, particularly mathematical or simulation problems involving differential equations, the analog computer can often give high speed, if required, at lower cost than a digital computer. The high speed of the analog computer is the result of its highly parallel operation; i.e., all its parts are working concurrently on separate parts of the same problem.
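What a differential analyzer does continuously with integrator circuits, a digital program can only approximate by accumulating many small steps. The sketch below uses Euler's method, the simplest such scheme; the function names and step size are our illustrative assumptions, not anything prescribed by the handbook:

```python
def integrate(f, y0, t_end, dt=0.001):
    """Crude digital stand-in for one analog integrator:
    accumulate f(y) * dt step by step (Euler's method)."""
    y, t = y0, 0.0
    while t < t_end:
        y += f(y) * dt   # one small step of the integration
        t += dt
    return y

# Solve dy/dt = -y with y(0) = 1; the exact answer at t = 1 is 1/e, about 0.3679.
y1 = integrate(lambda y: -y, 1.0, 1.0)
```

The contrast with the analog machine is instructive: the digital version trades the analyzer's parallel, continuous operation for a long sequence of identical small steps, the space-time tradeoff discussed later in this chapter.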
A most important theoretical question that can be asked of a problem-setup machine is: What is the range of problems solvable by this machine? As a practical matter, this question is rarely asked in this form because plugboard machines are usually designed for specifically stated kinds of problems. Nevertheless, the question of ultimate logical power, i.e., the range of problems solvable by a given machine, is fundamental. In 1937 Turing made a direct contribution to this subject when he defined a remarkably simple, hypothetical "machine" (since named a universal Turing machine) and proved, in effect, that any solution procedure can be expressed as a procedure for this machine. By implication, any machine that can be made to simulate a universal Turing machine also has its generality. The class of such machines is called general-purpose. Most commercially available electronic digital computers are, for practical purposes, general-purpose machines. They differ in speed, cost, reliability, amount of storage, and ease of communication with other devices or people, but not in their ultimate logical capabilities.
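Turing's machine is simple enough that a complete simulator fits in a few lines. The sketch below is ours, with an invented rule format; it runs one trivial machine that flips every bit of its input and halts on the first blank:

```python
def run_turing_machine(rules, tape, state="start", blank="_"):
    """Simulate a one-tape Turing machine.
    rules maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left), +1 (right), or 0 (stay)."""
    tape = dict(enumerate(tape))     # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# One concrete machine: complement each bit, halt at the blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine(flip, "1011"))   # 0100
```

That such a small interpreter suffices is exactly Turing's point: any machine able to run this loop inherits the full generality of the universal machine.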
1-4 THE NATURE OF A COMPUTER SYSTEM
A computer system is best considered as a collection of resources that are accessible to its users by programs written according to the rules of the system's programming languages. The resources are of two major classes, with a wide variety of components in each:

1. Equipment (hardware)
   a. Storages   To hold both programs and data
   b. Processing Logic   Implementing arithmetic and logical manipulation of information
   c. Control Logic   Concerned with movement of information and sequencing of events
   d. Transducers   Devices for translating information from one physical form to another, e.g., a printer that converts electric signals to printed characters on paper

2. Programs (software)
   a. Application Programs   Programs written to satisfy some need of computer users outside the operation of the computer system itself, e.g., scientific, payroll, and inventory-control programs; in fact, most of the work computers do
   b. System Programs   Programs concerned with the means by which the system provides certain conveniences to its users and manages its own resources, e.g., language translators and operating-system programs
1-5 PRINCIPLES OF HARDWARE ORGANIZATION

From now on we shall use the word computer to mean only the hardware part of a general-purpose computing system. All computers have certain qualitative properties in common, which will now be described. The reader will readily appreciate, however, the lack of precision in listing these similarities; our objective at present is to describe these properties in such a way that the essential nature of the machine, and the basis of its generality, can be intuitively understood.
From the viewpoint of the user, the machine manipulates two basic types of information: (1) operands, or data, and (2) instructions, each of which usually specifies a single arithmetic or control operation (e.g., ADD, SUBTRACT) and one or more operands which are the objects of the operation. Within the machine, both instructions and data are represented as integers expressed in the binary number system or in some form of binary coding. This is done because the "atom" of information is then a two-state signal (called 0 or 1) which requires only the simplest and most reliable operation of electronic devices. Although the binary representation of instructions and data must appear within the machine for processing to take place, most users of computers may use the common decimal representation of numbers and alphabetic names of operations and data. Translator programs (usually supplied by the computer manufacturer) executed by the machine translate these convenient representations into the internal binary form. In other words, the binary representation of information inside the computer is important for reasons of electronic technology but is not an essential principle of the general-purpose computer.

The following is a list of attributes common to general-purpose digital computers:
1. The machine is capable of storing a large amount of information (both data and instructions). For economy reasons, there are usually at least three levels of storage speed and capacity. The amount of storage is a fundamental limiting factor in the range of problems that can be handled.

2. The repertoire of instructions is typically small (from about 16 to 256 types) but is judiciously chosen to cover the requirements for any procedure.

3. Operands are referenced by name; the names of operands can be processed by instructions.

4. Instructions are accessed from storage and executed automatically. Normally, the location in storage of the next instruction is held in an instruction (or program) counter. This pointer is most often stepped in value (increased by 1) to specify the location of the next instruction, but certain instructions specifically modify the program counter to contain a value that depends on the outcome of comparisons between specified operands. This gives the program the ability to branch to alternative parts of the program, i.e., alternative instruction sequences.
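Attribute 4, the stepping and conditional modification of the program counter, can be sketched as a toy fetch-execute loop in Python. The three-instruction repertoire here is invented purely for illustration:

```python
def run(program, memory):
    """Minimal fetch-execute loop: the instruction counter pc steps
    by 1 unless a branch instruction replaces it with a new value."""
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        pc += 1                            # normal stepping: increase by 1
        if op == "LOAD":                   # LOAD cell, constant
            memory[args[0]] = args[1]
        elif op == "ADD":                  # ADD dest, src: dest <- dest + src
            memory[args[0]] += memory[args[1]]
        elif op == "BLE":                  # BLE a, b, target: branch if m[a] <= m[b]
            if memory[args[0]] <= memory[args[1]]:
                pc = args[2]               # the comparison modifies the counter
    return memory

prog = [
    ("LOAD", "S", 0), ("LOAD", "I", 1), ("LOAD", "ONE", 1), ("LOAD", "N", 5),
    ("ADD", "S", "I"),        # 4: S <- S + I
    ("ADD", "I", "ONE"),      # 5: I <- I + 1
    ("BLE", "I", "N", 4),     # 6: if I <= N, branch back to instruction 4
]
print(run(prog, {})["S"])     # 15, the sum 1 + 2 + 3 + 4 + 5
```

The branch in the last instruction is what turns a straight-line sequence into a loop, the idea developed at length in Sec. 1-7.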
The general organization of a typical computer is shown in Fig. 1-2. The heart of the system is the central processing unit (CPU), shown as comprising a main storage, which holds both program and data, and an arithmetic-logic unit (ALU), which contains processing circuitry such as an adder, shifter, and a few fast registers for holding the operands and the instruction currently being processed. The program counter would also be included in the ALU, although in some diagrams the program control facilities are shown as a distinct function.

FIG. 1-2 General organization of a typical digital computer. (Main storage and the arithmetic-logic unit form the central processing unit; routing circuits connect them through I/O channels to devices such as a card reader and card punch, a printer, and a picture display and keyboard.)

One part of the CPU illustrated is a set of routing circuits which provide paths between storage and input/output controllers or channels. In the type of system illustrated, many storage or input/output devices may be wired to one channel, but only one device per channel can be transmitting information from or to main storage at any one time. This is, of course, a restriction on the number of devices that can operate concurrently. It is imposed because of the economy of sharing common paths to main storage and simplicity in controlling movement of information between the devices and storage. The major parts of a computer may be described as follows:
1. Storage   Means for storing a rather large volume of information, and a simple economical access mechanism for routing an element of information to/from storage from/to a single point (register). Storage is usually available in several versions, even in the same system; these vary in access time, capacity, and cost.

2. Data Flow   The switching networks that provide paths for routing information from one part of the computer to another.

3. Transformation   The circuits for arithmetic and other data manipulation. This function is usually concentrated in a single arithmetic-logic unit (ALU). The centralization provides economy, since a single set of fast expensive circuits is used in time sequence for all operations. Transformation circuits operate on information obtained from storage by control of the data-flow switching. As will be seen later, many of the more complex transformations such as subtraction, multiplication, and division can be obtained economically by control of sequences of very elementary operations such as addition, shifting, etc.

4. Control   This is a general term that includes the important function of performing time sequences of routings of information through the data flow. The control function appears on many levels in a computer. Usually the control is organized as a set of time sequences, or cycles. Each cycle period is commonly (but not always) divided into equally spaced time units called clock intervals. The term "cycle" refers to a specific type of sequence of selections on the data flow performed in a succession of clock intervals. For example, there is an instruction fetch cycle during which an instruction containing information about a transformation is brought from storage to an ALU register. At each clock interval within the cycle, an elementary operation is performed, such as routing the storage location of the instruction to the storage-access mechanism, signaling for storage access, or routing of the instruction obtained to an ALU register.
5. Input/Output   Since information in the processor and storage of the computer is represented by electric signals, devices are provided to convert information from human-generated to machine-readable form on input, and in the opposite direction on output. A very common scheme for performing this transducer function uses a punched card. An operator reads the information from handwritten or typed documents and enters the information on a keyboard, much like a typewriter keyboard, of a keypunch machine. This machine translates the key strokes into holes on the card (see Fig. 1-3). The cards are then sent to the card reader, which contains the necessary equipment to READ the cards, i.e., sense the hole positions and translate them into the internal electric-signal representation. The punched card stores information in a nonvolatile form and can be read by human beings (by reading either the hole configurations or the printed characters at the top of the card). A card-punch machine may be controlled by the computer to produce punched-card output of the results of processing.

FIG. 1-3 An 80-column IBM card showing row-column numbering and holes punched for the language FORTRAN. Representation of data: each of the 80 columns may contain one or more holes (small dark rectangles) representing an alphanumeric character. The card shown was punched to show the representations of the 10 decimal digits, 26 letters of the alphabet, and 12 special symbols (including blank) of a common 48-character symbol set.
The punched card and its associated machines are examples of input/output devices. Other devices available include typewriters, punched paper tape, printers, cathode-ray-tube displays, and analog-digital converters.

There is no sharp distinction between the storage function and the input/output function (the punched card was seen to contain both). However, a useful distinction can be made based on whether the output is directly machine-readable. On this basis, printers, typewriters, and cathode-ray displays are input/output devices; punched cards, punched paper tape, and magnetic tape are storage devices. A very common terminology classifies all devices and machines not a part of the central processing unit and its directly accessible storage as input/output.

1-6 CONVENTIONS ON USE OF STORAGE

Certain conventions are almost universally assumed in using computer storage. These are independent of the physical nature of the device constituting storage, whether it uses magnetic tape, magnetic disks, semiconductors, etc. Two fundamental operations, viewed from the storage unit, are:
1. READ (copy)   Copies the contents of some specified portion of the storage and sends it to some standard place. Note that the copy operation is, to the user, nondestructive; i.e., information in storage is not modified by reading it out.

2. WRITE (replace)   Results in replacement of the present contents of a specified portion of storage from some standard place.

Sometimes the technology of a storage device naturally tends to violate these conventions. In such a case it is engineered with additional circuits to provide the same functional appearance to the user as described above.
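A hypothetical storage unit honoring these two conventions might look like the following in Python; the class and method names are ours, chosen only to mirror the READ/WRITE terminology above:

```python
class Storage:
    """Storage obeying the two conventions: READ copies (nondestructive),
    WRITE replaces the previous contents of the addressed cell."""

    def __init__(self, size):
        self.cells = [0] * size

    def read(self, address):
        return self.cells[address]      # a copy; the cell itself is unchanged

    def write(self, address, value):
        self.cells[address] = value     # the old contents are lost

s = Storage(8)
s.write(3, 42)
assert s.read(3) == 42 and s.read(3) == 42   # reading twice: still there
s.write(3, 7)                                # replacement, not accumulation
assert s.read(3) == 7
```

A core memory whose physical readout is destructive would, as the text notes, hide that fact by automatically rewriting the cell after each read, so the user still sees exactly this interface.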
1-7 ELEMENTS OF PROGRAMMING

For our present purposes, storage is assumed to consist of an array of cells which may be visualized as a long row of pigeonholes. Each cell contains information called an operand, which may be likened to a number written on a piece of paper contained in the cell. Each cell is given a name; the information in the cell is referenced only by the name of the cell it occupies, not by its content. The operand referenced is then used for computation. The machine hardware usually has a wired-in name scheme whereby the cell names are the integers 0, 1, 2, etc. The user may, however, choose different names such as X, Y, Z, I, etc., for the cells. The translation of user names to machine names is a simple routine process, since each user name is simply assigned to one machine name. This justifies our use of mnemonic symbols for names of operands. Unless otherwise specified, numbers will denote operands, letters the names of operands. For example,
(1)  X←5

is read "5 specifies X," which means the operand, or usual number, 5 replaces the contents of the cell named X. As another example, consider the statement

(2)  Y←1+X

which means "the contents (operand) of the cell whose name is X is added to 1 and the result replaces the contents of the cell named Y." For brevity, we usually read this statement as "X plus 1 specifies Y." It is important to note that although the statement generally results in a change in the contents of Y, the contents of X remain unchanged.

A simple program consists of a sequence of statements like the ones illustrated above. Although detailed rules of writing statements (symbols allowed, punctuation required, etc.) vary widely from program language to program language, many of the principles of programming can be illustrated adequately using a single language (APL in this case).
Since a computer normally handles large volumes of information, a key notion is designation and processing of arrays of information. A one-dimensional array of cells will be called a vector. An example of a vector is

(3)  X = 3, 29, 47.4, 82, 977.6

An element or component of a vector will be denoted by a two-part designation. One part is the name of the entire vector; the other, written between brackets, gives the position of the element being referenced. In the above example,

(4)  X[2] = 29

(assuming element position numbers in X start at 1 from the left). Note also the meaning of a variable index. For example,

(5)  Y←X[I]

means "the content of cell I is used as a position number in X, and the content of the cell so designated in X replaces the content of Y." For example, if X is the vector specified in (3), the sequence

I←3
Y←X[I]

results in Y being respecified by the number 47.4. A variable such as X[3] or X[I] is said to be subscripted or indexed; the variable I is called an index variable.
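Statements (1) through (5) translate almost directly into Python, with the one caveat that Python positions start at 0 while the text's start at 1. This sketch is added for comparison only and is not part of the original APL examples:

```python
# Named cells, modeled here as entries in a dictionary.
cells = {}
cells["X"] = 5                  # statement (1): X <- 5
cells["Y"] = 1 + cells["X"]     # statement (2): Y <- 1 + X; note X is unchanged

# A vector and an indexed (subscripted) reference, as in statements (3)-(5).
X = [3, 29, 47.4, 82, 977.6]    # X[2] is 29 when positions start at 1
I = 3
Y = X[I - 1]                    # Python indexes from 0, so position I is X[I - 1]
print(Y)                        # 47.4, just as in the text
```

The off-by-one adjustment is exactly the "index origin" question; APL lets the programmer choose origin 0 or 1, while Python fixes it at 0.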
Index operations are extremely important because they allow us in effect to compute cell names from other cell names or constants. Why is it important to be able to compute names? One reason is that without this facility, it would be necessary to specify each cell explicitly by a unique name. Generating thousands of names would be tedious, and sooner or later we would probably devise a systematic naming procedure similar or identical to the indexed-variable idea. A second reason for the power of indexed variables is that the calculation of names can be included in the program for processing the data, thus greatly shortening the statement of the program but lengthening the time to execute it. For example, assume that 100 numbers have been entered into storage and called vector X. Two programs are shown in Fig. 1-4 to do the same job: compute S, the sum of the numbers. Figure 1-4a is easy to understand immediately; it is a straight-line program consisting of 100 executed steps written explicitly. Figure 1-4b is a much shorter program because it contains a loop.

FIG. 1-4 Straight-line and loop programming to sum 100 numbers

(a) Straight line:
∇SUM1
[1] S←X[1]
[2] S←S+X[2]
[3] S←S+X[3]
...
[100] S←S+X[100]
∇

(b) Loop:
∇SUM2
[1] S←0
[2] I←1
[3] TEST:→(I>100)/0
[4] S←S+X[I]
[5] I←I+1
[6] →TEST
∇

Note that in Fig. 1-4b, the "guts" of the program is statement 4, which adds the value in the I position of X and S to produce the new S. This statement will be executed repetitively, as we shall now see, each time with a new value of I. Certainly, line 5 increases I by 1, and line 6 directs the program to line 3, since this is where the statement labeled TEST is found. Line 3 says: "Compare I for greater than 100; if so, branch to line 0, which by convention means exit from the program. Otherwise, continue to the next statement (line 4)." With these rules, it is seen that for the case at hand, lines 4, 5, and 6 will each be executed 100 times and line 3 will be executed 101 times; in other words, lines 3 to 6 constitute a program loop. Comparing the straight-line and loop programs of Fig. 1-4, we find that the number of written statements is 100 in the first case and only 6 in the second. This advantage of a short written program is somewhat offset by the fact that the loop program requires 403 executed statements compared to only 100 for the straight-line program. The additional executed statements in the loop program are required for index updating and testing.
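The same comparison can be reproduced in Python. The loop version below also counts executed statements, confirming the 403-versus-100 figure quoted above; the instrumentation is ours, but the control flow mirrors SUM2 line for line:

```python
X = list(range(1, 101))   # 100 numbers to sum; the straight-line program
                          # would spell out 100 explicit add statements.

executed = 0
S, I = 0, 1               # lines 1 and 2 of SUM2
executed += 2
while True:
    executed += 1         # line 3: the test, executed 101 times in all
    if I > 100:
        break             # "branch to line 0": exit from the program
    S = S + X[I - 1]      # line 4 (the text numbers positions from 1)
    I = I + 1             # line 5
    executed += 3         # lines 4, 5, and 6 (the branch back to TEST)

print(S, executed)        # 5050 403
```

Two statements of setup, 101 tests, and 100 passes through lines 4 to 6 give 2 + 101 + 300 = 403 executed statements, exactly the overhead of index updating and testing described in the text.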
1-8 PRINCIPLES OF THE SPACE-TIME RELATIONSHIP

The computer designer or user must be aware of some rather fundamental notions of how a computer and a problem can be organized to "trade" space and time. The word "space" will roughly correspond to "amount of equipment." One simple example of this tradeoff idea will now be discussed. Two ways of obtaining the same function are shown in Fig. 1-5; the function is the appearance of six signals, each of which can be either ON (=1) or OFF (=0). The circuit outputs are to appear as 0 except at timing or clock intervals, when the signals appear at the output point(s). To ensure that the output appears only at clock intervals, each signal and a clock pulse are fed into an AND circuit, which gives a 1 output only when the signal line and clock line are both 1; at other times the output is 0.

FIG. 1-5 Parallel and serial representation of ON-OFF signals. (a) Parallel system: six input lines, each gated by an AND circuit with the clock pulse, drive six output lines; outputs appear at clock pulse times. (b) Serial system: the six signals circulate as pulses in a delay-line structure and appear in time sequence on a single wire. An AND circuit gives output 1 only if both inputs are 1.

In part (a) of Fig. 1-5 we see one representation of our set of six signals. Each signal uses its own line; the output appears on six output lines (and requires six AND circuits). In part (b) we see a second possibility: the six signals circulate as pulses in a delay-line structure, and the delay is in this case six clock times. Here the signals appear in time sequence on a single wire. The first circuit is extensive (and expensive) in space but concise (inexpensive, fast) in time. The second circuit has exactly dual properties. Notice also that as the number of signals grows, the parallel circuit grows proportionately, but the time to receive all the signals remains the same. The serial circuit, on the other hand, requires no more lines (or AND circuits) to handle more signals, although the delay must increase proportionately.

Many of the desirable properties of a computer, especially its reliability, result from its use of simple components in a simple manner. Complex structures and operations are built up by using many simple components and intricate time sequences of the signals they generate or modify.
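The duality of Fig. 1-5 can be caricatured in a few lines of Python: both forms deliver the same six values, but the gate count and the clock count trade places. This is a toy accounting of space versus time, not a circuit simulation:

```python
def parallel(signals):
    """Parallel form: one AND gate and one wire per signal;
    everything arrives in a single clock interval."""
    gates = len(signals)                 # space grows with the number of signals
    clocks = 1
    outputs = [s & 1 for s in signals]   # each signal ANDed with the clock pulse
    return outputs, gates, clocks

def serial(signals):
    """Serial form: one AND gate and one wire;
    the signals arrive one per clock interval."""
    gates = 1
    clocks = len(signals)                # time grows with the number of signals
    outputs = [s & 1 for s in signals]   # same values, delivered in sequence
    return outputs, gates, clocks
```

Running both on the same six signals shows the exchange directly: the parallel version reports 6 gates and 1 clock, the serial version 1 gate and 6 clocks, with identical outputs.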
Because of the many devices
for processing, control,
and particularly storage,
The timespace relamethod of reducing cost at the expense an example of the idea of time sharing, i.e., using the same
great efforts are exerted to obtain economical structures. tionship discussed above provides one
of time. This
is
equipment (such as the adder circuit) successively in time by routing to it the numbers to be added in time sequence. The routing of information from place to place within the computer is therefore a fundamental operation. The paths a most provided for routing determine the dataflow structure of the machine important characteristic of any computer. The timespace relationship may also be illustrated by programming organization. Recall that in the procedures for summing a list of numbers, one can program straightline, thereby obtaining an expensive space (storage) program but a fastexecutiontime program. An alternative is to program the problem
—
utilizing a loop; this results in great storage savings but longer execution time.
program usually gains space by a much greater factor the preferred method for all but the shortest lists. The major point of the above discussions on timespace relationships is a fundamental property of data processing; in any task to be done, there is usually a choice of several solutions, which can be compared, to a first order, by the extent to which they trade space and time. From the brief introduction given in this chapter, some broad properties of computer systems should be discernible. First, a generalpurpose computer is In most cases, the loop
than
it
loses speed;
it is
one that can accept a precise stylized description of a procedure, called a program, for solving any problem solvable in a finite number of steps and can then execute the program automatically to process data
made
available to the
machine. or program, is important not only to the users of a computer two reasons, to its designers. (1) Product designers can perform
The algorithm, but
also, for
COMPUTER HISTORY AND CONCEPTS intelligently only if they
grammed.
(2)
understand how the products
The sequences
will
be used,
i.e.,
115
pro
of internal switching operations necessary to
—
implement arithmetic and other operations are also algorithms these are the algorithms which must be specified and implemented by the logical designer. A modern computer has been likened to a grand piano, on which the user can play Beethoven or "Chopsticks." Achieving the most value for an investment in equipment and manpower is a problem in optimizing resources that has
some
of the properties of combinatorial mathematics;
specifications or the criterion of optimization can in
i.e.,
make
a "slight" change in
a very great difference
performance. The generalpurpose nature of the computer rarely raises doubt
that "answers" to a welldefined problem can be obtained one
The
central question
is
usually
how
to obtain the
answers
way or another. way that opti
in a
mizes user convenience, problemsolution time, storage space,
reliability, or
some combination of such parameters. Needless to say, all these factors are interdependent, and some can be improved only at the expense of others. This has already been illustrated in the case of space versus time in the examples given earlier in this chapter. servation" laws relations tradeoffs
may
Some
fairly general,
but as yet undiscovered, "con
relate these parameters; but at this time, the general inter
can only be discussed qualitatively, although quantitative analysis of is readily possible and should be done in specific cases.
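The straight-line versus loop trade-off described above can be made concrete with a small sketch. This is our own illustration in Python (the chapter itself gives no code): both routines compute the same sum, one by spending program space, the other by spending loop-control time.

```python
# Straight-line ("unrolled") summation: more program storage,
# but no loop-control overhead at run time.
def sum_straight_line(a):
    # Written out explicitly for a fixed list of four numbers.
    return a[0] + a[1] + a[2] + a[3]

# Loop summation: minimal program storage, but extra time per
# iteration for updating and testing the loop variable.
def sum_loop(a):
    total = 0
    for x in a:          # the same "instruction" is reused n times
        total += x
    return total

data = [3, 1, 4, 1]
print(sum_straight_line(data))  # 9
print(sum_loop(data))           # 9
```

The loop version works for a list of any length, which is the other reason it is preferred for all but the shortest lists.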
2
Computer Structures

V. Carl Hamacher
Zvonko G. Vranesic
Safwat G. Zaky

2-1 INTRODUCTION
2-2 FUNCTIONAL UNITS
2-3 INPUT UNIT
2-4 MEMORY UNIT
2-5 ARITHMETIC AND LOGIC UNIT
2-6 OUTPUT UNIT
2-7 CONTROL UNIT
2-8 BASIC OPERATIONAL CONCEPTS
2-9 BUS STRUCTURES
2-1 INTRODUCTION

The objective of this chapter is to introduce some basic concepts and associated terminology or jargon. We will give only a broad overview of the fundamental characteristics of computers, leaving the more detailed (and precise) discussion to the subsequent chapters.

Let us first define the meaning of the term "digital computer," or simply "computer," which is often misunderstood, despite the fact that most people take it for granted. In its simplest form, a contemporary computer is a fast electronic calculating machine which accepts digitized "input" information, processes it according to a "program" stored in its "memory," and produces the resultant "output" information.
Adapted from Computer Organization, by V. Carl Hamacher, Zvonko G. Vranesic, and Safwat M. Zaky. Copyright © 1978. Used by permission of McGraw-Hill, Inc. All rights reserved.
THE McGRAW-HILL COMPUTER HANDBOOK

2-2 FUNCTIONAL UNITS

The word computer encompasses
a large variety of machines, widely differing in size, speed, and cost. It is fashionable to use more specific words to represent some subclasses of computers. Smaller machines are usually called minicomputers, which is a reflection of their relatively lower cost, size, and computing power. In the early 1970s the term microcomputer was coined to describe a very small computer, low in price, and consisting of only a few large-scale integrated (LSI) circuit packages.

Large computers are quite different from minicomputers and microcomputers in size, processing power, cost, and the complexity and sophistication of their design. Yet the basic concepts are essentially the same for all classes of computers, relying on a few well-defined ideas which we will attempt to explain. Thus the following discussion should be applicable to most general-purpose digital computers.
A computer consists of five functionally independent main parts: input, memory, arithmetic and logic, output, and control units, as indicated in Fig. 2-1. The input unit accepts coded information from the outside world, either from human operators or from electromechanical devices.

FIG. 2-1 Basic functional units of a computer

The information is either stored in the memory for later reference or immediately handled by the arithmetic and logic circuitry, which performs the desired operations. The processing steps are determined by a "program" stored in the memory. Finally, the results are sent back to the outside world through the output unit. All these actions are coordinated by the control unit. The diagram in Fig. 2-1 does not show the connections between the various functional units. Of course, such connections must
exist. It is customary to refer to the arithmetic and logic circuits, in conjunction with the main control circuits, as the central processing unit (CPU). Similarly, input and output equipment is combined under the term input-output unit (I/O). This is reasonable in view of the fact that some standard equipment provides both input and output functions. The simplest such example is the often encountered teletypewriter terminal. We must emphasize that input and output functions are separated within the terminal. Thus the computer sees two distinct
FIG. 2-2 A typical large computer: IBM S370/158 (IBM Corp. Ltd.)

devices, even though the same human operator associates them as being part of the same unit.
In large computers the main functional units may comprise a number of separate, and often sizeable, physical parts. Fig. 2-2 is a photograph of such a computer. Minicomputers are much smaller in size. A basic minicomputer is often of desk-top dimensions, as illustrated by the two machines in Fig. 2-3. Even a fairly complex minicomputer system, such as the one shown in Fig. 2-4, tends to be small in comparison with large computers.

FIG. 2-3 Two minicomputers: PDP-8/M and PDP-11/05 (Digital Equipment Corp.)

FIG. 2-4 A minicomputer system (Digital Equipment Corp.)

At this point we should take a closer look at the "information" fed into the computer.
It is convenient to consider it as being of two types, namely, instructions and data. Instructions are explicit commands which:

• Govern the transfer of information within the machine, as well as between the machine and I/O devices
• Specify the arithmetic and logic operations to be performed

A set of instructions which perform a task is called a program. The usual mode of operation is to store a program (or several programs) in the memory. Then, the CPU fetches the instructions comprising the program from the memory and performs the desired operations. Instructions are normally executed in the sequential order in which they are stored, although it is possible to have deviations from this order, as in the case where branching is required. Thus the actual behavior of the computer is under the complete control of the stored program, except for the possibility of external interruption by the operator or by digital devices connected to the machine.
Data are numbers and encoded characters which are used as operands by the instructions. This should not be interpreted as a hard definition, since the term is often used to symbolize any digital information. Even within our definition of data, it is quite feasible that an entire program (that is, a set of instructions) may be considered as data to be processed by another program. An example of this is the task of compilation of a high-level language source program into machine instructions and data. The source program is the input data for the compiler program. The compiler translates the source program into a machine language program.

Information handled by the computer must be encoded in a suitable format.
Since most present-day hardware (that is, electronic and electromechanical equipment) employs digital circuits which have only two naturally stable states, namely, ON and OFF, binary coding is used. That is, each number, character of text, or instruction is encoded as a string of binary digits (bits), each having one of two possible values. Numbers are usually represented in the positional binary notation. Occasionally, the binary-coded decimal (BCD) format is employed, where each decimal digit is encoded by 4 bits.

Alphanumeric characters are also expressed in terms of binary codes. Several appropriate coding schemes have been developed. Two of the most widely encountered ones are ASCII (American Standard Code for Information Interchange), where each character is represented as a 7-bit code, and EBCDIC (extended binary-coded decimal interchange code), where 8 bits are used to denote a character.
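Both encodings are easy to demonstrate. The sketch below is our own illustration (not part of the original text): it builds the 4-bit-per-digit BCD string for a decimal number and the 7-bit ASCII code for a character, using Python's standard formatting functions.

```python
def to_bcd(decimal_digits: str) -> str:
    """Encode each decimal digit in 4 bits (binary-coded decimal)."""
    return " ".join(format(int(d), "04b") for d in decimal_digits)

def to_ascii7(ch: str) -> str:
    """Represent a character as its 7-bit ASCII code."""
    return format(ord(ch), "07b")

print(to_bcd("93"))    # 1001 0011
print(to_ascii7("A"))  # 1000001
```

Note that BCD spends 4 bits per digit even though only 10 of the 16 possible 4-bit patterns are used, which is why pure positional binary is more compact for arithmetic.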
2-3 INPUT UNIT

Computers accept coded information by means of input devices capable of "reading" such data. The simplest of these units consists of an electric typewriter electronically connected to the processing part of the computer. The typewriter is wired so that whenever a key on its keyboard is depressed, the corresponding letter or digit is automatically translated into its corresponding code, which may then be sent directly to either the memory or the CPU.

A related input device is the teletypewriter, such as the ASR 33 (Automatic Send-Receive) terminal.¹ In addition to its typewriter function, this teletypewriter contains a paper tape reader-punch station. Its low price and sufficient versatility make the teletypewriter one of the most frequently used input (and output) devices.
While typewriters and teletypewriters are unquestionably the simplest I/O devices, they are also the slowest and most awkward to use when dealing with large volumes of data. This necessitated the development of faster equipment, such as high-speed paper tape readers and card readers. A convenient way of preparing a hard copy of a program or data is to punch the coded information on paper cards, divided into columns (usually 80), where each column corresponds to one character. A card reader may then be used to determine the location of the punched holes and thus read the input information. This is a considerably faster process, with typical readers being able to read upward of 1000 cards per minute. Fig. 2-5 shows a photograph of a card reader.

FIG. 2-5 A punched card reader (IBM Corp. Ltd.)

Many other kinds of input devices are available. We should particularly mention graphic input devices, which utilize a cathode-ray tube (CRT) display.

¹ Product of Teletype Corporation.

2-4 MEMORY UNIT

The sole function of the memory unit is to store programs and data. Again, this function can be accomplished with a variety of equipment. It is useful to distinguish between two classes of memory devices, which comprise the primary and secondary storage.
Primary storage, or the main memory, is a fast memory capable of operating at electronic speeds, where programs and data are stored during their execution. It typically consists of either magnetic cores or semiconductor circuits. The former constitute core memories, while the latter are referred to as semiconductor memories.
The main memory contains a large number of storage cells, each capable of storing 1 bit of information. These cells are seldom handled individually. Instead, it is usual to deal with them in groups of fixed size. Such groups are called words. The main memory is organized so that the contents of one word, containing n bits, can be stored or retrieved in one basic operation.

To provide easy access to any word in the main memory, it is useful to associate a distinct name with each word location. These names are numbers that identify successive locations, and are hence called the addresses. A given word is accessed by specifying its address and issuing a control command that starts the storage or retrieval process.
The number of bits in each word is often referred to as the word length of the given computer. Large computers usually have 32 or more bits in a word, minicomputers have between 12 and 24 (a favorite choice is 16), while some microcomputers have only 4 or 8 bits per word. The capacity of the main memory is one of the factors that characterize the size of the computer. Small machines may have only a few thousand words (4096 is a typical minimum), whereas large machines often involve a few million words. Data is usually manipulated within the machine in units of words, multiples of words, or submultiples of words. A typical access to the main memory results in one word of data being read from the memory or written into it.
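The word-organized, addressable memory just described can be modeled directly. The sketch below is an illustrative model of our own (nothing in the chapter prescribes it): the memory is a list of 16-bit words, an address is simply an index, and values are truncated to the word length on a write.

```python
WORD_LENGTH = 16                      # bits per word (a favorite minicomputer choice)
WORD_MASK = (1 << WORD_LENGTH) - 1    # 0xFFFF: keeps a value inside one word

class MainMemory:
    def __init__(self, num_words=4096):   # 4096 words: a typical small machine
        self.cells = [0] * num_words

    def read(self, address):
        # One basic operation retrieves the whole word at the address.
        return self.cells[address]

    def write(self, address, word):
        # Values are truncated to the word length, as in real hardware.
        self.cells[address] = word & WORD_MASK

mem = MainMemory()
mem.write(100, 70000)   # too large for 16 bits; truncated on the way in
print(mem.read(100))    # 4464  (70000 mod 65536)
```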
FIG. 2-6 Magnetic disk storage (IBM Corp. Ltd.)

As mentioned above, programs and data must reside in the main memory during execution. Instructions and data can be written into it or read out under control of the processing unit. It is essential to be able to access any word location within the main memory as quickly as possible. Memories in which any location can be reached by specifying its address are called random access memories (RAM). The time required to access one word is called the memory cycle time. This is a fixed time, usually 300 nanoseconds (ns) to 1 microsecond (μs) for most modern computers.

While primary storage is essential, it tends to be expensive. Thus additional, cheaper secondary storage is used when large amounts of data have to be stored, particularly if some of the data need not be accessed very frequently. Indeed, a wide selection of suitable devices is available. These include magnetic disks, drums, and tapes. Figures 2-6 and 2-7 show a bank of disk units and a tape unit, respectively.
2-5 ARITHMETIC AND LOGIC UNIT

Execution of most operations within the computer takes place in the arithmetic and logic unit (ALU). Consider a typical example. Suppose two numbers located in the main memory are to be added. They are brought into the arithmetic unit, where the actual addition is carried out. The sum may then be stored in the memory. Similarly, any other arithmetic or logic operation (for example, multiplication, division, comparison of numbers) is done by bringing the required operands into the ALU, where the necessary operation is performed.

We should point out that not all operands in an ongoing computation reside in the main memory, since the CPU normally contains one or more high-speed storage cells called registers, which may be used for temporary storage of often-used operands. Each such register can store one word of data. Access times to registers are typically 5 to 10 times faster than memory access times.
FIG. 2-7 A magnetic tape unit (IBM Corp. Ltd.)

The control and arithmetic units are usually many times faster in basic cycle time than other devices connected to the computer system. It is thus possible to design relatively complex computer systems containing a number of external devices controlled by a single CPU. These devices can be teletypes, magnetic tape and disk memories, sensors, displays, mechanical controllers, etc. Of course, this is possible only because of the vast difference in speed, enabling the fast CPU to organize and control the activity of many slower devices.
2-6 OUTPUT UNIT

The output unit is the counterpart of the input unit. Its function is to return the processed results to the outside world.

A number of devices provide both an output function and an input function. This is the case with typewriters, teletypewriters, and graphic displays. This dual role of some devices is the reason for combining input and output units under the single name of I/O unit. A photograph of a typical teletypewriter is shown in Fig. 2-8.

Of course, there exist devices used for output only, the most familiar example being the high-speed printer. It is possible to produce printers capable of printing as many as 10,000 lines per minute. These are tremendous speeds in the mechanical sense, but still very slow compared to the electronic speeds of the CPU.

Sometimes it is necessary to produce the output data in some form suitable for later use as input data. Punched cards may be generated with a card punch. Similarly, paper tape punches are available for producing a paper tape output.
FIG. 2-8 A teletypewriter (IBM Corp. Ltd.)

Finally, we should observe that some of the bulk storage devices, used primarily for secondary storage, may also be employed for I/O purposes. As a specific case, consider the magnetic tape. Suppose that a particular job involves gathering data from a set of terminals, which is done over a relatively long period of time. It is likely that such a task can be conveniently and economically handled by a minicomputer. Using a large computer for this purpose would probably be more expensive. However, let us assume that when the data is finally collected, it must be processed in some intricate way that is beyond the capabilities of the minicomputer. A reasonable arrangement is to have the minicomputer write the collected data onto a magnetic tape as part of its output (or storage!) process. The completed tape can be transported to the large computer, which can then input the data from the tape and carry out the actual processing. In this way the large (and expensive) computer is used only where necessary, with a corresponding reduction in the overall cost of processing this particular job.
2-7 CONTROL UNIT

The previously described units provide the necessary tools for storing and processing information. Their operation must be coordinated in some organized way, which is the task of the control unit. It is effectively the nerve center of the whole machine, used to send control signals to all other units.

A line printer will print a line only if it is specifically instructed to do so. This may typically be effected by an appropriate Write instruction executed by the CPU. Processing of this instruction involves the sending of timing signals to and from the printer, which is the function of the control unit.

We can say, in general, that I/O transfers are controlled by software instructions which identify the devices involved and the type of transfer. However, the actual timing signals which govern the transfers during execution are generated by the control circuits. Data transfers between the CPU and memory are also controlled by the control unit in a similar fashion.
Conceptually, it is reasonable to think of the control unit as a well-defined, physically separable central unit which somehow interacts with the rest of the machine. In practice this is seldom the case. Much of the control circuitry is physically distributed throughout the machine. It is connected by a rather large set of control lines (wires), which carry the signals used for timing and synchronization of events in all units.

An important part of the control unit is a display panel, with switches and light indicators, which enables the operator to see what is happening inside the computer. The panel is particularly useful when something goes wrong in the computing process, as it often does. In such situations the operator can use the panel to discover the difficulty and, hopefully, remedy it. Certainly, some faults cannot be easily corrected (for example, failure of an electronic component), but many commonly occurring difficulties (minor software problems) can be diagnosed and corrected by the operator.

In summary, the operation of a typical general-purpose computer can be described as follows:
• It accepts information (programs and data) through the input unit and transfers it to the memory.
• Information stored in the memory is fetched, under program control, into the ALU to be processed.
• Processed information leaves the computer through its output unit.
• All activities inside the machine are under the control of the control unit.

2-8 BASIC OPERATIONAL CONCEPTS
In the previous section it was stated that the behavior of the computer is governed by means of instructions. To perform a given task, an appropriate program consisting of a set of instructions is stored in the main memory. Individual instructions are brought from the memory into the CPU, which executes the specified operations. In addition to the instructions, it is necessary to use some data as operands, which are also stored in the memory. A typical instruction may be

Add LOCA,R0

which adds the operand at memory location LOCA to the operand in a register in the CPU called R0, and places the sum into register R0. This instruction requires several steps to be performed. First, the instruction must be transferred from the main memory into the CPU. Then, the operand at LOCA must be fetched. This operand is added to the contents of R0. Finally, the resultant sum is stored in register R0.
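The net effect of the instruction is simple to state. In the sketch below (an illustration of our own; the names LOCA and R0 follow the text, while the dictionaries are just a modeling choice):

```python
memory = {"LOCA": 25}    # operand stored at memory location LOCA
registers = {"R0": 17}   # operand already held in register R0

# Add LOCA,R0 :  R0 <- R0 + memory[LOCA]
registers["R0"] = registers["R0"] + memory["LOCA"]
print(registers["R0"])   # 42
```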
FIG. 2-9 Connections between the CPU and the main memory

Transfers between the main memory and the CPU start by sending the address of the memory location to be accessed to the memory unit and issuing the appropriate control signals. Then data is transferred from or to the memory. Fig. 2-9 shows how the connection between the main memory and the CPU can be made. It also shows a few details of the CPU that have not been discussed yet, but which are operationally essential. The interconnection pattern for these components is not shown explicitly, since at this point we will discuss their functional characteristics only.

The CPU contains the arithmetic and logic circuitry as the main processing element. It also contains a number of registers used for temporary storage of data. Two registers are of particular interest. The instruction register (IR) contains the instruction that is being executed. Its output is available to the control circuits, which generate the timing signals needed for control of the actual processing circuits to execute the instruction. The program counter (PC) is a register which keeps track of the execution of a program. It contains the memory address of the instruction currently being executed. During the execution of the current instruction, the contents of the PC are updated to correspond to the address of the next instruction to be executed. It is customary to say that the PC points at the instruction that is to be fetched from the memory. Besides the IR and PC, there exists at least one other, and usually several other, general-purpose registers.

Finally, there are two registers that facilitate communication with the main memory. These are the memory address register (MAR) and the memory data register (MDR). As the name implies, the MAR is used to hold the address of the location to or from which data is to be transferred. The MDR contains the data to be written into or read out of the addressed location.

Let us now consider some typical operating steps. Programs reside in the main memory and usually get there via the input unit. Execution of a program starts by setting the PC to point at the first instruction of the program. The contents of the PC are transferred to the MAR and a Read control signal is sent to the memory. After a certain elapsed time (corresponding to the memory access time), the addressed word (in this case the first instruction of our program) is read out of the memory and loaded into the MDR. Next, the contents of the MDR are transferred to the IR, at which point the instruction is ready to be decoded and executed.

If the instruction involves an operation to be performed by the ALU, it will be necessary to obtain the required operands. If an operand resides in the memory (it could also be in a general register in the CPU), it will have to be fetched by sending its address to the MAR and initiating a Read cycle. When the operand has been read from the memory into the MDR, it may be transferred from the MDR to the ALU. Having fetched one or more operands in this way, the ALU can perform the desired operation. If the result of this operation is to be stored in the memory, it must be sent to the MDR. The address of the location where the result is to be stored is sent to the MAR and a Write cycle is initiated. In the meantime, the contents of the PC are incremented to point at the next instruction to be executed. Thus, as soon as the execution of the current instruction is completed, a new instruction fetch may be started.
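The register transfers just described can be followed step by step in a toy simulation. This is a sketch under simplifying assumptions of our own (one-word instructions, an invented two-field instruction format, a single register R0); it is not the instruction format of any real machine:

```python
# A toy machine: each stored instruction is a tuple (operation, memory_address).
memory = {
    0: ("ADD", 10),   # program: R0 <- R0 + memory[10]
    1: ("HALT", 0),
    10: 5,            # data operand
}

PC, R0 = 0, 7         # program counter and one general-purpose register

while True:
    MAR = PC                  # address of the next instruction goes to the MAR
    MDR = memory[MAR]         # Read cycle: the instruction arrives in the MDR
    IR = MDR                  # the instruction moves to the IR for decoding
    PC += 1                   # PC now points at the next instruction
    op, addr = IR
    if op == "ADD":
        MAR = addr            # operand address to the MAR
        MDR = memory[MAR]     # Read cycle fetches the operand into the MDR
        R0 += MDR             # the ALU performs the addition
    elif op == "HALT":
        break

print(R0)  # 12
```

Note how the MAR and MDR are reused for both the instruction fetch and the operand fetch, exactly as in the description above.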
In addition to transferring data between the main memory and the CPU, it is necessary to have the ability to accept data from input devices and to send data to output devices. Thus some machine instructions with the capability of handling I/O transfers must be provided.

Normal execution of programs may sometimes be altered. It is often the case that some device requires urgent servicing. For example, a monitoring device in a computer-controlled industrial process may have detected a dangerous condition. To deal with such situations sufficiently quickly, the normal flow of the program that is being executed by the CPU must be interrupted. To achieve this, the device can raise an interrupt signal. An interrupt is a service request, where the service is performed by the CPU by executing a corresponding interrupt-handling program. Since such diversions may alter the internal state of the CPU, it is essential that its state be saved in the main memory before servicing the interrupt. This normally involves storing the contents of the PC, the general registers, and some control information. Upon termination of the interrupt-handling program, the CPU's state is restored so that execution of the interrupted program may continue.
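The save-and-restore discipline can be sketched in a few lines. This is a deliberately simplified illustration of our own; real machines do the saving in hardware or microcode, often on a dedicated stack:

```python
cpu_state = {"PC": 204, "R0": 42, "FLAGS": 0b0010}
saved_area = {}   # region of main memory reserved for the saved state

def service_interrupt(handler):
    saved_area.update(cpu_state)   # save the CPU state in main memory...
    handler()                      # ...run the interrupt-handling program...
    cpu_state.update(saved_area)   # ...then restore the state and resume.

def alarm_handler():
    # The handler is free to use the PC and registers for its own work.
    cpu_state["PC"] = 9000
    cpu_state["R0"] = 0

service_interrupt(alarm_handler)
print(cpu_state["PC"], cpu_state["R0"])  # 204 42  (the interrupted program resumes intact)
```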
2-9 BUS STRUCTURES

So far we have discussed the functional characteristics of the individual parts that constitute a computer. To form an operational system, they must be connected together in some organized way. There are many ways of doing this, and we will consider the three most popular structures.

If a computer is to achieve a reasonable speed of operation, it must be organized in a parallel fashion: all units should be able to handle one full word of data at a given time. This means that data transfers between units are to be done in parallel, which implies that a considerable number of wires (lines) are needed to establish the necessary connections. A collection of such wires,
FIG. 2-10 A two-bus structure

which have some common identity, is called a bus. In addition to the wires which carry the data, it is essential to have some lines for control purposes. Thus a bus consists of both data and control lines.

Fig. 2-10 shows the simplest form of a two-bus structured computer. The CPU interacts with the memory via a memory bus. Input and output functions are handled by means of an I/O bus, so that data passes through the CPU en route to the memory. In such configurations the I/O transfers are usually under direct control of the CPU. It initiates the transfer and monitors its progress until completion. A commonly used term to describe this type of operation is programmed I/O.

A somewhat different version of a two-bus structure is given in Fig. 2-11. The relative positions of the CPU and memory are reversed. Again, a memory bus exists for communication between them. However, I/O transfers are made directly to or from the memory. Since the memory has little in the way of circuitry capable of controlling such transfers, it is necessary to establish a different control mechanism. A standard technique is to provide I/O channels as part of the I/O equipment, which have the necessary capability to control the transfers. In fact, they resemble a small CPU and can often be thought of as computers in their own right. A typical procedure is to have the CPU initiate a transfer by passing the required information to the I/O channel, which then takes over and controls the actual transfer.

We have already mentioned that a bus consists of a collection of distinct lines, serving different purposes. While at this point it is not necessary to get into the details, it is useful to note that the memory bus in the above diagram contains a data bus and an address bus. The data bus is used for transmission of data.
Hence its number of lines corresponds to the number of bits in the word. To access data in the memory, it is necessary to issue an address to indicate its location. The CPU sends address bits to the memory via the address bus.

FIG. 2-11 An alternative two-bus structure

The above descriptions are representative of most computers. Fig. 2-11 usually implies a large computer. Many machines have several distinct buses, so
FIG. 2-12 Single-bus structure

that one could in fact treat them as multibus machines. However, their operation is adequately represented by the two-bus examples, since the main reason for the inclusion of additional buses is to improve the operating speed through further parallelism.
A significantly different structure, which has a single bus, is shown in Fig. 2-12. All units are connected to this bus, so that it provides the sole means of interaction. Since the bus can be used for only one transfer at a time, it follows that only two units can be actively using the bus at any given instant. The bus is likely to consist of the data bus, the address bus, and some control lines. The main virtue of the single-bus structure is its low cost and flexibility for attaching peripheral devices. The trade-off is a lower operating speed. It is not surprising that a single-bus structure is primarily found in small machines, namely, minicomputers and microcomputers.

Differences in bus structure have a pronounced effect on the performance of computers. Yet from the conceptual point of view (at least at this level of detail) they are not crucial in any functional description. Indeed, the fundamental principles of computer operation are essentially independent of the particular bus structure.
Transfer of information on the bus can seldom be done at a speed directly comparable to the operating speed of all the devices connected to the bus. Some electromechanical devices are relatively slow, for example, teletypewriters, card readers, and printers. Others, such as disks and tapes, are considerably faster. Main memory and the CPU operate at electronic speeds, making them the fastest parts of the computer. Since all these devices must communicate with each other via the bus, it is necessary to provide an efficient transfer mechanism which is not constrained by the slow devices.

A common approach is to include buffer registers with the devices to hold the information during transfers. To illustrate this technique, consider the transfer of an encoded character from the CPU to a teletypewriter, where it is to be printed. The CPU effects the transfer by sending the character via the bus to the teletypewriter output buffer. Since the buffer is an electronic register, this transfer requires relatively little time. Once the buffer is loaded, the teletypewriter can start printing without further intervention by the CPU. At this time the bus is no longer needed and can be released for use by other devices. The teletypewriter proceeds with the printing of the character in its buffer and is not available for further transfers until this process is completed.
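The buffer-register idea can be sketched as follows. This is an illustration of the principle only (the device class, its one-character buffer, and the timing are all invented for the example):

```python
import queue
import threading
import time

class Teletypewriter:
    """Output device with a one-character output buffer register."""
    def __init__(self):
        self.buffer = queue.Queue(maxsize=1)   # the output buffer register
        self.printed = []                      # the "paper"
        threading.Thread(target=self._print_loop, daemon=True).start()

    def _print_loop(self):
        while True:
            ch = self.buffer.get()   # take the character from the buffer register
            time.sleep(0.01)         # slow mechanical printing
            self.printed.append(ch)
            self.buffer.task_done()

tty = Teletypewriter()
for ch in "OK":
    # Loading the buffer is a fast electronic transfer; the CPU (this loop)
    # only waits when the one-character buffer is still full.
    tty.buffer.put(ch)
tty.buffer.join()            # wait for the device to finish printing
print("".join(tty.printed))  # OK
```

The sender is blocked only while the single-character buffer is occupied, which mirrors how the bus is released as soon as the buffer register is loaded.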
3
Number Systems and Codes

Zvi Kohavi

3-1 NUMBER SYSTEMS
3-2 BINARY CODES
3-3 ERROR DETECTION AND CORRECTION

3-1 NUMBER SYSTEMS

Convenient as the decimal number system generally is, its usefulness in machine computation is limited because of the nature of practical electronic devices. In most present digital machines the numbers are represented, and the arithmetic operations performed, in a different number system, called the binary number system. This section is concerned with the representation of numbers in various number systems and with methods of conversion from one system to another.
Number Representation An
ordinary decimal
number
actually represents a polynomial in powers of 10.
For example, the number 123.45 represents the polynomial 123.45
=
1
•
10^
+
2
10'
•
+
3
•
10°
This method of representing decimal numbers system, and the number 10 In a system
whose base
N
=
is b,
aq.ib"'
is
is
+
4
•
known
10'
+
5
•
10"^
as the decimal
number
referred to as the base (or radix) of the system.
a positive
+
•
•
number
+
aob°
A'^
+
•
represents the polynomial •
•.
+
a.pb'P
ql
i=p
Adapted from Switching and Finite Automata Theory, 2d
ed.,
by Zvi Kohavi, Copyright
©
1978, 1970. Used by
permission of McGrawHill, Inc. All rights reserved.
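The polynomial interpretation above can be sketched in a few lines; `to_decimal` below is a hypothetical helper that evaluates the sum of a_i·b^i for digits given most-significant-first, with q of them lying to the left of the radix point:

```python
def to_decimal(digits, b, q):
    """Evaluate N = sum of a_i * b**i, with the digits a_i given
    most-significant-first and q of them left of the radix point."""
    return sum(d * b ** (q - 1 - i) for i, d in enumerate(digits))

# 123.45 in base 10: three integer digits, two fractional digits
print(to_decimal([1, 2, 3, 4, 5], 10, 3))
# 101.1 in base 2 equals 5.5 in base 10
print(to_decimal([1, 0, 1, 1], 2, 3))  # 5.5
```

The same helper performs conversion from any base b into decimal, since the polynomial is simply evaluated in ordinary arithmetic.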
THE McGRAW-HILL COMPUTER HANDBOOK

where the base b is an integer greater than 1 and each coefficient a_i is an integer in the range 0 to b − 1.

BOOLEAN ALGEBRA AND LOGIC NETWORKS

The AND operation can be generalized to the case of n variables: x₁·x₂·…·xₙ is logic-1 if and only if all n variables are logic-1. As indicated above, the dot symbol or juxtaposition will be used to denote the AND operation. Frequently in literature, however, the AND operation is denoted by the symbol ∧. The AND operation between the two variables x and y is then written x ∧ y.

The next Boolean operation to be introduced is the OR operation. This operation is denoted by a plus sign (+). The OR operation between the two variables x and y is written x + y and is often referred to as logical addition. The postulates for the OR operation are given in Table 4-2. From this table it can be seen that the value of x + y is logic-0 if and only if both x and y are logic-0; otherwise, x + y is logic-1. This operation can also be generalized to any number n of variables. Thus, x₁ + x₂ + ··· + xₙ is logic-1 if at least one of the variables is logic-1; otherwise, x₁ + x₂ + ··· + xₙ is logic-0.
Although the plus sign will always be used to indicate the OR operation in this book, the symbol ∨ frequently appears in computer literature. In this case the OR operation between the two variables x and y is written as x ∨ y.
The final operation to be introduced at this time is the NOT operation, also known as complementation, negation, and inversion. An overbar ( ¯ ) will be used to denote the NOT operation. Thus, the negation of the single variable x is written x̄. As indicated in Table 4-3, the postulates of the NOT operation are

x̄ = 1 if x = 0    and    x̄ = 0 if x = 1

or, equivalently,

0̄ = 1    and    1̄ = 0

The prime symbol (′) is also used to indicate the NOT operation in computer literature. In this case the complementation of x is written as x′.

TABLE 4-3 Definition of the NOT Operation

  x    x̄
  0    1
  1    0

The two-valued Boolean algebra can now be defined as a mathematical system with the elements logic-0 and logic-1 and the three operations AND, OR, and NOT, whose postulates are given by Tables 4-1 to 4-3.
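As a quick sketch, the postulates of the three operations can be checked by brute force, with the two logic values represented by the integers 0 and 1:

```python
# The postulates of Tables 4-1 to 4-3, verified over both logic values.
AND = lambda x, y: x & y   # logic-1 iff both inputs are logic-1
OR  = lambda x, y: x | y   # logic-1 iff at least one input is logic-1
NOT = lambda x: 1 - x      # complementation

for x in (0, 1):
    for y in (0, 1):
        assert AND(x, y) == int(x == 1 and y == 1)
        assert OR(x, y) == int(x == 1 or y == 1)
print(AND(1, 1), OR(0, 0), NOT(0))  # 1 0 1
```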
4-3 TRUTH TABLES AND BOOLEAN EXPRESSIONS

Now that the constituents of a Boolean algebra have been defined, it is next necessary to show how they are used. The object of a Boolean algebra is to describe the behavior and structure of a logic network. Fig. 4-1 shows a logic network as a black box. The inputs are the Boolean variables x₁, x₂, …, xₙ, and the output is f. To describe the terminal behavior of the black box, it is necessary to express the output f as a function of the input variables x₁, x₂, …, xₙ. This can be done by using a truth table (or table of combinations) or by using Boolean expressions.

FIG. 4-1 The logic network as a black box

Logic networks that are readily described by truth tables or Boolean expressions are said to be combinational networks. A combinational network is one in which the values of the input variables at any instant determine the values of the output variables. A second class of logic networks is that in which there is an internal memory. Such networks are said to be sequential and have the property that the past as well as the present input values determine the output values from the network. This chapter will concentrate on combinational networks.

As indicated earlier, each of the Boolean variables x₁, x₂, …, xₙ is restricted to the two values logic-0 and logic-1. Furthermore, all logic points within the black box, including the output line, are also restricted to these values. A tabulation of all the possible input combinations of values and their corresponding output values, i.e., functional values, is known as a truth table (or table of combinations). If there are n input variables and one functional output, this table will consist of 2^n rows and n + 1 columns. The general form of a truth table is shown in Table 4-4. It should be noted that a simple way of including all possible input values in a truth table is to count in the binary number system from 0 to 2^n − 1. The value of f will, of course, be 0 or 1 in each row, depending upon the specific function.

The second method of describing the terminal behavior of a combinational network uses a Boolean expression. This is a formula consisting of Boolean constants and variables connected by the Boolean operators AND, OR, and NOT. Parentheses may be used to indicate the order in which the operations are to be performed.

TABLE 4-4 The Truth Table
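The truth-table construction described above, counting the input rows in binary, can be sketched as follows (the three-variable function used here is an assumed example):

```python
from itertools import product

def truth_table(f, n):
    """Return the 2**n rows (input values plus functional value) of f."""
    return [bits + (f(*bits),) for bits in product((0, 1), repeat=n)]

# assumed example function: f(x, y, z) = x AND (y OR z)
table = truth_table(lambda x, y, z: x & (y | z), 3)
for row in table:
    print(row)
# 8 rows; the input columns count 000, 001, ..., 111 in binary
```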
To illustrate, consider obtaining the minterm canonical form of the expression

x + x̄z + x̄ȳz

The first term is missing the y and z variables. They can be supplied by ANDing the term with (y + ȳ)(z + z̄), which is equivalent to ANDing x with logic-1. By similar reasoning, the variable y can be introduced into the second term x̄z by ANDing it with y + ȳ. Finally, the last term is a minterm since all three variables appear. Combining our results, we can rewrite the given expression as

x(y + ȳ)(z + z̄) + x̄(y + ȳ)z + x̄ȳz

If the distributive law is now applied to this expression and duplicate terms are dropped when they appear, the minterm canonical form will result. In this case we have

xyz + xyz̄ + xȳz + xȳz̄ + x̄yz + x̄ȳz
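The expansion can be cross-checked by enumeration: the minterm canonical form lists one product term for each truth-table row whose functional value is logic-1. The sketch below assumes the expression being expanded is f = x + x̄z + x̄ȳz, as inferred from the steps above, and uses prime notation for complements:

```python
from itertools import product

def minterms(f, names):
    """List the minterms of f, one per row whose value is logic-1."""
    terms = []
    for bits in product((0, 1), repeat=len(names)):
        if f(*bits):
            terms.append("".join(v if b else v + "'"
                                 for v, b in zip(names, bits)))
    return terms

# assumed expression: f = x + x'z + x'y'z
f = lambda x, y, z: x | ((1 - x) & z) | ((1 - x) & (1 - y) & z)
print(minterms(f, "xyz"))
# ["x'y'z", "x'yz", "xy'z'", "xy'z", "xyz'", "xyz"] -- six minterms,
# matching the expansion above
```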
The Maxterm Canonical Form

A canonical expression for a function is one that is unique and has a standard form. It can therefore be of value in determining the equivalence of functions. That is, two functions are equivalent if and only if their canonical expressions are the same.

The minterm canonical form consists of a sum of product terms in which every variable appears within each product term. Another standard formula in Boolean algebra is known as the maxterm canonical form or standard product of sums. As in the case of the minterm canonical form, the maxterm canonical form can be obtained from the truth table or by expanding a given Boolean expression.

Again consider Table 4-6. This truth table denotes a Boolean function f. The truth table for the complement of this function, i.e., f̄, is constructed by complementing each of the values in the last column, i.e., the functional values. The resulting truth table is shown in Table 4-9. Using the procedure of Sec. 4-3, we can now write the minterm canonical form for the complementary function f̄ as

f̄(x₁, x₂, x₃) = x̄₁x̄₂x̄₃ + x̄₁x₂x̄₃ + x̄₁x₂x₃ + x₁x̄₂x₃ + x₁x₂x̄₃
412
THE McGRAWHILL COMPUTER HANDBOOK both sides of the above equation are complemented with the use of law, an equation for the function /will result:
If
DeMorgan's [f(X,X2,X3)]
=
f(x,, X2, X3)
= =
last
+
(X1X2X3
X1X2X3
+
+
X]X2X3
X1X2X3
+
X1X2X3)
(XiX2X3)(x,X2X3)(x,X2X3)(XiX2X3)(x,X2X3) (X,
(Xi
This
=
+ +
expression
+ +
X2 X2
is
X3)(Xi
+
+
X2
+
X3) (Xi
+
X3)(X,
+
+
X2
X3)
X3)
maxterm canonical form
the
X2
for the function /.
The maxterm canonical form or standard product of sums is characterized as a product of sum terms in which every variable of the function appears exactly once, either complemented or uncomplemented, in each sum term. The sum terms that comprise the expression are called maxterms.

TABLE 4-9 The Truth Table for the Complement of the Function Given in Table 4-6

In general, to obtain the maxterm canonical form from a truth table, the truth table of the complementary function is first written by changing each logic-1 functional value to logic-0 and vice versa. The minterm canonical form is then written for the complementary function. Finally, the resulting expression is complemented by DeMorgan's law to obtain the maxterm canonical form.
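The three-step procedure just described can be sketched directly: each row where f is logic-0 (i.e., each minterm of f̄) yields one maxterm, with a variable complemented exactly when its value in that row is 1. The function below is an assumed example:

```python
from itertools import product

def maxterms(f, names):
    """One maxterm per row where f is logic-0, i.e., per minterm of f'."""
    terms = []
    for bits in product((0, 1), repeat=len(names)):
        if not f(*bits):
            lits = [v + "'" if b else v for v, b in zip(names, bits)]
            terms.append("(" + " + ".join(lits) + ")")
    return terms

f = lambda x, y, z: x | z   # assumed example function f = x + z
print(maxterms(f, "xyz"))   # ["(x + y + z)", "(x + y' + z)"]
```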
The maxterm canonical form can also be arrived at algebraically if a Boolean expression is given. In this process, use is made of the theorem xx̄ = 0 and the distributive law x + yz = (x + y)(x + z). To illustrate the procedure, consider the expression

xy + ȳz

Since the maxterm canonical form consists of a product of sum terms, it is first necessary to rewrite the expression in this general form. This rewriting can be done by use of the distributive law. In this case,

xy + ȳz = (xy + ȳ)(xy + z)
        = (x + ȳ)(y + ȳ)(x + z)(y + z)
        = (x + ȳ)·1·(x + z)(y + z)
        = (x + ȳ)(x + z)(y + z)

Once an expression is obtained that consists of only a product of sum terms, it is next necessary to determine whether each sum term is a maxterm. If not, we can introduce the appropriate variables by using the theorem xx̄ = 0. Thus, for the above example, we get

(x + ȳ)(x + z)(y + z) = (x + ȳ + 0)(x + 0 + z)(0 + y + z)
                      = (x + ȳ + zz̄)(x + yȳ + z)(xx̄ + y + z)
Finally, the distributive law is applied and duplicate terms are removed by the idempotent law. Thus, we have

(x + ȳ)(x + z)(y + z)
  = (x + ȳ + z)(x + ȳ + z̄)(x + y + z)(x + ȳ + z)(x + y + z)(x̄ + y + z)
  = (x + ȳ + z)(x + ȳ + z̄)(x + y + z)(x̄ + y + z)
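The chain of equalities above can be confirmed by brute force over all eight input combinations; g1, g2, and g3 below encode the original expression xy + ȳz, its product-of-sums form, and its maxterm canonical form:

```python
from itertools import product

g1 = lambda x, y, z: (x and y) or ((not y) and z)
g2 = lambda x, y, z: (x or not y) and (x or z) and (y or z)
g3 = lambda x, y, z: ((x or not y or z) and (x or not y or not z)
                      and (x or y or z) and (not x or y or z))

# all three forms agree on every row of the truth table
for x, y, z in product((0, 1), repeat=3):
    assert bool(g1(x, y, z)) == bool(g2(x, y, z)) == bool(g3(x, y, z))
print("all three forms agree")
```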
4-6 THE KARNAUGH MAP METHOD OF BOOLEAN SIMPLIFICATION

In the previous section it was stated that the Boolean algebra theorems provide a means for the manipulation of Boolean expressions. Since the expressions resulting from such manipulation are equivalent, the combinational logic networks that they describe will be equivalent. It is therefore of interest to determine what is, in some sense, the "simplest" expression. Unfortunately, such an expression may be difficult to determine by algebraic manipulations. Several methods have been developed for deriving simple expressions. One such method, utilizing Karnaugh maps, will be presented in this section.
Karnaugh Maps

A Karnaugh map is a graphic representation of a truth table. The structure of the Karnaugh maps for two-, three-, and four-variable functions is shown in Figs. 4-2 to 4-4 along with the general form of the corresponding truth tables. It can be seen that for each row of a truth table, there is one cell in a Karnaugh map, and vice versa. Each cell in a map is located by a coordinate system according to its axis labelings, and the entry in the cell is the value of the function for the corresponding assignment of values associated with the particular cell.

FIG. 4-2 A two-variable Boolean function (a) Truth table (b) Karnaugh map

Fig. 4-5 gives the truth table and Karnaugh map for the particular Boolean function

f(x, y, z) = x(y + z) + x̄z̄

The truth table is arrived at by evaluating the expression for the eight combinations of values as described in Sec. 4-3, and the Karnaugh map is then constructed as indicated by the general form shown in Fig. 4-3.
When Karnaugh maps are used for simplifying Boolean expressions, rectangular groupings of cells are formed. In general, every 2" X 2* rectangular grouping of cells corresponds to a product term with n — a — b variables, where n is the total number of variables associated with the map and a and b are nonnegative integers. Since the dimensions of these groupings are 2°
X
2*, it
)
414
THE McGRAWHILL COMPUTER HANDBOOK X
y
/(x, y, z)
z
yz
no, 1 1
0, 0)
00
01
/(0,0,0)
/(0,0,1)
/(0,1,1) /(0,1,0)
/(1, 0,0)
/(1,0,1)
/(1, 1,1)
/(0,0, 1) /(O, 1,0)
1
/(O,
1
1)
1,
10
11
/(I, 0,0)
1
/(1,0, 1) /(I, 1,0)
1
1
1
1
1
1
/(I,
1
1)
1,
(b)
(a)
A
FIG. 43
/(I, 1,0)
threevariable Boolean function (a) Truth table (b) Kar
naugh map
w
X
z
y
f(w,x,y,z) /(O, 0, 0, 0)
1
yz
/(O, 0, 0, 1)
00
/(O, 0, 1,0)
1
1
1
1
1
1
1
1
00 /(0,0,0,0) /(0,0,0,1) /(0.0,1,1) /(0,0,1,0)
1
/(O, 1,0,1)
/(O,
1,
1,0)
1
/(O,
1,
1,
/(I,
0,0,0)
1
/(I, 0,0,1)
1
/(I, 0,1,0) /(I, 0,1,1)
1
10
11
/(0,0, 1,1) /(O, 1,0,0)
1
1
01
/(0,1,1,1) /(0,1,1,0)
01
/(0, 1,0,0) /(0, 1,0,1)
11
/(I, 1,0,0) /(I, 1.0,1) /(l.l, 1.1) /(1, 1,1,0)
10
/(I, 0,0,0) /(1, 0,0,1) /(
1)
1
,0,
1 , 1
/(I. 0,1,0)
/(I, 1,0,0)
1
1
1
1
1
1
1
/(I, 1,0, 1) ib)
/(I, 1
1, 1,0) /(l.l, 1,1)
(a)
A
FIG. 44
fourvariable Boolean function (a) Truth table (b)
x
y
z
/
1
1
1
1
00
Karnaugh map
01
1 1
1
1
1
1
1
1
1
1
1
1
1
1
I
1
(a)
x(y
follows that the total 2.
(b)
45
FIG.
+
z
The Boolean
+
number
1
1
function
f(x,y,z)
xz (a) Truth table (b) Karnaugh
= map
of cells in a grouping must always be a power of
All future references to groupings will pertain only to those whose dimensions
are 2"
X
Minimal
2*.
Sums
One method of obtaining a Boolean expression from a Karnaugh map is to consider only those cells that have a logic-1 entry. These are called 1-cells. They correspond to the minterms of the canonical expression. Every 2^a × 2^b grouping of 1-cells will correspond to a product term that can be used in describing part of the function. If a sufficient number of groupings are selected such that every 1-cell appears in at least one grouping, the ORing of these product terms will completely describe the function. By a judicious selection of groupings, simple Boolean expressions can be obtained. One measure of the degree of simplicity of a Boolean expression is a count of the number of occurrences of letters, i.e., variables and their complements, called literals, in the expression. Expressions consisting of a sum of product terms and having a minimum number of literals are called minimal sums.

There are two guidelines for a judicious selection of groupings that will enable a minimal sum to be written. First, the groupings should be as large as possible. This guideline follows from the fact that the larger the grouping, the fewer will be the number of literals in its corresponding product term. Second, a minimum number of groupings should be used. This guideline stems from the fact that each grouping corresponds to a product term. By using a minimum number of groupings, the number of product terms, and consequently the number of literals in the expression, can be kept to a minimum.

In Fig. 4-6 a four-variable Karnaugh map and the optimal groupings of 1-cells are shown. No larger groupings are possible on this map. Also, no fewer than three groupings will encompass all the 1-cells. The columnar grouping corresponds to the rectangle with dimensions 2^2 × 2^0 = 4 × 1, the square grouping has dimensions 2^1 × 2^1 = 2 × 2, and the small grouping of two cells has dimensions 2^0 × 2^1 = 1 × 2. It should be noted that the rectangular groupings may overlap.

FIG. 4-6 Groupings on a four-variable Karnaugh map

In order to write the Boolean expression from a Karnaugh map, reference must be made to the labels along the map's axes. It is necessary to determine which axis variables do not change value within each grouping. Those variables whose values are the same for each cell in the grouping will appear in the product term. A variable will be complemented if its value is always logic-0 in the grouping and will be uncomplemented if its value is always logic-1.
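This rule translates directly into code: given the cells of a grouping, a variable contributes a literal exactly when its value is constant over the grouping. The cell tuples and variable names below are illustrative:

```python
def product_term(cells, names="wxyz"):
    """Product term of a grouping: keep each variable whose value is the
    same in every cell; complement it when that constant value is 0."""
    lits = []
    for i, v in enumerate(names):
        values = {cell[i] for cell in cells}
        if values == {1}:
            lits.append(v)
        elif values == {0}:
            lits.append(v + "'")
        # a variable that changes value over the grouping is dropped
    return "".join(lits)

# a 2 x 2 square grouping in which w = 0 and y = 1 throughout
square = [(0, 0, 1, 0), (0, 0, 1, 1), (0, 1, 1, 0), (0, 1, 1, 1)]
print(product_term(square))  # w'y
```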
To illustrate the writing of a Boolean expression, again consider Fig. 4-6. Referring to the square grouping, we can see that the grouping appears in the first and second rows of the map. In these rows the variable w has the value of logic-0. Thus, the product term for this grouping must contain w̄. Furthermore, since the x variable changes value in these two rows, this variable will not appear in the product term. When we now consider the two columns that contain the grouping, the y variable has the same value in these two columns, i.e., logic-1, and hence, the literal y must appear in the product term. Finally, we can see that the z variable changes value in these two columns and, hence, will not appear in the product term. Combining the results, we find that the square grouping corresponds to the product term w̄y.

If this procedure is applied to the remaining two groupings in Fig. 4-6, their corresponding product terms can be determined. The columnar grouping corresponds to the term yz, since the variables y and z both have the value logic-1 associated with every cell in this grouping. Furthermore, since no row variables have the same logic value for every cell of the grouping, neither the w nor x variables appear in the product term. In a similar manner, the two-cell grouping corresponds to the product term wxȳ. Thus, the minimal sum for this Karnaugh map is given by the expression

f(w, x, y, z) = w̄y + yz + wxȳ
Although the three- and four-variable Karnaugh maps are normally drawn as the two-dimensional configurations shown in Figs. 4-3 and 4-4, from the point of view of the permissible rectangular groupings that can be formed, it is necessary to regard them as three-dimensional configurations. For the three-variable map of Fig. 4-3, it is necessary to regard the left and right edges of the map as being connected, thus forming a cylinder. It is on the surface of this cylinder that the rectangular groupings are formed. Hence, rectangular groupings may appear split when drawn. Figure 4-7 shows a split rectangular grouping. The corresponding product term is obtained as explained previously and is x̄z̄ for the case shown in Fig. 4-7.

FIG. 4-7 Split grouping on a three-variable Karnaugh map

Split rectangular groupings can also appear on four-variable maps. In general, the left and right edges of a four-variable map are connected, as are the top and bottom edges. Thus, the four-variable map of Fig. 4-4 should be regarded as appearing on the surface of a toroid. Fig. 4-8 shows some examples of split rectangular groupings on a four-variable map. In Fig. 4-8a the grouping of four cells corresponds to the term xz̄ and the grouping of two cells corresponds to x̄yz̄. Special attention should be paid to the grouping illustrated in Fig. 4-8b. The four corners form a 2^1 × 2^1 rectangular grouping if the map is visualized as being a toroid. The corresponding product term is x̄z̄.
FIG. 4-8 Examples of split groupings on a four-variable Karnaugh map

In summary, the basic approach to determining the optimal groupings on a Karnaugh map leading to a minimal sum is as follows. First a 1-cell is selected that can be placed in only one grouping that is not a subgrouping of some larger grouping. The largest grouping containing this 1-cell is then formed. Next, another 1-cell with the above property, not already grouped, is selected and its grouping formed. This process is repeated until all the 1-cells are in some grouping or there remain ungrouped 1-cells that can be grouped in more than one way. At this point, a minimum number of additional groupings are formed to account for the remaining 1-cells. The following examples illustrate this procedure for obtaining minimal sums from Karnaugh maps.

Example 4.5  Consider the Karnaugh map shown in Fig. 4-9. The 1-cell in the upper right-hand corner can be grouped with the 1-cells in the other three corners. Furthermore, this 1-cell can appear in no other groupings that are not subgroupings of these four cells. Thus, the term x̄z̄ must appear in the minimal sum. Next, it is noted that the 1-cell in the first row, second column, still is not in a grouping. It can be placed in a grouping of four cells to yield the term x̄ȳ. Finally, the remaining ungrouped 1-cell can be grouped with the cell just below it to produce the term wȳz. The minimal sum is

f(w, x, y, z) = x̄z̄ + x̄ȳ + wȳz

which consists of seven literals.
Example 4.6  Consider the Karnaugh map shown in Fig. 4-10. The 1-cell in the upper left-hand corner can be grouped only with the 1-cell next to it. Similarly, the 1-cell in the lower right-hand corner can be grouped only with the 1-cell above it. The 1-cell in the second row, third column, can be grouped only by itself. At this point there still remain three 1-cells that have not been placed in some grouping. It should be noticed that these 1-cells, unlike the other cases, can be placed into more than one grouping. To complete the process, a minimum number of groupings must be selected to account for these remaining 1-cells. The groupings shown on the map correspond to the minimal sum

f(w, x, y, z) = w̄x̄ȳ + wyz̄ + w̄xyz + wx̄ȳ + w̄yz

which consists of 16 literals. There are two other equally good minimal sums that could have been formed:

f(w, x, y, z) = w̄x̄ȳ + wyz̄ + w̄xyz + w̄yz + wx̄z̄

and

f(w, x, y, z) = w̄x̄ȳ + wyz̄ + w̄xyz + wx̄ȳ + x̄yz

It can be seen from this example that more than one minimal sum can exist for a given function.

FIG. 4-9 Example 4.5

FIG. 4-10 Example 4.6
Minimal Products

Thus far it has been shown how a minimal sum can be obtained from a Karnaugh map. Karnaugh maps can also be used to construct minimal expressions, as measured by a literal count, consisting of a product of sum terms. These expressions are called minimal products.

To obtain a minimal product, attention is given to those cells in the Karnaugh map that contain a logic-0. These are called 0-cells. In this case a minimal sum is written for the complement of a given function by including every 0-cell, and only 0-cells, in at least one grouping while satisfying the requirements of using the largest and the fewest groupings possible. Again, the three-dimensional nature of the maps must be kept in mind. Then, DeMorgan's law is applied to the complement of the expression. This results in an expression for the Karnaugh map (and, hence, the truth table). Furthermore, it consists of a product of sum terms and a minimum number of literals.
Example 4.7  Consider the function in Example 4.5, whose Karnaugh map is given in Fig. 4-9. The map is redrawn in Fig. 4-11, where the 0-cells are grouped to form a minimal sum for the complement of the function:

f̄(w, x, y, z) = yz̄ + w̄x + xȳ

or

f(w, x, y, z) = (yz̄ + w̄x + xȳ)′

By applying DeMorgan's law, we obtain the minimal product

f(w, x, y, z) = (ȳ + z)(w + x̄)(x̄ + y)

which consists of six literals. In this case the minimal product of the function has fewer literals than its minimal sum.

FIG. 4-11 Example 4.7

FIG. 4-12 Example 4.8
Example 4.8  Consider the function in Example 4.6, whose Karnaugh map is given in Fig. 4-10 and is redrawn in Fig. 4-12. By grouping the 0-cells, there are three minimal products that can be formed. The minimal product corresponding to the groupings shown in Fig. 4-12 is

f(w, x, y, z) = (w + x + ȳ)(w + ȳ + z̄)(w̄ + x̄ + y + z)(w̄ + ȳ + z)(x + ȳ + z)

The two other minimal products are

f(w, x, y, z) = (w + x + ȳ)(w + ȳ + z̄)(w̄ + x̄ + y + z)(w̄ + y + z̄)(w̄ + x + z)

and

f(w, x, y, z) = (w + x + ȳ)(w + ȳ + z̄)(w̄ + x̄ + y + z)(w̄ + x + y)(x + ȳ + z̄)

In each of these expressions, 16 literals appear. Hence, the same number of literals appear in the minimal product descriptions of this function as in its minimal sum descriptions.
Don't-Care Conditions

Before we close this discussion on Karnaugh maps, one more situation must be considered. It should be recalled that Boolean expressions are used to describe the behavior and structure of logic networks. Each row of a truth table (or cell of a Karnaugh map) corresponds to the response (i.e., output) of the network as a result of a combination of logic values on its input terminals (i.e., the values of the input variables). Occasionally, a certain input combination is known never to occur, or if it does occur, the network response is not pertinent. In such cases, it is not necessary to specify the response of the network (i.e., the functional value in the truth table). These situations are known as don't-care conditions. When don't-care conditions exist, minimal sums and products can still be obtained with Karnaugh maps.

Don't-care conditions are indicated on the Karnaugh maps by dash entries. To obtain a minimal sum or product, the cells with dash entries, called don't-care cells, may be used optionally when grouping the 1-cells or the 0-cells. Any of the don't-care cells can be used in order to form the best possible groupings. Furthermore, it is not necessary that they be used at all or that they be used only for one particular type of grouping.

Figure 4-13 shows a Karnaugh map with don't-care conditions. The map of Fig. 4-13a can be used to obtain a minimal sum

f(w, x, y, z) = ȳz + wxy

while the map of Fig. 4-13b can be used to obtain a minimal product

f(w, x, y, z) = (y + z)(ȳ + z̄)(w + y)

It should be noted that the don't-care cell corresponding to the values w = 0, x = 0, y = 1, and z = 0 is used for both a minimal sum and a minimal product, while the don't-care cell corresponding to the values w = 0, x = 0, y = 0, and z = 1 is not used at all.

FIG. 4-13 Karnaugh maps involving don't-care conditions

Although the Karnaugh map method can be extended to more than four variables, the maps get increasingly difficult to analyze. To handle these larger problems, computer techniques have been developed.
4-7 LOGIC NETWORKS

Boolean algebra serves to describe the logical aspects of the behavior and structure of logic networks. Thus far we have considered only its behavioral descriptive properties. That is, the algebraic expression or the truth table provides a mechanism for describing the output logic value of a network in terms of the logic values on its input lines. However, Boolean algebra expressions can also provide an indication of the structure of a logic network.

The Boolean algebra, as described in the preceding sections, includes the three logic operators: AND, OR, and NOT. If there are circuits whose terminal logic properties in some sense correspond to these three operators, then the interconnection of such circuits, as indicated by a Boolean expression, will provide a logic network. Furthermore, the terminal logic behavior of this network will be described by the expression. In the next chapter it will be seen that such circuits exist and are called gates. Of course, electrical signals really appear at the terminals of the gates. However, if these signals are classified as two-valued, then logic-0 can be associated with one of the signal values and logic-1 with the other. In this way, the actual signal values can be disregarded at the terminals of the gate circuits, and the logic values themselves can be assumed to appear.

The gate symbols for the three Boolean operations introduced thus far are shown in Fig. 4-14. Inasmuch as these symbols denote the Boolean operators, the terminal characteristics for these gates are described by the definitions previously stated in Tables 4-1 to 4-3. That is, the output from the AND gate will be logic-1 if and only if all its inputs are logic-1; the output from the OR gate will be logic-1 if and only if at least one of its inputs is logic-1; and the output from the NOT gate will be logic-1 if and only if its input is logic-0. NOT gates are also commonly called inverters.

FIG. 4-14 Gate symbols (a) AND gate (b) OR gate (c) NOT gate (or inverter)

A drawing that depicts the interconnection of the logic elements is called a logic diagram. In general, when a logic diagram consists only of gate elements with no feedback lines around them, the diagram is said to be of a combinational network. A combinational network is one that has no memory property and, thus, one in which the inputs to the network alone determine the outputs from the network.

There is a correspondence between the logic diagram of a combinational network and a Boolean expression. Hence, Boolean expressions serve as descriptions of combinational networks. As an example, consider the logic diagram shown in Fig. 4-15. The two NOT gates are used to generate ȳ and z̄. The output from the upper-left-hand AND gate is described by xȳz, and the output from the lower-left-hand AND gate is given by yz̄. These two outputs serve as inputs to the OR gate. Thus, the output from the OR gate is described by xȳz + yz̄. Finally, the output from the OR gate enters the remaining AND gate along with a w input. Hence, the logic diagram of Fig. 4-15 is described by the equation

f(w, x, y, z) = w(xȳz + yz̄)

FIG. 4-15 Logic diagram whose terminal behavior is described by the Boolean expression f(w, x, y, z) = w(xȳz + yz̄)
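A gate-by-gate evaluation of this diagram can be sketched as follows. Note that the placement of the complements follows the reconstruction given here and may differ in detail from the original figure:

```python
from itertools import product

def diagram(w, x, y, z):
    """Evaluate the network of Fig. 4-15 gate by gate."""
    not_y = 1 - y           # NOT gate
    not_z = 1 - z           # NOT gate
    and1 = x & not_y & z    # upper-left AND gate: xy'z
    and2 = y & not_z        # lower-left AND gate: yz'
    or1 = and1 | and2       # OR gate
    return w & or1          # final AND gate

# the diagram agrees with the expression w(xy'z + yz') on all 16 inputs
for w, x, y, z in product((0, 1), repeat=4):
    assert diagram(w, x, y, z) == w & ((x & (1 - y) & z) | (y & (1 - z)))
print(diagram(1, 1, 0, 1))  # 1
```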
Clearly, it is just as easy to reverse the above process. That is, from a given Boolean expression, it is a simple matter to construct a corresponding logic diagram.

In order that the gate symbols can all be kept the same size in a logic diagram, and in order to prevent the crowding of several inputs to a single gate, the generalized gate symbols shown in Fig. 4-16 are frequently used when a gate has a large number of input lines.

FIG. 4-16 Gate symbols to accommodate a large number of inputs (a) AND gate (b) OR gate

4-8 ADDITIONAL LOGIC GATES
Three logic gates were introduced in the previous section. However, several additional ones frequently appear in logic diagrams. Fig. 4-17 summarizes the commonly encountered gate symbols. First, it should be noted that several additional logic functions are symbolized. Second, two gate symbols are shown for each function. These symbols utilize the inversion bubble notation.

The Inversion Bubble Notation

As indicated in Fig. 4-17, a simple triangle denotes a buffer amplifier. These circuits are needed to provide isolation, amplification, and signal restoration.

FIG. 4-17 Summary of gate symbols and their Boolean descriptions: AND (f = xy), OR (f = x + y), NOT or inverter (f = x̄), NAND (f = (xy)′), NOR (f = (x + y)′), EXCLUSIVE OR, NOT EXCLUSIVE OR (EQUIVALENCE), and IDENTITY (buffer amplifier)
  −0100 = 1.1011        −0011 = 1.1100
  −0111 = 1.1000        −1011 = 1.0100
          1.0011                1.0000
        +      1  end-around carry  +  1
          1.0100 = −1011        1.0001 = −1110

The output of the adder will be in 1s complement form in each case, with a 1 in the sign-bit position.

From the above we see that in order to implement an adder which will handle 4-bit signed 1s complement numbers, we can simply add another full adder to the configuration in Fig. 6-5. The sign inputs will be labeled X0 and Y0, and the carry output from the adder connected to X1 and Y1 will be connected to the Ci input of the new full adder for X0 and Y0. The Co output from the adder for X0 and Y0 will be connected to the Ci input for the adder for X4 and Y4, forming the end-around carry. The S0 output from the new adder will give the sign digit for the sum. (Overflow will not be detected in this adder; additional gates are required.)
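The end-around-carry rule can be sketched with ordinary integers standing in for the 5-bit (sign plus 4-bit magnitude) register contents, mirroring the −0100 plus −0111 example above:

```python
BITS = 5  # one sign bit plus four magnitude bits

def ones_comp(n):
    """Encode a signed integer as a 5-bit 1s complement value."""
    return n if n >= 0 else (n + (1 << BITS) - 1) & ((1 << BITS) - 1)

def add_ones_comp(a, b):
    s = a + b
    if s >> BITS:                            # carry out of the sign bit:
        s = (s & ((1 << BITS) - 1)) + 1      # bring it around and add it in
    return s & ((1 << BITS) - 1)

a, b = ones_comp(-0b0100), ones_comp(-0b0111)
print(format(a, "05b"), format(b, "05b"))   # 11011 11000
print(format(add_ones_comp(a, b), "05b"))   # 10100, i.e., 1.0100 = -1011
```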
THE 2S COMPLEMENT
SYSTEM When
negative numbers are represented in the 2s complement system, the oper
ation of addition
is
very similar to that in the Is complement system. In parallel
machines, the 2s complement of a number stored
in
a register
may be formed
—
—
THE ARITHMETICLOGIC UNIT
by
first
complementing the
register
and then adding
1
bit of the register. This process requires two steps and
611
to the least significant
more timeconsuming than the Is complement system. However, the 2s complement system has the advantage of not requiring an endaround carry during addition. The four situations which may occur in adding two numbers when the 2s complement system is used are as follows:
1. When both numbers are positive, the situation is completely identical with that in the 1s complement system which has been discussed.

2. When one number is positive and the other negative, and the larger number is the positive number, a carry will be generated through the sign bit. This carry may be discarded, since the outputs of the adder are correct, as shown below:

     +0111 = 0.0111              +1000 = 0.1000
     -0011 = 1.1101              -0111 = 1.1001
             ------                      ------
     +0100  10.0100              +0001  10.0001
            (carry is discarded)        (carry is discarded)

3. When a positive and a negative number are added and the negative number is the larger, no carry will result in the sign bit, and the answer will again be correct as it stands:

     +0011 = 0.0011              +0100 = 0.0100
     -0100 = 1.1100              -1000 = 1.1000
             ------                      ------
     -0001   1.1111              -0100   1.1100

   Note: A 1 must be added to the least significant bit of a 2s complement negative number when converting it to a magnitude. For example, to convert 1.0011, form the 1s complement of the magnitude bits, 1100, and add 1, giving the magnitude 1101.

   When a positive and a negative number of the same magnitude are added, the result will be a positive zero:

     +0011 = 0.0011
     -0011 = 1.1101
             ------
      0000  10.0000
            (carry is discarded)

4. When two negative numbers are added together, a carry will be generated in the sign bit and also in the bit to the right of the sign bit. This will cause a 1 to be placed in the sign bit, which is correct, and the carry from the sign bit may be discarded:

     -0011 = 1.1101              -0011 = 1.1101
     -0100 = 1.1100              -1011 = 1.0101
             ------                      ------
     -0111  11.1001              -1110  11.0010
            (carry is discarded)        (carry is discarded)
For parallel machines, addition of positive and negative numbers is quite simple, since any overflow from the sign bit is simply discarded. Thus for the parallel adder in Fig. 6-5 we simply add another full adder, with X0 and Y0 as inputs and with the CARRY line Co from the full adder which adds X1 and Y1 connected to the carry input Ci of the full adder for X0 and Y0. A 0 is placed on the Ci input to the adder connected to X4 and Y4. This simplicity in adding and subtracting has made the 2s complement system the most popular for parallel machines. In fact, when signed-magnitude systems are used, the numbers generally are converted to 2s complement before addition of negative numbers or subtraction is performed. Then the numbers are changed back to signed magnitude.
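The carry-discard rule illustrated by the four cases above can be sketched in a few lines of Python. This is not from the handbook; it is a small model assuming a 5-bit word (a sign bit plus 4 magnitude bits) to match the examples:

```python
# 2s complement addition: add the words and simply discard any carry
# out of the sign bit.  BITS = 5 matches the book's sign + 4-bit examples.
BITS = 5

def to_2s(n):
    """Encode an integer in -16..+15 as a 5-bit 2s complement word."""
    return n & ((1 << BITS) - 1)

def add_2s(a, b):
    """Add two 5-bit words; the carry out of the sign bit is discarded."""
    return (a + b) & ((1 << BITS) - 1)

def from_2s(w):
    """Decode a 5-bit 2s complement word back to a signed integer."""
    return w - (1 << BITS) if w & (1 << (BITS - 1)) else w

# The four cases from the text:
print(from_2s(add_2s(to_2s(7), to_2s(-3))))    # case 2: carry discarded -> 4
print(from_2s(add_2s(to_2s(3), to_2s(-4))))    # case 3: no carry        -> -1
print(from_2s(add_2s(to_2s(3), to_2s(-3))))    # equal magnitudes        -> 0
print(from_2s(add_2s(to_2s(-3), to_2s(-11))))  # case 4: two negatives   -> -14
```

Note that no end-around carry appears anywhere; masking to the word width is the whole rule.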
6-10 ADDITION AND SUBTRACTION IN A PARALLEL ARITHMETIC ELEMENT

We now examine the design of a gating network which will either add or subtract two numbers. The network is to have an ADD input line and a SUBTRACT input line as well as the lines that carry the representation of the numbers to be added or subtracted. When the ADD line is a 1, the sum of the numbers is to be on the output lines, and when the SUBTRACT line is a 1, the difference is to be on the output lines. If both the ADD and SUBTRACT lines are 0s, the output is to be 0.

First we note that if the machine is capable of adding both positive and negative numbers, subtraction may be performed by complementing the subtrahend and then adding. For instance, 8 - 4 yields the same result as 8 + (-4), and 6 - (-2) yields the same result as 6 + 2. Subtraction therefore may be performed by an arithmetic element capable of adding, by forming the complement of the subtrahend and then adding. For instance, in the 1s complement system, four cases may arise:
TWO POSITIVE NUMBERS

     0.0011                               0.0011
    -0.0001   complementing the          +1.1110
              subtrahend and adding      -------
                                        10.0001
                                             +1   (end-around carry)
                                         0.0010

TWO NEGATIVE NUMBERS

     1.1101                               1.1101
    -1.1011   complementing              +0.0100
                                         -------
                                        10.0001
                                             +1
                                         0.0010

POSITIVE MINUEND, NEGATIVE SUBTRAHEND

     0.0010                               0.0010
    -1.1101   complementing              +0.0010
                                         -------
                                         0.0100

NEGATIVE MINUEND, POSITIVE SUBTRAHEND

     1.0101                               1.0101
    -0.0010   complementing              +1.1101
                                         -------
                                        11.0010
                                             +1
                                         1.0011
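The complement-and-add rule in all four cases can be checked with a short sketch (an illustration, not the handbook's circuit), again assuming 5-bit 1s complement words:

```python
# 1s complement subtraction: complement the subtrahend, add, and if a
# carry comes out of the sign bit, "end it around" by adding 1.
MASK = 0b11111   # 5-bit words: sign bit plus 4 magnitude bits

def sub_1s(minuend, subtrahend):
    """minuend - subtrahend, both given as 5-bit 1s complement words."""
    total = minuend + (subtrahend ^ MASK)   # add the 1s complement
    if total > MASK:                        # carry out of the sign bit
        total = (total & MASK) + 1          # end-around carry
    return total & MASK

# The four cases from the text:
print(bin(sub_1s(0b00011, 0b00001)))  # 0.0011 - 0.0001
print(bin(sub_1s(0b11101, 0b11011)))  # 1.1101 - 1.1011
print(bin(sub_1s(0b00010, 0b11101)))  # 0.0010 - 1.1101
print(bin(sub_1s(0b10101, 0b00010)))  # 1.0101 - 0.0010
```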
The same basic rules apply to subtraction in the 2s complement system, except that any carry generated in the sign-bit adder is simply dropped. In this case the 2s complement of the subtrahend is formed, and the complemented number is then added to the minuend with no end-around carry.

We now examine the implementation of a combined adder and subtracter network. The primary problem is to form the complement of the number to be subtracted. This complementation of the subtrahend may be performed in several ways. For the 1s complement system, if the storage register is composed of flip-flops, the 1s complement can be formed by simply connecting the complement of each input to the adder. The 1 which must be added to the least significant position to form a 2s complement may be added when the two numbers are added by connecting a 1 at the CARRY input of the adder for the least significant bits.
A complete logical circuit capable of adding or subtracting two signed 2s complement numbers is shown in Fig. 6-6. One number is represented by X0, X1, X2, X3, and X4, and the other number by Y0, Y1, Y2, Y3, and Y4. There are two control signals, ADD and SUBTRACT. If neither control signal is a 1 (that is, both are 0s), then the outputs from the five full adders, which are S0, S1, S2, S3, and S4, will all be 0s. If the ADD control line is made a 1, the sum of the number X and the number Y will appear as S0, S1, S2, S3, and S4. If the SUBTRACT line is made a 1, the difference between X and Y (that is, X - Y) will appear on S0, S1, S2, S3, and S4.
Notice that the AND-to-OR gate network connected to each full adder's Y input selects either Yi or its complement, so that, for instance, an ADD causes Yi to enter the appropriate full adder, while a SUBTRACT causes the complement of Yi to enter the full adder. To either add or subtract, each Xi input is connected to the appropriate full adder. When a subtraction is called for, the complement of each Yi is gated into the full adder, and a 1 is added by connecting the SUBTRACT signal to the Ci input of the full adder for the lowest order bits X4 and Y4. Since the SUBTRACT line will be a 0 when we add, a 0 carry will be on this line when addition is performed.
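The selection network just described can be modeled gate by gate. The sketch below is an assumption-laden illustration, not Fig. 6-6 itself; in particular, the AND gating of each X input by (ADD OR SUBTRACT) is assumed here only to reproduce the stated all-0s output when neither control line is up:

```python
def full_adder(x, y, c):
    """One full adder stage: sum bit and carry-out."""
    s = x ^ y ^ c
    c_out = (x & y) | (x & c) | (y & c)
    return s, c_out

def add_sub(X, Y, ADD, SUB):
    """X, Y: 5-bit lists [x0..x4], x0 the sign, x4 the least significant."""
    S = [0] * 5
    carry = SUB                 # SUBTRACT supplies the +1 of the 2s complement
    for i in range(4, -1, -1):  # the low-order stage (X4, Y4) operates first
        y_in = (ADD & Y[i]) | (SUB & (1 - Y[i]))  # AND-to-OR select: Yi or Yi'
        x_in = X[i] & (ADD | SUB)   # assumed enable gating (see lead-in)
        S[i], carry = full_adder(x_in, y_in, carry)
    return S                    # the carry out of the sign stage is discarded

print(add_sub([0, 0, 1, 1, 1], [0, 0, 0, 1, 1], ADD=1, SUB=0))  # 7 + 3
print(add_sub([0, 0, 1, 1, 1], [0, 0, 0, 1, 1], ADD=0, SUB=1))  # 7 - 3
```

Routing SUBTRACT straight into the low-order carry input is the trick that makes one adder chain serve both operations.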
[FIG. 6-6 Parallel addition and subtraction. The numbers are to be in 2s complement form. To add, the ADD line is made a 1; to subtract, the SUBTRACT line is made a 1.]
The simplicity of the operation of Fig. 6-6 makes 2s complement addition and subtraction very attractive for computer use, and it is the most frequently used system.
The configuration in Fig. 6-6 is the most frequently used for addition and subtraction because it provides a simple, direct means for either adding or subtracting positive or negative numbers. Quite often the S0, S1, ..., S4 lines are gated back into the X flip-flops, so that the sum or difference of the numbers replaces the original value of X.

An important consideration is overflow. In digital computers an overflow is said to occur when the performance of an operation results in a quantity beyond the capacity of the register (or storage register) which is to receive the result. Since the registers in Fig. 6-6 have a sign bit plus 4 magnitude bits, they can store from +15 to -16 in 2s complement form. Therefore, if the result of an addition or subtraction were greater than +15 or less than -16, we would say that an overflow had occurred. Suppose we add +8 to +12; the result should be +20, and this cannot be represented (fairly) in 2s complement on the lines S0, S1, S2, S3, and S4. The same thing happens if we add -13 and -7, or if we subtract -8 from +12. In each case logical circuitry is used to detect the overflow condition and signal the computer control element. Various options are then available, and what is done can depend on the type of instruction being executed. (Deliberate overflows are sometimes used in double-precision routines. Multiplication and division use the results as they are.)
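One common detection rule, given here as an assumption rather than as the handbook's own gating, is that overflow occurs exactly when two like-signed operands produce a sum with the opposite sign:

```python
def overflow(sign_a, sign_b, sign_sum):
    """True when two like-signed operands yield an oppositely signed sum."""
    return sign_a == sign_b and sign_sum != sign_a

# +8 plus +12 should be +20, which does not fit in a sign bit plus 4 bits:
result = (8 + 12) & 0b11111          # what the 5-bit adder actually outputs
print(overflow(0, 0, result >> 4))   # -> True (overflow occurred)
print(overflow(0, 1, 0))             # unlike signs can never overflow -> False
```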
6-11 FULL ADDER DESIGNS

The full adder is a basic component of an arithmetic element. Figure 6-3 illustrated the block diagram symbol for the full adder, along with a table of combinations for the input-output values and the expressions describing the sum and carry lines. Succeeding figures and text described the operation of the full adder. Notice that a parallel addition system requires one full adder for each bit in the basic word.

There are of course many gate configurations for full binary adders. Examples of an IBM adder and an MSI package containing two full adders follow.
1. Full binary adder. Figure 6-7 illustrates the IBM full binary adder configuration used in several general-purpose digital computers. There are three inputs to the circuit: the X input is from one of the storage devices in the accumulator, the Y input is from the corresponding storage device in the register to be added to the accumulator register, and the third input is the CARRY input from the adder for the next least significant bit. The two outputs are the SUM output and the CARRY output. The SUM output will contain the sum value for this particular digit of the output. The CARRY output will be connected to the CARRY input of the next most significant bit's adder (refer to Fig. 6-5).

The outputs from the three AND gates connected directly to the X, Y, and C inputs are logically added together by the OR gate circuit directly beneath. If either the X and Y, the X and C, or the Y and C input lines contain 1s, there should be a CARRY output. The output of this circuit, written in logical equation form, is shown on the figure. This may be compared with the expression derived in Fig. 6-3.

[FIG. 6-7 Full adder used in IBM machines. Carry = XY + XC + YC; Sum = [(XY + XC + YC)' + XYC](X + Y + C)]
The derivation of the SUM output expression is not so straightforward. The CARRY output XY + XC + YC is first inverted (complemented), yielding (XY + XC + YC)'. The logical product of X, Y, and C is formed by an AND gate and logically added to this, forming (XY + XC + YC)' + XYC. The logical sum X + Y + C is then multiplied times this, forming the expression [(XY + XC + YC)' + XYC](X + Y + C). When multiplied out and simplified, this expression will be X'Y'C + X'YC' + XY'C' + XYC, the expression derived in Fig. 6-3.

Tracing through the logical operation of the circuit for various values will indicate that the SUM output will be a 1 when only one of the input values is equal to 1, or when all three input values are equal to 1. For all other combinations of inputs the SUM output value will be a 0.
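The tracing exercise above can be done exhaustively by machine. This short check (an illustration, not part of the handbook) confirms that the IBM sum expression agrees with the canonical full-adder sum on all eight input combinations:

```python
# Verify [(XY + XC + YC)' + XYC](X + Y + C) == X xor Y xor C for all inputs.
from itertools import product

for X, Y, C in product((0, 1), repeat=3):
    carry = (X & Y) | (X & C) | (Y & C)
    ibm_sum = ((1 - carry) | (X & Y & C)) & (X | Y | C)
    assert ibm_sum == X ^ Y ^ C          # canonical full-adder sum
    assert ibm_sum == (X + Y + C) % 2    # i.e., 1 for an odd number of 1s
print("sum expression verified for all 8 input combinations")
```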
2. Two full adders in an integrated circuit (IC) container. Figure 6-8 shows two full adders. This package was developed for integrated circuits using transistor-transistor logic (TTL). The entire circuitry is packaged in one IC container. The maximum delay from an input change to an output change for an output is on the order of 8 nanoseconds (ns). The maximum delay from input to the C2 output is about 6 ns.

[FIG. 6-8 Two full adders in an IC container (courtesy of Texas Instruments)]
The amount of delay associated with each carry is an important figure in evaluating a full adder for a parallel system, because the amount of time required to add two numbers is determined by the maximum time it takes for a carry to propagate through the adders. For instance, if we add 01111 to 10001 in the 2s complement system, the carry generated by the 1s in the least significant digit of each number must propagate through four carry stages and a sum stage before we can safely gate the sum into the accumulator. A study of the addition of these two numbers using the configuration in Fig. 6-5 will make this clear. The problem is called the carry-ripple problem.

There are a number of techniques which are used in high-speed machines to alleviate this problem. The most used is a bridging or carry-look-ahead circuit which calculates the carry-out of a number of stages simultaneously and then delivers this carry to the succeeding stages.
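The look-ahead idea can be sketched with generate and propagate signals. This is a generic carry-look-ahead sketch, not the handbook's bridging circuit; the loop below computes the carries sequentially for clarity, whereas hardware expands the products so all carries settle in a fixed, small number of gate delays:

```python
# Carry-look-ahead: g[i] = x&y (stage generates a carry), p[i] = x|y
# (stage propagates one), and c[i+1] = g[i] | (p[i] & c[i]).
def lookahead_carries(x_bits, y_bits, c_in=0):
    """Bit 0 is the least significant; returns the carries out of each stage."""
    g = [x & y for x, y in zip(x_bits, y_bits)]   # generate signals
    p = [x | y for x, y in zip(x_bits, y_bits)]   # propagate signals
    carries, c = [], c_in
    for gi, pi in zip(g, p):
        c = gi | (pi & c)          # expanded in parallel in real hardware
        carries.append(c)
    return carries

# 01111 + 10001: the worst case from the text, where the low-order carry
# would otherwise ripple through every stage.
print(lookahead_carries([1, 1, 1, 1, 0], [1, 0, 0, 0, 1]))  # -> [1, 1, 1, 1, 1]
```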
6-12 THE BINARY-CODED-DECIMAL (BCD) ADDER

Arithmetic units which perform operations on numbers stored in BCD form must have the ability to add 4-bit representations of decimal digits. To do this a BCD adder is used. A block diagram symbol for a BCD adder is shown in Fig. 6-9. The adder has an augend digit input consisting of four lines, an addend digit input of four lines, a carry-in and a carry-out, and a sum digit with four output lines. The augend digit, addend digit, and sum digit are each represented in 8, 4, 2, 1 code.
[FIG. 6-9 Decimal adder block symbol, with augend digit inputs X1, X2, X4, X8, addend digit inputs Y1, Y2, Y4, Y8, sum digit outputs Z1, Z2, Z4, Z8, a carry-in, and a carry-out]
of the BCD adder in Fig. 69 is to add the augend and addend and the carryin and produce a sum digit and carryout. It is possible to make a BCD adder using full adders and AND or OR gates. An adder made in this way is shown in Fig. 610.
The purpose
digits
4
^1^
4
r
Carry to next higher order
Sum
adder
FIG. 610
BCD
adder
digits
Carry from lower order
adder
[FIG. 6-11 Complete BCD adder in an IC package, with inputs A1 to A4 and B1 to B4]

There are eight inputs to the BCD adder, four Xi, or augend, inputs and four Yi, or addend, inputs. Each of these inputs will represent a 0 or a 1 during a given addition. If 3(0011) is to be added to 2(0010), then X8 = 0, X4 = 0, X2 = 1, and X1 = 1; and Y8 = 0, Y4 = 0, Y2 = 1, and Y1 = 0. The basic adder in Fig. 6-10 consists of the four binary adders at the top of the figure and performs base 16 addition when the intent is to perform base 10 addition. Some provision must therefore be made to (1) generate carries and (2) correct sums greater than 9. For instance, if 3(0011) is added to 8(1000), the result should be 1(0001) with a carry generated.
The actual circuitry which determines when a carry is to be transmitted to the next most significant digits to be added consists of the full binary adder to which the sum (S) outputs from the adders for the 8, 4, 2 inputs are connected and of the OR gate to which the carry (C) from the eight-position bits is connected. An examination of the addition process indicates that a carry should be generated when the 8 AND 4, or 8 AND 2, or 8 AND 4 AND 2 sum outputs from the base 16 adder represent 1s, or when the CARRY output from the eight-position adder contains a 1. (This occurs when 8s or 9s are added together.) Whenever the sum of two digits exceeds 9, the CARRY TO NEXT HIGHER ORDER ADDER line contains a 1 for the adder in Fig. 6-10.

A further difficulty arises when a carry is generated. If 7(0111) is added to 6(0110), a carry will be generated, but the output from the base 16 adder will be 1101. This 1101 does not represent any decimal digit in the 8, 4, 2, 1 system and must be corrected. The method used to correct this is to add 6(0110) to the sum from the base 16 adders whenever a carry is generated. This addition is performed by adding 1s to the weight 4 and weight 2 position output lines from the base 16 adder when a carry is generated. The two half adders and the full adder at the bottom of Fig. 6-10 perform this function. Essentially, then, the adder performs base 16 addition and corrects the sum, if it is greater than 9, by adding 6.
Several examples of this are shown below.

              (8)(4)(2)(1)
  8 + 7 = 15      1000
                 +0111
                 -----
                  1111
                 +0110
                 -----
                1 0101 = 5, with a carry generated

              (8)(4)(2)(1)
  9 + 5 = 14      1001
                 +0101
                 -----
                  1110
                 +0110
                 -----
                1 0100 = 4, with a carry generated
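The add-then-correct behavior of Fig. 6-10 can be stated as a few lines of arithmetic. This sketch (not the gate-level circuit) assumes single decimal digits as inputs:

```python
# BCD digit addition: add in base 16, then add 6 and raise the carry
# whenever the raw sum exceeds 9.
def bcd_digit_add(x, y, carry_in=0):
    """x, y are decimal digits 0-9; returns (sum_digit, carry_out)."""
    raw = x + y + carry_in            # base 16 result of the four full adders
    if raw > 9:
        return (raw + 6) & 0b1111, 1  # correct by +6, generate a carry
    return raw, 0

print(bcd_digit_add(8, 7))   # -> (5, 1), matching the first example above
print(bcd_digit_add(9, 5))   # -> (4, 1), matching the second example
print(bcd_digit_add(3, 2))   # -> (5, 0), no correction needed
```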
Figure 6-11 shows a complete BCD adder in an IC package. The inputs are digits A and B, and the outputs are digits S. A carry-in and a carry-out are included. The circuit line used is CMOS.
6-13 POSITIVE AND NEGATIVE BCD NUMBERS

The techniques for handling BCD numbers greatly resemble those for handling binary numbers. A sign bit is used to indicate whether the number is positive or negative, and there are three methods of representing negative numbers which must be considered. The first and most obvious method is, of course, to represent a negative number in true magnitude form with a sign bit, so that -645 is represented as 1.645. The other two possibilities are to represent negative numbers in a 9s or a 10s complement form, which resembles the binary 1s and 2s complement forms.
6-14 ADDITION AND SUBTRACTION IN THE 9S COMPLEMENT SYSTEM

When decimal numbers are represented in a binary code in which the 9s complement is formed when the number is complemented, the situation is roughly the same as when the 1s complement is used to represent a binary number. Four cases may arise: two positive numbers may be added; a positive and a negative number may be added, yielding a positive result; a positive and a negative number may be added, yielding a negative result; and two negative numbers may be added. Since there is no problem when two positive numbers are added, the three latter situations will be illustrated.
Negative and positive number, positive sum:

     +692 = 0.692
     -342 = 1.657
            -----
           10.349
              +1   (end-around carry)
     +350   0.350

Positive and negative number, negative sum:

     -631 = 1.368
     +342 = 0.342
            -----
     -289 = 1.710

Two negative numbers:

     -248 = 1.751
     -329 = 1.670
            -----
           11.421
              +1
     -577 = 1.422
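The decimal end-around-carry rule can be sketched just as the binary one was. This illustration (not from the handbook) works on 3-digit magnitudes and omits the 0./1. sign digit, treating the carry out of the top decimal digit as the end-around carry:

```python
# 9s complement decimal arithmetic: a negative number is carried as the
# 9s complement of its digits, and a carry out of the top position is
# ended around, just as in the binary 1s complement system.
def nines(n, digits=3):
    """9s complement encoding of -n (n a positive magnitude)."""
    return (10 ** digits - 1) - n

def add_9s(a, b, digits=3):
    """Add two 3-digit 9s complement encodings with end-around carry."""
    total = a + b
    if total >= 10 ** digits:                # carry out of the top digit
        total = total % 10 ** digits + 1     # end-around carry
    return total

print(add_9s(692, nines(342)))        # 692 - 342 -> 350
print(add_9s(nines(631), 342))        # -631 + 342 -> 710, i.e. -(289)
print(add_9s(nines(248), nines(329))) # -248 - 329 -> 422, i.e. -(577)
```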
The rules for handling negative numbers in the 10s complement system are the same as those for the binary 2s complement system in that no carry must be ended around. A parallel BCD adder may therefore be constructed using only the full BCD adder as the basic component, and all combinations of positive and negative numbers may thus be handled. There is an additional complexity in BCD addition, however, because the 9s complement of a BCD digit cannot be formed by simply complementing each bit in the representation. As a result, a gating block called a complementer must be used.
may be used to form complements of numbers, a block diagram of a logical circuit which form the 9s complement of a code group representing a decimal number in
To
illustrate the
the code groups for will
type of circuit which
BCD
Series parallel inputs
NOTE:
XY + XY
'.^ This gate
9s
Binary
coded

P
Af +
1
P
left
No
/ICC
Shift
left
P ^P 1
FIG. 619
Flowchart of division algorithm
Shift /ICC,
B
left
remainder
6-20 LOGICAL OPERATIONS

In addition to the arithmetic operations, many logical operations are performed by ALUs. Three logical operations will be described here: logical multiplication, logical addition, and sum modulo 2 addition (the exclusive OR operation). Each of these will be operations between registers, where the operation specified will be performed on each of the corresponding digits in the two registers. The result will be stored in one of the registers.

The first operation, logical multiplication, is often referred to as an extract, masking, or AND operation. The rules for logical multiplication have been defined as 0 · 0 = 0; 0 · 1 = 0; 1 · 0 = 0; and 1 · 1 = 1. Suppose that the contents of the accumulator register are "logically multiplied" by another register. Let each register be five binary digits in length. If the accumulator contains 01101 and the other register 00111, the contents of the accumulator after the operation will be 00101.
The masking, or extracting, operation is useful in "packaging" computer words. To save space in memory and keep associated data together, several pieces of information may be stored in the same word. For instance, a word may contain an item number, wholesale price, and retail price, packaged as follows:

     s | 1-6          | 7-15            | 16-24
       | item number  | wholesale price | retail price

To extract the retail price, the programmer will simply logically multiply the word above by a word containing 0s in the sign digit through digit 15, and with 1s in positions 16 through 24. After the operation, only the retail price will remain in the word.
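The extraction just described can be shown concretely. The field values packed below are hypothetical, chosen only to illustrate the layout; the bit numbering (sign, then positions 1 through 24 from the left) follows the text:

```python
# Extract a packed field by logical multiplication (AND) with a mask.
WORD_BITS = 25                      # sign bit plus data positions 1-24

def mask_for(first, last):
    """1s in positions first..last (numbered from the left), 0s elsewhere."""
    m = 0
    for pos in range(first, last + 1):
        m |= 1 << (WORD_BITS - 1 - pos)
    return m

# Hypothetical packed word: item number, wholesale price, retail price.
packed = (0b101 << 19) | (0b1010 << 9) | 0b111000111
retail = packed & mask_for(16, 24)   # only the retail price bits remain
print(bin(retail))                   # -> 0b111000111
```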
The logical addition operation, or the OR operation, is also provided in most computers. The rules for these operations are:

     LOGICAL ADDITION        MODULO 2 ADDITION
     0 + 0 = 0               0 (+) 0 = 0
     0 + 1 = 1               0 (+) 1 = 1
     1 + 0 = 1               1 (+) 0 = 1
     1 + 1 = 1               1 (+) 1 = 0

Figure 6-20 shows how a single accumulator flip-flop and a B flip-flop can be gated together so that all three of these logical operations can be performed. The circuit in Fig. 6-20 would be repeated for each stage of the accumulator register.

There are three control signals, LOGICAL MULTIPLY, LOGICAL ADD, and MOD 2 ADD. If one of these is up, or 1, when a clock pulse arrives, the corresponding operation is performed and the result placed in the ACC (accumulator) flip-flop. If none of the control signals is a 1, nothing happens, and the ACC remains as it is.
The actual values desired are formed by three sets of AND gates; that is, ACC · B, ACC + B, and ACC (+) B are all formed first. Each of these is then gated with the appropriate control signal. Finally the three control signals are ORed together, and this signal is used to gate the appropriate value into the ACC flip-flop when one of the control signals is a 1.

[FIG. 6-20 Circuit for gating logical operations into an accumulator flip-flop]

Figure 6-20 shows how a choice of several different function values can be gated into a single flip-flop using control signals. We could include an ADD signal and a SHIFT RIGHT and a SHIFT LEFT signal by simply adding more gates.

Figure 6-21 shows an example of the logic circuitry used in modern computers to form sections of an ALU. All the gates shown in this block diagram are contained in a single IC chip (package) with 24 pins. The chip is widely used (in the DEC PDP-11 series and Data General NOVAs, for example). With TTL (Schottky) circuits the maximum delay from input to output is 11 ns. (There is an ECL version with a 7-ns maximum delay.) This chip is called a 4-bit arithmetic-logic unit and can add, subtract, AND, OR, etc., two 4-bit register sections. Two chips could be used for the logic in an 8-bit accumulator, four chips would form a 16-bit accumulator, etc.

The function performed by this chip is controlled by the mode input M and four function select inputs S0, S1, S2, and S3. When the mode input M is low (a 0), the 74S181 performs such arithmetic operations as ADD or SUBTRACT. When the mode input M is high (a 1), the ALU does logic operations on the A and B inputs "a bit at a time." (Notice in Fig. 6-21 that the carry-generating gates are disabled by M = 1.) For instance, if M is a 0, S1 and S2 are also 0s, and S0 and S3 are 1s, the 74S181 performs arithmetic addition. If M is a 1, S0 and S3 are 1s, and S1 and S2 are 0s, the 74S181 chip exclusive ORs (mod 2 adds) A and B. (It forms A0 (+) B0, A1 (+) B1, A2 (+) B2, and A3 (+) B3.) The table in Fig. 6-21 further describes the operation of this chip.
[FIG. 6-21 4-bit arithmetic-logic unit (74S181) function table for active-low inputs and outputs, giving the logic function selected by S0 through S3 when M = H and the arithmetic operation when M = L with Cn = 1. Note: (+) is the symbol for a mod 2 adder (exclusive OR gate); "plus" is the sign for arithmetic addition.]
6-21 FLOATING-POINT NUMBER SYSTEMS

The preceding sections describe number representation systems where positive and negative integers are stored in binary words. In the representation system used, the binary point is "fixed" in that it lies at the end of each word, and so each value represented is an integer. When computers calculate with binary numbers in this format, the operations are called fixed-point arithmetic.

In science it is often necessary to calculate with very large or very small numbers. Scientists have therefore adopted a convenient notation in which a mantissa plus an exponent are used to represent a number. For instance, 4,900,000 may be written as 0.49 × 10^7, where 0.49 is the mantissa and 7 is the value of the exponent, and 0.00023 may be written as 0.23 × 10^-3. The notation is based on the relation y = a × r^p, where y is the number to be represented, a is the mantissa, r is the base of the number system (r is 10 for decimal, and r is 2 for binary), and p is the power to which the base is raised.

It is possible to calculate with this representation system. To multiply a × 10^m by b × 10^n, we form (a × b) × 10^(m+n). To divide a × 10^m by b × 10^n, we form (a/b) × 10^(m-n). To add a × 10^m to b × 10^n, we must first make m equal to n. If m = n, then a × 10^m + b × 10^m = (a + b) × 10^m. The process of making m equal to n is called scaling the numbers.

Considerable "bookkeeping" can be involved in scaling the numbers, and there can be difficulty in maintaining precision during computations when the numbers vary over a very wide range of magnitudes. For computer usage these problems are alleviated by means of two techniques whereby the computer (not the programmer) keeps track of the radix (decimal) point, automatically scaling the numbers. In the first, programmed floating-point routines automatically scale the numbers used during the computations while maintaining the precision of the results and keeping track of the scale factors. These routines are used with small computers having only fixed-point operations.
A second technique is to build what are called floating-point operations into the computer's hardware. The logical circuitry of the computer is then used to perform the scaling automatically and to keep track of the exponents when calculations are performed. To effect this, a representation system called the floating-point system is used.

A floating-point number in a computer uses the exponential notation system described above, and during calculations the computer keeps track of the exponent as well as the mantissa. A computer number word in a floating-point system may be divided into three pieces: the first is the sign bit, indicating whether the number is negative or positive; the second part contains the exponent for the number to be represented; and the third part is the mantissa. As an example, let us consider a 12-bit-word-length computer with a floating-point word. Figure 6-22 shows this.
[FIG. 6-22 A 12-bit floating-point word: C, the characteristic, followed by I, the integer part, with the binary point at the right end of the word]

It is common practice to call the exponent part of the word the characteristic and the mantissa section the integer part; we shall adhere to this practice. The integer part of the floating-point word shown represents its value in signed-magnitude form (rather than 2s complement, although this has been used). The characteristic is also in signed-magnitude form. The value of the number expressed is I × 2^C, where I is the value of the integer part, and C is the value of the characteristic.
Figure 6-23 shows several values of floating-point numbers both in binary form and after being converted to decimal.

[FIG. 6-23 Values of floating-point numbers in 12-bit all-integer systems:
     00111 0001011   C = +7, I = +11, value is 2^7 × 11 = 1408
     00011 1000111   C = +3, I = -7,  value is 2^3 × (-7) = -56
     10101 0000101   C = -5, I = +5,  value is 2^-5 × 5 = 5/32
     10110 1001001   C = -6, I = -9,  value is 2^-6 × (-9) = -9/64]

Since the characteristic has 5 bits and is in signed-magnitude form, C can have values from -15 to +15. The integer part I is a sign-plus-magnitude binary integer of 7 bits, and so I can have values from -63 to +63. The largest number represented by this system would have maximum I and C and would be 63 × 2^15. The least number would be -63 × 2^15.

This example shows the use of a floating-point number representation system to store "real" numbers of considerable range in a binary word. One other widely followed practice is to express the mantissa of the word as a fraction instead of as an integer. This is in accord with common scientific usage, since we commonly say that 0.93 × 10^4 is in "normal" form for exponential notation (and not 93 × 10^2). In this usage a mantissa in decimal normally has a value from 0.1 to 0.999.... Similarly, a binary mantissa in normal form would have a value from 0.5 (decimal) to less than 1. Most computers maintain their mantissa sections in normal form, continually adjusting words so that a significant (1) bit is always in the leftmost mantissa position (next to the sign bit).
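Decoding the 12-bit word of Fig. 6-22 takes only a few lines. The example words below are the values worked in Fig. 6-23; the helper names are my own, not the handbook's:

```python
# Decode a 12-bit floating-point word: a 5-bit signed-magnitude
# characteristic C followed by a 7-bit signed-magnitude integer part I;
# the value is I * 2**C.
def signed_mag(bits):
    """First character is the sign bit, the rest the magnitude."""
    sign = -1 if bits[0] == "1" else 1
    return sign * int(bits[1:], 2)

def fp_value(word12):
    """word12: 12-character bit string, characteristic then integer part."""
    C = signed_mag(word12[:5])
    I = signed_mag(word12[5:])
    return I * 2 ** C

print(fp_value("001110001011"))   # C = +7, I = +11 -> 1408
print(fp_value("000111000111"))   # C = +3, I = -7  -> -56
print(fp_value("101010000101"))   # C = -5, I = +5  -> 0.15625 (5/32)
```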
When the mantissa is in fraction form, this section is called the fraction. For example, we can express floating-point numbers with our 12-bit characteristic and fraction by simply supposing the binary point to be to the left of the magnitude (and not to the right as in integer representation). In this system a number to be represented has value F × 2^C, where F is the binary fraction and C is the characteristic. For the 12-bit word considered before, fractions would have values from -(1 - 2^-6), which is 1.111111, to +(1 - 2^-6), which is 0.111111. Thus numbers from -(1 - 2^-6) × 2^15 to (1 - 2^-6) × 2^15 can be represented, or about +32,000 to -32,000. The smallest positive value the fraction part could now have is 2^-1, the fraction 0.100000, and the smallest characteristic is -15, so the smallest positive number representable is 2^-1 × 2^-15, or 2^-16. Most computers use this fractional system for the mantissa, although computers of Burroughs Corporation and the National Cash Register Company use the integer system previously described.
The Univac 1108 represents single-precision floating-point numbers in this format:

     1        2-9              10-36
     sign     characteristic   fraction part
     bit      8 bits           27 bits

For positive numbers, the characteristic C is treated as a binary integer, the sign bit is a 0, and the fraction part F is a binary fraction with value 0.5 <= F < 1.
[FIG. 7-1 Words in high-speed memory; each word contains the same number of bits]

Suppose we write the word 01001011 into memory address 17 and later read from this same address; the word read will be 01001011. If we again read from this memory address at a later time (and have not written another word in), the word 01001011 will again be read. This means the memory is nondestructive read in that reading does not destroy or change a stored word.

It is important to understand the difference between the contents of a memory address and the address itself. A memory is like a large cabinet containing as many drawers as there are addresses in memory. In each drawer is a word, and the address of each word is written on the outside of the drawer. If we write or store a word at address 17, it is like placing the word in the drawer labeled 17. Later, reading from address 17 is like looking in that drawer to see its contents. We do not remove the word at an address when we read, but change the contents at an address only when we store or write a new word.
From an
exterior viewpoint, a highspeed
memory address flipflops),
The data
where to
register. This register consists of n binary devices (generally
2"
is
the
number of words
be written into the
memory
that can be stored in the
are placed in the
memory
memory.
buffer reg
THE MEMORY ELEMENT /?)
bits per
75
word
Read write
random access
Read
memory 2" words
Write
Memory
buffer
register
m FIG. 72
ister,
memory
which has as many binary storage devices as there are
ory word.
The memory
The memory
line.
Readwrite randomaccess
bits
will
is
told to write
by means of a
then store the contents of the
1
bits in
each
signal on the
memory
mem
WRITE
buffer register in
memory address register. Words are read by placing the address of the location to be read from into signal is then placed on the READ line, and the memory address register. A the contents of that location are placed by the memory in the memory buffer the location specified by the
1
register.
computer communicates with the memory by means of memory buffer register, and the READ and WRITE inputs. Memories are generally packaged in separate modules or packages. It is possible to buy a memory module of a specified size from a number of different manufacturers, and, for instance, an 8K 16bit memory module can be purchased on a circuit board ready for use. Similarly, if a computer is purchased with a certain amount of main memory, more memory can generally later be added by purchasing additional modules and "plugging them in."
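The read and write sequences above can be sketched in software. The following is a hypothetical Python model (the class and attribute names are ours, not the chapter's) of a 4096-word 16-bit read-write memory driven through a memory address register (MAR) and memory buffer register (MBR):

```python
# Behavioral sketch of the read-write memory of Fig. 7-2.
# All names (Memory, mar, mbr, read, write) are illustrative assumptions.

class Memory:
    def __init__(self, n_address_bits=12, word_bits=16):
        self.words = [0] * (2 ** n_address_bits)   # 4096 words for n = 12
        self.word_mask = (1 << word_bits) - 1      # 16-bit word length
        self.mar = 0   # memory address register
        self.mbr = 0   # memory buffer register

    def write(self):
        # WRITE: store the MBR contents at the address held in the MAR.
        self.words[self.mar] = self.mbr & self.word_mask

    def read(self):
        # READ: copy the addressed word into the MBR. The read is
        # nondestructive; the stored word is unchanged, like looking
        # into a drawer without removing its contents.
        self.mbr = self.words[self.mar]

mem = Memory()
mem.mar, mem.mbr = 17, 0b1010101010101010
mem.write()          # place the word in "drawer" 17
mem.mbr = 0          # clobber the buffer register
mem.read()           # reading address 17 restores the stored word
```

Note that only the MAR, the MBR, and the two control operations are visible from outside, matching the "black box" view described above.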
If it is possible to read from or write into any location "at once," that is, if there is no more delay in reaching one location as opposed to another location, the memory is called a random-access memory (RAM). Computers almost invariably use random-access read-write memories for their high-speed main memory and then use backup or slower-speed memories to hold auxiliary data.

7-3 LINEAR-SELECT MEMORY ORGANIZATION
The most used random-access memories are IC memories and magnetic core memories. Both are organized in a similar manner, as will be shown. In order to present the basic principles, an idealized IC memory will be shown, followed by details of several actual commercial memories.

In any memory there must be a basic memory cell. Figure 7-3 shows a basic memory cell consisting of an RS flip-flop with associated control circuitry.

THE McGRAW-HILL COMPUTER HANDBOOK

[FIG. 7-3 Basic memory cell, with the simplified box symbol it will be drawn as in later figures.]

In order to use this cell in a memory, however, a technique for selecting those cells addressed by the memory address register must be used, as must a method to control whether the selected cells are written into or read from.
Figure 7-4 shows the basic memory organization for a linear-select IC memory. This is a four-address memory with 3 bits per word. The memory address register (MAR) selects the memory cells (flip-flops) to be read from or written into through a decoder which selects three memory cells for each address that can be in the memory address register.

Figure 7-5(a) shows the decoder in expanded form. It has an input from each flip-flop (bit) to be decoded. If there are two input bits, as in Fig. 7-5(a), there will be four output lines, one for each state (value) the input register can take. For instance, if the MAR contains 11, the lowest output line of the decoder will be a 1, and the remaining three lines will be 0. Similarly, if the MAR contains a 0 in both flip-flops, the upper line will be a 1 and the remaining three lines 0. Similar reasoning will show that there is a single 1 output for each possible input state, the remaining lines being 0.

Figure 7-5(b) shows a decoder for three inputs. The decoder has eight output lines. In general, for n input bits a decoder will have 2^n output lines. The decoder in Fig. 7-5(b) operates in the same manner as that in Fig. 7-5(a). For each input state the decoder will select a particular output line, placing a 1 on the selected line and a 0 on the remaining lines.

[FIG. 7-4 Linear-select IC memory: a two-flip-flop MAR feeds a decoder whose four output lines (00, 01, 10, 11) each select a row of three memory cells, with data inputs I1, I2, I3, outputs O1, O2, O3, and READ and WRITE control lines.]

Returning to Fig. 7-4, we now see that corresponding to each value that can be placed in the MAR, a particular output line from the decoder will be selected and will carry a 1 value. The remaining output lines from the decoder will contain 0s, not selecting the AND gates at the inputs and outputs of the flip-flops for these rows. (Refer also to Fig. 7-3.)
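The decoder's behavior is easy to model: n input bits select exactly one of 2^n output lines. The following Python sketch mirrors Fig. 7-5 (the function name and the list representation are our own, chosen for illustration):

```python
# One-hot decoder model: n input bits -> 2**n output lines,
# a 1 on the unique selected line and 0s everywhere else.

def decode(bits):
    """bits: tuple of 0/1 values from the MAR, most significant first.
    Returns the 2**n output lines as a list containing a single 1."""
    n = len(bits)
    index = 0
    for b in bits:
        index = (index << 1) | b          # binary value of the inputs
    lines = [0] * (2 ** n)
    lines[index] = 1                      # the unique selected line
    return lines

print(decode((1, 1)))   # MAR = 11 selects the lowest (last) line
print(decode((0, 0)))   # MAR = 00 selects the upper (first) line
```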
The memory in Fig. 7-4 is organized as follows: There are four words, and each row of three memory cells comprises a word. At any given time the MAR selects a word in memory. If the READ line is a 1, the contents of the three cells in the selected word are read out on the O1, O2, and O3 lines. If the WRITE line is a 1, the values on I1, I2, and I3 will be read into the memory.

The AND gates connected to the OUT lines on the memory cells in Fig. 7-3 must have the property that when a number of AND gate output lines are connected together, the output goes to the highest level. (If any line is a 1, the entire line will be a 1; otherwise it is a 0.) This is called a wired OR. In Fig. 7-4 all four memory cells in the first column are wire-ORed together, so if any OUT is a 1, the entire output line will be a 1. (Memory cells in IC memories are constructed in this manner.)

[FIG. 7-5 (a) Four-output decoder built from AND gates on inputs A and B (b) Parallel decoder for three inputs, built from eight AND gates.]

Now if the READ line is a 1 in Fig. 7-4, the output values for the flip-flops in the selected row will all be gated onto the output line for each bit in the memory. For example, suppose the second row in the memory contains 110 in its three memory cells. If the MAR contains 01, then the second output line from the decoder (marked 01) will be a 1, and the input gates and output gates for these three memory cells will be selected. If the READ line is a 1, then the outputs from the three memory cells in the second row will present 110 to the AND gates at the bottom of the figure, which will transmit the value 110 as an output from the memory. If the WRITE line is a 1 and the MAR again contains 01, the second row of flip-flops will have selected inputs. The input values on I1, I2, and I3 will then be read into the flip-flops in the second row.
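The whole linear-select memory of Fig. 7-4 can likewise be sketched behaviorally. This hypothetical Python model (all names are ours) uses a one-hot decoder for word selection, gates the inputs on WRITE, and combines cell outputs by wired-OR on READ:

```python
# Toy model of the Fig. 7-4 linear-select memory:
# four words of 3 bits, a 2-bit MAR, READ/WRITE control lines.

def decode2(mar):
    lines = [0, 0, 0, 0]
    lines[mar] = 1                   # one-hot word-select lines
    return lines

cells = [[0, 0, 0] for _ in range(4)]   # 4 rows x 3 flip-flops

def cycle(mar, read, write, inputs=(0, 0, 0)):
    select = decode2(mar)
    outputs = [0, 0, 0]
    for row in range(4):
        for bit in range(3):
            if select[row] and write:
                cells[row][bit] = inputs[bit]        # gated input
            # Wired-OR: any enabled cell output pulls the line to 1.
            outputs[bit] |= select[row] and read and cells[row][bit]
    return outputs

cycle(mar=1, read=0, write=1, inputs=(1, 1, 0))   # write 110 into word 01
print(cycle(mar=1, read=1, write=0))              # reads back [1, 1, 0]
```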
As may be seen, this is a complete memory, fully capable of reading and writing. The memory will store data for an indefinite period and will operate as fast as its gates and flip-flops will permit. There is only one problem with this memory: its complexity. The basic memory cell (the flip-flop with its associated circuitry) is complicated, and for large memories the decoder will be large.

In order to explore memory organization in more detail, we will first examine the selection schemes that are commonly used, then decoder construction, and finally some examples of IC memories now in production.

7-4 DECODERS

An important part of the system which selects the cells to be read from and written into is the decoder. This circuit is called a many-to-one decoder, a decoder matrix, or simply a decoder, and has the characteristic that for each of the possible 2^n binary input numbers which can be taken by the n input cells, the matrix will select a unique one of its 2^n output lines.

Figure 7-5(b) shows a decoder which is completely parallel in construction and designed to decode three flip-flops. There are then 2^3 = 8 output lines, and for each of the eight states which the three inputs (flip-flops) may take, a unique output line will be selected. This type of decoder is often constructed using diodes (or transistors) in AND gates. The rule for the binary decoding matrix is: the number of diodes (or transistors) in each AND gate is equal to the number of inputs to that AND gate. For Fig. 7-5(b) this is equal to the number of input lines (the flip-flops being decoded). Further, the number of AND gates is equal to the number of output lines, which is 2^n (n being the number of input flip-flops decoded). The total number of diodes is therefore equal to n × 2^n, and 24 diodes are required to construct the network in Fig. 7-5(b).

As may be seen, the number of diodes required increases sharply with the number of inputs to the network. For instance, to decode an eight-flip-flop register, we would require 8 × 2^8 = 2048 diodes if the decoder were constructed in this manner.

As a result there are several other types of structures which are often used in building decoder networks. One such structure, called a tree-type decoding network, is shown in Fig. 7-6. This tree network decodes four flip-flops and therefore has 2^4 = 16 output lines, a unique one of which is selected for each state of the flip-flops. An examination will show that 56 diodes are required to build this particular network, while 2^4 × 4 = 64 diodes would be required to build the parallel decoder type shown in Fig. 7-5. Still another type of decoder network is shown in Fig. 7-7. It is called a balanced multiplicative decoder network. Notice that this network requires only 48 diodes.

[FIG. 7-6 Tree decoder for four inputs X1 through X4.]

It can be shown that the type of decoder network illustrated in Fig. 7-7 requires the minimum number of diodes for a complete decoder network. The difference in the number of diodes, or decoding elements, needed to construct a network such as that in Fig. 7-7, compared with those in Figs. 7-5 and 7-6, becomes more significant as the number of flip-flops to be decoded increases. The network shown in Fig. 7-5, however, has the advantage of being the fastest and the most regular in construction of the three types of networks.

Having studied the three types of decoding matrices which are now used in digital machines, we will henceforth simply draw decoder networks as a box with n inputs and 2^n outputs, with the understanding that one of the three types of circuits shown in Figs. 7-5 to 7-7 will be used in the box. Often only the uncomplemented inputs are connected to decoders, and inverters are included in the decoder package. Then a three-input (or three-flip-flop) decoder will have only three input lines and eight outputs.
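The chapter's diode-counting rule for the parallel decoder is simple enough to check by computation. A short sketch (the function name is our own):

```python
# Diode count for the parallel decoder of Fig. 7-5, under the chapter's
# counting rule: 2**n AND gates, each with n inputs, one diode per input.

def parallel_decoder_diodes(n):
    return n * (2 ** n)

print(parallel_decoder_diodes(3))   # Fig. 7-5(b): 24 diodes
print(parallel_decoder_diodes(4))   # 64 diodes (vs. 56 tree, 48 balanced)
print(parallel_decoder_diodes(8))   # 2048 diodes for an 8-flip-flop register
```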
[FIG. 7-7 Balanced multiplicative decoder for four inputs X1 through X4.]

7-5 DIMENSIONS OF MEMORY ACCESS

The memory organization in Fig. 7-4 has a basic linear-select (one-dimensional) selection system. This is the simplest organization. However, the decoder in the selection system becomes quite large as the memory size increases. As an example, assume a parallel decoder as shown in Fig. 7-5(b). These are widely used in IC packages because of their speed and regular (symmetric) construction.

Consider now a decoder for a 4096-word memory, a common size for an IC package. There will be 12 inputs per AND gate, and 4096 AND gates are required. If a diode (or transistor) is required at each AND gate input, then 12 × 4096 = 49,152 diodes (or transistors) will be required. This large number of components is the primary objection to this memory organization.

Let us now consider a two-dimensional selection system. First we will need to add another SELECT input to our basic memory cell. This is shown in Fig. 7-8. Now both the SELECT 1 and the SELECT 2 inputs must be 1s for a flip-flop to be selected.

[FIG. 7-8 Two-dimensional memory cell, with SELECT 1, SELECT 2, READ, and WRITE inputs, and the box symbol it will be drawn as in later figures.]

Figure 7-9 shows a two-dimensional memory selection system using this cell. Two decoders are required for this memory, which has 16 words of only 1 bit per word (for clarity of explanation). The MAR has 4 bits and thus 16 states. Two of the MAR inputs go to one decoder and two to the other. To illustrate the memory's operation, if the MAR contains 0111, then the value 01 goes to the left decoder and 11 goes to the upper decoder. This will select the second row (line) from the left decoder and the rightmost column from the top decoder. The result is that only the cell (flip-flop) at the intersection of the second row and the rightmost column will have both its SELECT lines (and as a result its AND gates) enabled. As a result, only this particular single cell will be selected, and only this flip-flop can be read from or written into.

As another example, if the MAR contains 1001, the line for the third row from the left decoder will be a 1, as will be the second column line. The memory cell at the intersection of this row and column will be enabled, but no other cell will be enabled. If the READ line is a 1, the enabled cell will be read from; if the WRITE line is a 1, the enabled cell will be written into.

Now let us examine the number of components used. If a 16-word 1-bit memory were designed using the linear-select or one-dimensional system, a decoder with 16 × 4 inputs, and therefore 64 diodes (or transistors), would be required. For the two-dimensional system two 2-input 4-output decoders are required, each requiring 8 diodes (or transistors); so 16 diodes are required for both decoders.

For a 4096-word 1-bit-per-word memory the numbers are more striking. A 4096-word linear-select (one-dimensional) memory requires a 12-bit MAR, and its decoder therefore requires 4096 × 12 = 49,152 diodes or transistors. The two-dimensional selection system would have two decoders, each with six inputs. Each would require 2^6 × 6 = 384 diodes or transistors, that is, a total of 768 diodes or transistors for the decoders. This is a remarkable saving, and it extends to even larger memories.
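The component-count comparison above can be reproduced with the same counting rule. A sketch (function names are ours) assuming the MAR bits are split evenly between the two decoders:

```python
# Decoder diode counts for linear-select (one-dimensional) versus
# two-dimensional selection, using the parallel-decoder rule n * 2**n.

def linear_select_diodes(address_bits):
    # One big decoder spanning the whole MAR.
    return address_bits * (2 ** address_bits)

def two_dimensional_diodes(address_bits):
    # The MAR is split evenly between a row decoder and a column decoder.
    half = address_bits // 2
    return 2 * (half * (2 ** half))

print(linear_select_diodes(4), two_dimensional_diodes(4))    # 64 vs 16
print(linear_select_diodes(12), two_dimensional_diodes(12))  # 49152 vs 768
```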
[FIG. 7-9 Two-dimensional IC memory organization: MAR bits 1 and 2 feed a row decoder (sometimes called the X decoder), and MAR bits 3 and 4 feed a column decoder (sometimes called the Y decoder). All cell inputs are connected to the WRITE INPUT line, and all cell outputs are connected to the READ OUT point.]

In order to make a memory with more bits per word, we simply make a memory like that shown in Fig. 7-9 for each bit in the word (except that only one MAR and the original two decoders are required).

The above memory employs a classic two-dimensional selection system. This is the organization used in most core memories and in some IC memories. Figure 7-10 shows an IC memory with 256 bits on a single chip. As can be seen, this is a two-dimensional select memory. In a two-dimensional memory, however, the simplification in decoder complexity is paid for with cell complexity. In some cases this extra cell complexity is inexpensive, but it is often a problem, and so a variation of this scheme is used.

[FIG. 7-10 Single-chip 256-bit memory, showing the package outline, the 16-pin configuration (address, CHIP SELECT, R/W, DATA IN, and DATA OUT pins), and a block diagram with X and Y address decode circuits, the 256-bit RAM plane, input buffers, and sense circuits (courtesy of Intel Corp.).]

A variation on the basic two-dimensional selection system is illustrated in Fig. 7-11. This memory uses two decoders, as in the previous scheme; however, the memory cells are the basic memory cells shown in Fig. 7-3. The selection scheme uses gating on the READ and WRITE inputs to achieve the desired two-dimensionality.
Let us consider a WRITE operation. First assume that the MAR contains 0010. This will cause the 00 output from the upper decoder to be a 1, selecting the top row of memory cells. In the lower decoder the 10 output will become a 1, and this is gated with the WRITE input by an AND gate near the bottom of the diagram, turning on the W inputs in the third column. As a result, for the memory cell in the top row and third column both the S input and the W input will be a 1. For no other memory cell will both S and W be a 1, and so no other memory cell will have its RS flip-flop set to the input value. (Notice that all cells are connected to the input value DI.) Consideration of other values for the MAR will show that for each value a unique memory cell is selected for the write operation. Therefore for each MAR state only one memory cell will be written into.

[FIG. 7-11 IC memory chip layout: a row decoder (sometimes called the X decoder) and a column decoder (sometimes called the Y decoder), with all DI inputs to the memory cells connected to a single line.]

The read operation is similar. If the MAR contains 0111, then the upper decoder's 01 line will be a 1, turning on the S inputs in the second row of memory cells. The lower decoder will have a 1 on its lowest output line, turning on the rightmost AND gate in the lowest row, which enables the output from the rightmost column of memory cells. (Again, the cells are wire-ORed by having their outputs connected together, this time in groups of four.) Only the second cell down has its output enabled, however, and so the rightmost AND gate will have as its output the value in that cell. This value then goes through the OR gate and the AND gate at the bottom of the diagram, the AND gate having been turned on by the READ signal.

Examination will show that each input value from the MAR selects a unique memory cell to be read from, and that cell is the same one that would have been written into if the operation were a write operation. This is basically the organization used by most IC memories at this time. The chips contain up to 64K bits. The number of rows versus the number of columns in an array is determined by the designers, who decide upon the numbers that will reduce the overall component count. All the circuits necessary for a memory are placed on the same chip, except for the MAR flip-flops, which quite often are not placed on the chip; instead, the inputs go directly to the decoders. This will be clearer when interfacing with a bus has been discussed.
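The row-and-column selection just described can be mimicked in a toy model. This hypothetical Python class (all names are ours) treats the upper two MAR bits as the row address and the lower two as the column address, as in the Fig. 7-11 walkthrough; the real chip does this with AND gates and wired-OR outputs rather than array indexing:

```python
# Toy model of a 16-cell (4 x 4) two-dimensionally selected memory,
# one bit per word.

class TwoDimensionalMemory:
    def __init__(self):
        self.cells = [[0] * 4 for _ in range(4)]   # 4 rows x 4 columns

    def _row_col(self, mar):
        # Upper two MAR bits -> row decoder; lower two -> column decoder.
        return (mar >> 2) & 0b11, mar & 0b11

    def write(self, mar, data_in):
        row, col = self._row_col(mar)
        # Only the cell whose row S line AND column W line are both 1
        # latches the shared data-in (DI) value.
        self.cells[row][col] = data_in

    def read(self, mar):
        row, col = self._row_col(mar)
        # Only the selected cell's output is enabled onto the output line.
        return self.cells[row][col]

mem = TwoDimensionalMemory()
mem.write(0b0010, 1)     # MAR = 0010: top row (00), third column (10)
print(mem.read(0b0010))  # -> 1
print(mem.read(0b0111))  # row 01, column 11: still 0
```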
7-6 CONNECTING MEMORY CHIPS TO A COMPUTER BUS

The present trend in computer design is to connect the computer central processing unit (CPU), which does the arithmetic, generates the memory control signals, etc., to the memory by means of a bus. The bus is simply a set of wires which are shared by all the memory elements to be used.

Microprocessors and minicomputers almost always use a bus to interface memory, and in this case the memory elements will be IC chips in containers just like those shown in Fig. 7-10. The bus used to connect the memories generally consists of (1) a set of address lines to give the address of the word in memory to be used (these are effectively an output from a MAR on the microprocessor chip); (2) a set of data wires to input data from the memory and output data to the memory; and (3) a set of control wires to control the read and write operations.
Figure 7-12 shows a bus for a microcomputer. In order to simplify drawings and clarify explanations, we will use a memory bus with only three address lines, three output data lines, two control signals, and three input data lines. The memory to be used is therefore an 8-word 3-bit-per-word memory.

[FIG. 7-12 Bus for a CPU/memory computer system: (a) bus lines (address lines, data-in lines, data-out lines to the CPU, and the R/W and ME control lines) (b) bus/chip organization.]

The two control signals work as follows. When the R/W line is a 1, the memory is to be read from; when the R/W line is a 0, the memory is to be written into. The MEMORY ENABLE signal ME is a 1 when the memory is either to be read from or to be written into; otherwise it is a 0.

The IC memory package to be used is shown in Fig. 7-13. Each IC package has three address inputs A0, A1, and A2, a R/W input, an input bit DI, an output bit DO, and a CHIP SELECT bit CS. Each package contains an 8-word memory.

[FIG. 7-13 IC package and block diagram symbol for a RAM chip: (a) pin configuration (a 10-pin package with address, R/W, DI, DO, CS, +5 V, and ground pins) (b) logic symbol.]
The IC memory chip works as follows. The address lines A0, A1, and A2 must be set to the address to be read from or written into (refer to Fig. 7-13). If the operation is a READ, the R/W line is set to a 1 and the CS line is brought to a 0 (the CS line is normally a 1). The data bit may then be read on line DO.

Certain timing constraints must be met, however, and these will be supplied by the IC manufacturer. Figure 7-14 shows several of these. The value TR is the minimum cycle time a read operation requires; during this period the address lines must be stable. The value TA is the access time, which is the maximum time from when the address lines are stable until data can be read from the memory. The value TCO is the maximum time from when the CS line is made a 0 until data can be read. The bus timing must accommodate these times.
[FIG. 7-14 Timing for a bus IC memory: (a) READ cycle, showing the address lines set, R/W raised (a 1 means read from memory), CE lowered after the address lines are set (a 0 enables the chip), DI not used, and DO placed on the bus after the access and chip-select delays; (b) WRITE cycle, showing R/W brought to 0 (write into memory), CE enabling the chip, DI set to the value to be written into the chip, and DO not used.]

It is important that the bus not operate too fast for the chip: the bus must wait at least the time TA after setting its address lines before reading, and at least TCO after lowering the CS line before reading. Also, the address lines must be held stable for at least the period TR.

For a WRITE operation the address to be written into is set up on the address lines, the R/W line is made a 0, CS is brought down, and the data to be written are placed on the DI line. The time interval TW is the minimum time for a WRITE cycle; the time TDW is the time the data to be written into the chip must be held stable. Different types of memories have different timing constraints, which the bus must accommodate. We will assume that our bus meets these constraints.

In order to form an 8-word 3-bit memory from these IC packages (chips), the interconnection scheme in Fig. 7-15 is used. Here each address input of each chip is connected to the corresponding address output on the microcomputer bus.
The CS (CHIP SELECT) input of each chip is connected to the MEMORY ENABLE output from the microprocessor via an inverter, and the R/W bus line is connected to the R/W input on each chip.

[FIG. 7-15 Interfacing chips to a bus: one chip per bit (bit 1, bit 2, bit 3).]

If the microprocessor CPU wishes to read from the memory, it simply places the address to be read from on the address lines, puts a 1 on the R/W line, and then raises the ME line. Each chip then reads its selected bit onto its output line, and the CPU can read these values on its I1, I2, and I3 lines. (Notice that a chip's output is a bus input.) Similarly, to write a word into the memory, the CPU places the address to
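The interconnection scheme above can be summarized in code. This is a hypothetical sketch (the class, function, and signal names are ours) of three 8-word 1-bit chips answering bus cycles, with CS driven from ME through an inverter; the timing constraints (TA, TCO, TR, TW) are ignored in this purely behavioral model:

```python
# Behavioral sketch of the Fig. 7-15 interconnection: three 8 x 1 RAM
# chips share the bus's address and R/W lines; each supplies one bit
# of the 3-bit word.

class RamChip:                       # 8 words x 1 bit
    def __init__(self):
        self.bits = [0] * 8

    def access(self, addr, rw, cs, d_in=0):
        if cs == 1:                  # CS high: chip not selected
            return None
        if rw == 1:                  # R/W = 1 means read
            return self.bits[addr]
        self.bits[addr] = d_in       # R/W = 0 means write
        return None

chips = [RamChip() for _ in range(3)]   # bit 1, bit 2, bit 3

def bus_cycle(addr, rw, me, data=(0, 0, 0)):
    cs = 0 if me == 1 else 1            # inverter: ME = 1 drives CS low
    outs = [c.access(addr, rw, cs, d) for c, d in zip(chips, data)]
    return tuple(outs) if rw == 1 else None

bus_cycle(addr=5, rw=0, me=1, data=(1, 0, 1))   # write 101 at address 5
print(bus_cycle(addr=5, rw=1, me=1))            # -> (1, 0, 1)
```

Note the design point the chapter makes: the address decoding lives on each chip, so adding word width is just a matter of hanging another chip on the same address and control lines.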