English. 950 [1000] pages. 1983.
Table of contents:
Computer history and concepts. Computer structures. Number systems and codes. Boolean algebra and logic networks. Sequential networks. The arithmetic-logic unit. The memory element. Software. Input, output, and secondary storage devices. Timesharing systems. Assembly and system level programming. Survey of high-level programming languages. Basic. Cobol. Fortran. Pascal. PL/I. Hardware and software documentation. Databases and file-system organization. Computer graphics. Artificial intelligence and robotics. Character printers. Graphics plotters. Survey of microprocessor technology. Microcomputers and programming. Subroutines, interrupts, and arithmetic. Interfacing concepts. Microcomputer operating systems. Audio output: speech and music. Voice recognition. Index.
CONCEPTS  HARDWARE  SOFTWARE  APPLICATIONS
Overview by Adam Osborne
Foreword by Thomas C. Bartee
McGraw-Hill
THE McGRAW-HILL COMPUTER HANDBOOK
Harry Helms
992 pages, 475 illustrations

Here is the all-inclusive and highly definitive handbook the computer world has been waiting for! Whether your interest is professional, personal, business, or academic, The McGraw-Hill Computer Handbook is a standard reference that will be essential in answering virtually any question that may arise in the use of today's computers. Written by a staff of world-renowned experts, this working tool offers comprehensive, authoritative, and practical information and techniques on mainframe computer, minicomputer, and microcomputer hardware, software, theory, and applications.

Clearly written and extensively illustrated for quick comprehension, this book has a special feature: it assumes no prior knowledge of computer science; thus, nonspecialists and enthusiasts can benefit from the knowledge as well as any professional.

Another key feature is that it is organized for easy reference. This relevant anthology begins with the elementary concepts applicable to all computer systems, large or small. It then examines the basic components of all computer systems and moves on to specific systems.

By gathering the broad spectrum of computer science information into one place, The McGraw-Hill Computer Handbook will prove invaluable to both the experienced user and the beginner. Here is a sampling of topics that are detailed in this text:

* basic computer theory
* computer structures

(continued on back flap)
Digitized by the Internet Archive in 2012. http://archive.org/details/mcgrawhillcomputOOhelm
The McGraw-Hill Computer Handbook

Editor in Chief: Harry Helms
Overview by Adam Osborne
Foreword by Thomas C. Bartee

McGraw-Hill Book Company
New York  St. Louis  San Francisco  Auckland  Bogota  Hamburg  Johannesburg  London  Madrid  Mexico  Montreal  New Delhi  Panama  Paris  Sao Paulo  Singapore  Sydney  Tokyo  Toronto
Library of Congress Cataloging in Publication Data

Main entry under title:
The McGraw-Hill computer handbook.

Includes index.
1. Computers — Handbooks, manuals, etc. 2. Programming (Electronic computers) — Handbooks, manuals, etc. 3. Programming languages (Electronic computers) — Handbooks, manuals, etc. I. Helms, Harry L. II. McGraw-Hill Book Company.
QA76.M37  1983  001.64  83-1044
ISBN 0-07-027972-1

Copyright © 1983 McGraw-Hill, Inc. All rights reserved. Printed in the United States of America. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a data base or retrieval system, without the prior written permission of the publisher.

34567890 KGP/KGP 8987654

The editors for this book were Patricia Allen-Browne and Margaret Lamb, the production supervisor was Teresa F. Leaden, and the designer was Mark E. Safran. It was set in Times Roman by University Graphics, Inc. Printed and bound by The Kingsport Press.
Contents

Contributors
Overview
Foreword

1. Computer History and Concepts
  1-1 Introduction
  1-2 Historical Perspective
  1-3 A Classification of Automatic Computers
  1-4 The Nature of a Computer System
  1-5 Principles of Hardware Organization
  1-6 Conventions on Use of Storage
  1-7 Elements of Programming
  1-8 Principles of the Space-Time Relationship

2. Computer Structures
  2-1 Introduction
  2-2 Functional Units
  2-3 Input Unit
  2-4 Memory Unit
  2-5 Arithmetic and Logic Unit
  2-6 Output Unit
  2-7 Control Unit
  2-8 Basic Operational Concepts
  2-9 Bus Structures

3. Number Systems and Codes
  3-1 Number Systems
  3-2 Binary Codes
  3-3 Error Detection and Correction

4. Boolean Algebra and Logic Networks
  4-1 Introduction
  4-2 Boolean Algebra
  4-3 Truth Tables and Boolean Expressions
  4-4 Boolean Algebra Theorems
  4-5 Simplification Using the Boolean Algebra Theorems
  4-6 The Karnaugh Map Method
  4-7 Logic Networks
  4-8 Additional Logic Gates

5. Sequential Networks
  5-1 Introduction
  5-2 The Flip-Flop Element
  5-3 State Tables and State Diagrams
  5-4 Converting a State Table into a Logic Diagram
  5-5 Converting a Logic Diagram into a State Table
  5-6 Design Examples
  5-7 Important Sequential Networks

6. The Arithmetic-Logic Unit
  6-1 Introduction
  6-2 Construction of the ALU
  6-3 Integer Representation
  6-4 The Binary Half Adder
  6-5 The Full Adder
  6-6 A Parallel Binary Adder
  6-7 Positive and Negative Numbers
  6-8 Addition in the 1s Complement System
  6-9 Addition in the 2s Complement System
  6-10 Addition and Subtraction in a Parallel Arithmetic Element
  6-11 Full Adder Designs
  6-12 The Binary-Coded-Decimal (BCD) Adder
  6-13 Positive and Negative BCD Numbers
  6-14 Addition and Subtraction in the 9s Complement System
  6-15 The Shift Operation
  6-16 Basic Operations
  6-17 Binary Multiplication
  6-18 Decimal Multiplication
  6-19 Division
  6-20 Logical Operations
  6-21 Floating-Point Number Systems
  6-22 Performing Arithmetic Operations with Floating-Point Numbers

7. The Memory Element
  7-1 Introduction
  7-2 Random-Access Memories
  7-3 Linear-Select Memory Organization
  7-4 Decoders
  7-5 Dimensions of Memory Access
  7-6 Connecting Memory Chips to a Computer Bus
  7-7 Random-Access Semiconductor Memories
  7-8 Bipolar IC Memories
  7-9 Static MOS Memories
  7-10 Dynamic Memories
  7-11 Read-Only Memories
  7-12 Magnetic Core Storage
  7-13 Storage of Information in Magnetic Cores
  7-14 Assembly of Core Planes into a Core Memory
  7-15 Timing Sequence
  7-16 Driving the X- and Y-Selection Lines
  7-17 Memory Buffer Register and Associated Circuitry
  7-18 Core-Memory Organization and Wiring Schemes
  7-19 Magnetic Drum Storage
  7-20 Parallel and Serial Operation of a Magnetic Drum
  7-21 Magnetic Disk Memories
  7-22 Flexible Disk Storage Systems: the Floppy Disk
  7-23 Magnetic Tape
  7-24 Tape Cassettes and Cartridges
  7-25 Magnetic Bubble and CCD Memories
  7-26 Digital Recording Techniques
  7-27 Return-to-Zero and Return-to-Bias Recording Techniques
  7-28 Non-Return-to-Zero Recording Techniques

8. Software
  8-1 Introduction
  8-2 Languages and Translators
  8-3 Loaders
  8-4 Linkers
  8-5 Operating Systems

9. Input, Output, and Secondary Storage Devices
  9-1 Introduction
  9-2 Input-Output Devices
  9-3 Long-Term Storage
  9-4 Intermediate and Medium-Term Storage Devices
  9-5 Speed and Capacity Comparisons

10. Timesharing Systems
  10-1 Introduction
  10-2 The User Viewpoint and Some Consequences
  10-3 Choice of Time Slice
  10-4 The MIT CTSS System
  10-5 The APL System
  10-6 Performance Measurement
  10-7 A Timesharing System Simulator
  10-8 IBM Timesharing Option (TSO) for System/360/370
  10-9 The G.E. Information Service Network

11. Assembly and System Level Programming
  11-1 Introduction
  11-2 The Raw Machine: Initial Program Load
  11-3 The Assembler
  11-4 Relocating Loaders and Linkage Editors
  11-5 The Library
  11-6 Other Translators
  11-7 Running the Program, Monitoring Commands
  11-8 Task and Job Scheduling

12. Survey of High-Level Programming Languages
  12-1 Introduction
  12-2 Development of High-Level Languages
  12-3 High-Level Language Descriptions
  12-4 Summary

13. BASIC
  13-1 Introduction
  13-2 System Commands
  13-3 Program Structure
  13-4 Variables and Constants
  13-5 Arithmetic Operators
  13-6 Relational Operators
  13-7 Logical Operators
  13-8 Order of Operations
  13-9 Program Logic and Control
  13-10 Subroutines
  13-11 Numeric Functions
  13-12 String Functions
  13-13 Assembly Language Statements and Routines
  13-14 Graphics Statements
  13-15 Input and Output Statements
  13-16 Specialized Input and Output Statements
  13-17 Reserved Words

14. COBOL
  14-1 Introduction
  14-2 Writing COBOL Programs
  14-3 COBOL Statements
  14-4 Input-Output Conventions
  14-5 Subprograms
  14-6 Complete Programs
  14-7 Additional Features
  14-8 Comparison of 1968 and 1974 Standards

15. FORTRAN
  15-1 Introduction
  15-2 Program Format
  15-3 Key Punch and Terminal Entry
  15-4 Constants and Variables
  15-5 Type Declarations
  15-6 Arrays
  15-7 Assignment of Values
  15-8 Arithmetic Operators
  15-9 Relational Operators
  15-10 The Equals Symbol
  15-11 Control and Transfer Statements
  15-12 Subprograms
  15-13 Intrinsic Functions
  15-14 Parameters
  15-15 Naming Programs
  15-16 Character Manipulation
  15-17 Equivalence Operators
  15-18 File Organization
  15-19 List-Directed (Stream) Input/Output
  15-20 Formatted Input/Output

16. Pascal
  16-1 Introduction
  16-2 Program Structure
  16-3 Identifiers
  16-4 Data Types
  16-5 Definitions and Declarations
  16-6 Arrays
  16-7 Assignment Operations
  16-8 Relational Operators
  16-9 Control Statements
  16-10 Functions and Procedures
  16-11 Predefined Functions
  16-12 Input and Output
  16-13 Packed Arrays
  16-14 Sets
  16-15 Files and Records
  16-16 Reserved Words

17. PL/I
  17-1 Introduction
  17-2 Writing PL/I Programs
  17-3 Basic Statements
  17-4 Input-Output Conventions
  17-5 Subprograms
  17-6 Complete Programs
  17-7 Additional Features

18. Hardware and Software Documentation
  18-1 Introduction
  18-2 Characteristics of Good Documentation
  18-3 Types of Documentation
  18-4 Reference Documentation
  18-5 Tutorial Documentation
  18-6 Developing Documentation

19. Databases and File-System Organization
  19-1 Introduction
  19-2 Files
  19-3 Computations on a Database
  19-4 A Hierarchical View of Data
  19-5 Descriptions
  19-6 Binding
  19-7 Classification of Operating Systems
  19-8 Current Practice
  19-9 Applications
  19-10 File-System Organization
  19-11 The Pile
  19-12 The Sequential File
  19-13 The Indexed-Sequential File
  19-14 The Indexed File
  19-15 The Direct File
  19-16 The Multiring File

20. Computer Graphics
  20-1 Introduction
  20-2 The Origins of Computer Graphics
  20-3 How the Interactive-Graphics Display Works
  20-4 Some Common Questions
  20-5 New Display Devices
  20-6 General-Purpose Graphics Software
  20-7 The User Interface
  20-8 The Display of Solid Objects
  20-9 Point-Plotting Techniques
  20-10 Coordinate Systems
  20-11 Incremental Methods
  20-12 Line-Drawing Algorithms
  20-13 Circle Generators
  20-14 Line-Drawing Displays
  20-15 Display Devices and Controllers
  20-16 Display Devices
  20-17 The CRT
  20-18 Inherent-Memory Devices
  20-19 The Storage-Tube Display
  20-20 The Refresh Line-Drawing Display

21. Artificial Intelligence and Robotics
  21-1 Introduction
  21-2 Computers and Intelligence
  21-3 Expert Systems
  21-4 Robotics
  21-5 Tasks in Robotics
  21-6 Robots and Personal Computers

22. Character Printers
  22-1 Introduction
  22-2 Classification of Printers
  22-3 Printer Character Set
  22-4 Character-At-A-Time Impact Printers for Fully Formed Characters
  22-5 Line-At-A-Time Impact Printers for Fully Formed Characters
  22-6 Line-At-A-Time Nonimpact Printers for Fully Formed Characters
  22-7 Dot-Matrix Printers
  22-8 Character-At-A-Time Impact Dot-Matrix Printers
  22-9 Line-At-A-Time Impact Dot-Matrix Printers
  22-10 Nonimpact Dot-Matrix Printers

23. Graphics Plotters
  23-1 Introduction
  23-2 Marking and Scanning Methods
  23-3 Paper Motion
  23-4 Tabletop and Free-Standing Plotters
  23-5 Range of Physical Capabilities
  23-6 Dedicated-Use Plotters: Automatic Drafting Systems
  23-7 Plotting Programs
  23-8 Paper Handling

24. Survey of Microprocessor Technology
  24-1 The Evolution of the Microprocessor
  24-2 The Elements of a Microprocessor
  24-3 Microprocessor and Microcomputer Software
  24-4 Fundamentals of Microprocessor System Design

25. Microcomputers and Programming
  25-1 Introduction
  25-2 Program Planning
  25-3 Data Types
  25-4 Data Storage
  25-5 Instructions
  25-6 8080/8085 Microcomputer Instructions
  25-7 Assembly Language
  25-8 Data Movement Instructions
  25-9 Boolean Manipulation
  25-10 Rotate Instructions
  25-11 Branching Instructions
  25-12 A Logic Controller Example
  25-13 Another Approach to the Logic Controller

26. Subroutines, Interrupts, and Arithmetic
  26-1 Subroutines
  26-2 Interrupts
  26-3 Additional Pseudoinstructions
  26-4 Manual-Mode Logic Example
  26-5 Arithmetic Instructions
  26-6 Additional Data Movement Instructions
  26-7 Programmed Arithmetic Operations

27. Interfacing
  27-1 Introduction
  27-2 Input/Output Ports
  27-3 Handshaking
  27-4 Program Interrupts
  27-5 Main-Memory Interfacing
  27-6 Direct Memory Access
  27-7 Further Microprocessor Bus Concepts
  27-8 Analog Conversion
  27-9 Serial I/O
  27-10 Bit-Slice Microprocessors
  27-11 Microprocessor Clocks

28. Microcomputer Operating Systems
  28-1 Introduction
  28-2 Operating System Components
  28-3 Micros vs. Mainframes: The Portability/Compatibility Problem
  28-4 8-Bit Operating Systems
  28-5 Specific Problems
  28-6 CP/M
  28-7 Advanced Operating Systems
  28-8 16-Bit Operating Systems
  28-9 Conclusions

29. Audio Output: Speech and Music
  29-1 Introduction
  29-2 Audio-Response Units
  29-3 Speech Synthesizers
  29-4 Music and Sound Synthesizers

30. Voice Recognition
  30-1 Historical Overview
  30-2 System Goals
  30-3 System Overview
  30-4 Input-Signal Processing
  30-5 Data Compression
  30-6 Discrete Word-Boundary Determination
  30-7 Definitions of Linguistic Terms
  30-8 Pattern Parameters and Analysis
  30-9 Pattern Detection within a Word
  30-10 Word-Identification Techniques
  30-11 Hardware Implementation
  30-12 Status

Glossary
Index (follows Glossary)
Contributors

Thomas C. Bartee, Harvard University
Thomas F. Conroy, International Business Machines Corp.
Jonathan Erickson, Radio Shack Technical Publications
James W. Gault, North Carolina State University
C. William Gear, University of Illinois
Donald D. Givone, State University of New York at Buffalo
V. Carl Hamacher, University of Toronto
Herbert Hellerman, State University of New York at Binghamton
Harry L. Helms, Technical Writer and Consultant
C. Louis Hohenstein, Hohenstein and Associates
Charles H. House, Hewlett-Packard
Zvi Kohavi, University of Utah
Frank Koperda, International Business Machines Corp.
Stan Miastkowski, Rising Star Industries, Inc.
William M. Newman, Queen Mary College, London
Russell L. Pimmel, University of Missouri
Robert P. Roesser, University of Detroit
Robert F. Sproull, Sutherland, Sproull and Associates
David F. Stout, Dataface
Allen Tucker, Jr., Georgetown University
Zvonko G. Vranesic, University of Toronto
Claude A. Wiatrowski, Mountain Automation Corp.
Gio Wiederhold, Stanford University
Safwat G. Zaky, University of Toronto
Overview

TRENDS IN THE MICROCOMPUTER INDUSTRY

Without a doubt, IBM's Personal Computer strategy cast the shape of the microcomputer industry in 1982, and probably for the rest of the decade. It was not simply sales volume or market share that made IBM's Personal Computer such a formidable factor in 1982. Rather, it was the marketing strategy IBM adopted. It is a strategy all other microcomputer industry participants who wish to survive will have to adopt. Prior to the IBM Personal Computer, industry leaders (such as Apple, Commodore, and Radio Shack) all strove to build unique hardware which, whenever possible, would run programs that could not be run on any competing machine. Furthermore, these companies vigorously fought competitors attempting to build look-alike microcomputers.
That was the old minicomputer industry philosophy, adopted by too many microcomputer manufacturers. Well in advance of IBM's entry, the ultimate fallacy of this philosophy was evident. The reason was CP/M, an operating system standard on none of the three leading personal computers (Apple, Commodore, and Radio Shack). Nevertheless, CP/M not only survived but thrived. CP/M was kept alive by more than the flock of secondary microcomputer manufacturers. A large number of Radio Shack and Apple microcomputers were also running CP/M, even though it required additional expense for unauthorized software and additional hardware for the Apple. Was there a message in the extremes to which customers would go to thwart the intent of microcomputer manufacturers and run CP/M? Indeed there was, and it was that de facto industry standards are overwhelmingly desirable to most customers. That message was not lost on IBM. Few messages of validity are; that is why IBM has grown to be so successful.
And so when IBM introduced its Personal Computer, it went about making its entry a new de facto standard. Any hardware manufacturer who wanted to could build a product compatible with the IBM Personal Computer. Software vendors were given every encouragement to adopt the IBM standard. The first independent microcomputer magazine dedicated to the IBM Personal Computer grew to be one of the most successful magazines in the business within six months of its first publication. People bought the magazine and advertised in it in unprecedented volumes.

There are already several microcomputers built by companies other than IBM that are compatible with the IBM Personal Computer. Already there are probably more software companies devoting themselves to the IBM standard than any other, with the possible exception of CP/M.
Within a short time, I predict there will be two de facto industry standards for microcomputers: CP/M running on the Z80 for 8-bit systems and IBM (MS-DOS and CP/M-86) running on the 8086 or 8088 for 16-bit systems. Companies not supporting one or both of these standards have a tortuous, uphill fight ahead of them.

One can well argue that 8-bit microprocessors are generally obsolete and that the 8086 and 8088 are among the least powerful 16-bit microprocessors. But what has that to do with anything? If these microprocessors are adequate for the tasks they are being asked to perform, then any theoretical inadequacies will not be perceived by the end user. And even if the de facto standards are not the best in whatever area they have become standard, what does it matter providing their shortcomings are not apparent to the user?
The difference between the microcomputer and minicomputer industries is that microcomputers are rapidly becoming consumer products. It will be far more difficult for microcomputer industry managers to impose their will on a mass market of consumer buyers than it was for minicomputer manufacturers to manipulate a relatively small customer base (which was commonly done in the early 1970s).
This message has not been learned by many present participants in the microcomputer industry. But this message will determine more than anything else who will be the survivors when the inevitable industry shakeout occurs.
Adam Osborne
President, Osborne Computer Corporation
1983
Foreword
Since the 1950s the digital computer has progressed from a "miraculous" but expensive, rarely seen, and overheated mass of vacuum tubes, wires, and magnetic cores to a familiar, generally compact machine built from hundreds of thousands of minuscule semiconductor devices packaged in small plastic containers.

As a result, computers are everywhere. They run our cash registers, check out our groceries, ignite our spark plugs, and manage the family bank account.
Moreover, because of the amazing progress in the semiconductor devices which form the basis for computers, both computers and the applications now being developed are only fragments of what will soon be in existence.

The Computer Handbook which follows presents material from the basic areas of technology for digital computers. The Handbook progresses from hardware through software to such diverse topics as artificial intelligence, robotics, and voice recognition. Microprocessors, the newest addition to computer hardware, are also discussed in some detail.
Computers are beginning to touch the lives of everyone. If you are ill, the hospital will be full of computers (there could be more computers than patients in a modern hospital; medical instrument designers make considerable use of microprocessors). Schools have been using computers for bookkeeping for years and now use computers in many courses outside computer science. Even elementary schools are beginning to use computers in basic math and science courses. Games designed around microprocessors are sold everywhere. New ovens, dishwashers, and temperature control systems all use microprocessors.

The wide range of applications brings up some basic philosophical questions about what the future will bring. System developers can now produce computers which speak simple phrases and sentences reasonably well; bank computers probably give the teller your balance over the telephone in spoken form, and some of the newer cars produce spoken English phrases concerning car operation. Computers also listen well but have to work hard to unscramble what is said unless the form of communication is carefully specified. This is, however, a hot research area, and some details are in this book.

The speech recognition problem previously described is a part of a larger problem called pattern recognition. If computers can become good at finding patterns, they can scan x-rays, check fingerprints, and perform many other useful functions (they already sort mail). The human brain is, however, very good at finding patterns even in the presence of irrelevant data (noise), and research in this area faces many challenges if computers are to become competitive.
If, however, computers can become good at recognizing speech and returning answers verbally, it might be possible even to enter programs and data for computers verbally. This would make it possible literally to tell the computer what to do and have the computer respond, provided the directions were clear, contained no contradictions, etc. While verbal communication might facilitate the programming of a computer, there would still be the problem of what language to use. There are many proponents of English, but English need not be precise and lacks the rigidity now required by the computer and its translators. Certainly much has been done in this area, and the steady march from machine-like assemblers to today's high-level programming languages testifies to the need for and emphasis on this area.
Much more is needed, however, and the sections of this Handbook fairly represent the state of this art and point to the direction of future work.
Robotics also presents an outstanding area for future development. Factories now use many robotic devices, and research labs are beginning to fill with robots in many strange and wonderful forms. Waving aside the possibility and desirability of robots for maids, waitresses, waiters, ticket sales persons, and other functions already exploited by television and movies, there are many medical operations and precision production operations which can and will be performed by computer-guided robots (often because they are better than humans). We often complain that others do not understand us, and at present computers do not understand us; for a while we will have to be content with computers which will simply follow our directions.
Computer memories are making substantial gains on human memories, however. Largely due to the ingenuity of memory device designers, the memory capacity of large machines now competes with our own, but the different organization of the brain seems to give it advantages for creative thought. Artificial intelligence delves into this area. In some areas formerly relegated to "human" thought, computers do quite well, however. For example, in such straightforward mathematical systems as Euclid's geometry, computers already perform better than might be expected; in a recent test a computer proved all the theorems in a high school test in minutes.

I think that to be really comfortable with computers it is necessary to have some knowledge of both hardware and software. In order to make computers more widely used, there is a tendency to make consumer-oriented personal computers appear to be "friendlier" than they really are. This limits their flexibility and presents users with a mystery element which needs to be and can be dissolved by a little knowledge of actual computer principles. A handbook such as this can be very helpful to users in dissolving some of the mystery. At the same time, such a handbook can open new doors in exploration and serve as a continuing reference.
Thomas C. Bartee
Harvard University
1983
Computer History and Concepts
Herbert Hellerman

1-1 INTRODUCTION
1-2 HISTORICAL PERSPECTIVE
1-3 A CLASSIFICATION OF AUTOMATIC COMPUTERS
1-4 THE NATURE OF A COMPUTER SYSTEM
1-5 PRINCIPLES OF HARDWARE ORGANIZATION
1-6 CONVENTIONS ON USE OF STORAGE
1-7 ELEMENTS OF PROGRAMMING
1-8 PRINCIPLES OF THE SPACE-TIME RELATIONSHIP
1-1 INTRODUCTION

The modern general-purpose digital computer system, which is the subject of this book, is the most versatile and complex creation of mankind. Its versatility follows from its applicability to a very wide range of problems, limited only by human ability to give definite directions for solving a problem. A program gives such directions in the form of a precise, highly stylized sequence of statements detailing a problem-solution procedure. A computer system's job is to reliably and rapidly execute programs. Present speeds are indicated by the rates of arithmetic operations such as addition, subtraction, and comparison, which lie in the range of about 100,000 to 10,000,000 instructions per second, depending on the size and cost of the machine. In only a few hours, a modern large computer can do more information processing than was done by all of mankind before the electronic age, which began about 1950! It is no wonder that this tremendous amplification of human information-processing capability is precipitating a new revolution.

Adapted from Digital Computer Systems Principles, 2d ed., by Herbert Hellerman. Copyright © 1973. Used by permission of McGraw-Hill, Inc. All rights reserved.
To most people, the words "computer" and "computer system" are probably synonymous and refer to the physical equipment, such as the central processing unit, console, tapes, disks, card reader, and printers visible to anyone visiting a computer room. Although these devices are essential, they make up only the visible "tip of the iceberg." As soon as we start to use a modern computer system, we are confronted not by the machine directly but by sets of rules called programming languages in which we must express whatever it is we want to do. The central importance of programming language is indicated by the fact that even the physical computer may be understood as a hardware interpreter of one particular language called the machine language. Machine languages are designed for machine efficiency, which is somewhat dichotomous with human convenience. Most users are shielded from the inconveniences of the machine by one or more languages designed for good man-machine communication. The versatility of the computer is illustrated by the fact that it can execute translator programs (called generically compilers or interpreters) to transform programs from user-oriented languages into machine-language form.

It should be clear from the discussion thus far that a computer system consists of a computer machine, which is a collection of physical equipment, and also programs, including those that translate user programs from any of several languages into machine language. Most of this book is devoted to examining in some detail theories and practices in the two great themes of computer systems: equipment (hardware) and programming (software). It is appropriate to begin, in the next section, by establishing a historical perspective.
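This layering, a machine language executed directly by hardware plus translators that bridge down to it from user-oriented languages, can be made concrete with a deliberately tiny sketch. Nothing below comes from the Handbook: the opcodes, the postfix source language, and all names are invented for this illustration.

```python
# A hypothetical "machine language": a program is a flat list of numeric
# instruction words. The opcode assignments are invented for this sketch.
PUSH, ADD, SUB, MUL, HALT = 1, 2, 3, 4, 5

def run(program):
    """Play the role of the hardware: interpret machine-language words."""
    stack, pc = [], 0
    while True:
        op = program[pc]
        if op == PUSH:                 # the next word is a literal operand
            pc += 1
            stack.append(program[pc])
        elif op in (ADD, SUB, MUL):    # pop two operands, push the result
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == ADD else a - b if op == SUB else a * b)
        elif op == HALT:               # the result is left on top of the stack
            return stack.pop()
        pc += 1

def compile_postfix(source):
    """A toy 'compiler': translate a user-oriented postfix expression,
    e.g. "2 3 + 4 *" meaning (2 + 3) * 4, into the machine language above."""
    ops = {"+": ADD, "-": SUB, "*": MUL}
    code = []
    for token in source.split():
        if token in ops:
            code.append(ops[token])
        else:
            code.extend([PUSH, int(token)])
    code.append(HALT)
    return code
```

The division of labor is the point: one fixed machine-level interpreter, and any number of translators feeding it, is what lets a single computer accept many programming languages.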
1-2 HISTORICAL PERSPECTIVE

Mechanical aids to counting and calculating were known in antiquity. One of many ancient devices, the abacus, survives today as a simple practical tool in many parts of the world, especially the East, for business and even scientific calculations. (A form of the abacus was probably used by the ancient Egyptians, and it was known in China as early as the sixth century B.C.) In the hands of a skilled operator, the abacus can be a powerful adjunct to hand calculations.

There are several forms of abacus; they all depend upon a positional notation for representing numbers and an arrangement of movable beads, or similar simple objects, to represent each digit. By moving beads, numbers are entered, added, and subtracted to produce an updated result. Multiplication and division are done by sequences of additions and subtractions.

Although the need to mechanize the arithmetic operations received most of the attention in early devices, storage of intermediate results was at least as important. Most devices, like the abacus, stored only the simple current result. Other storage was usually of the same type as used for any written material, e.g., clay tablets and later paper. As long as the speed of operations was modest and the use of storage also slow, there was little impetus to seek mechanization of the control of sequences of operations. Yet forerunners of such control did appear in somewhat different contexts, e.g., the Jacquard loom exhibited in 1801 used perforated (punched) cards to control patterns for weaving.
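The abacus operator's trick of reducing multiplication and division to sequences of additions and subtractions can be paraphrased in a few lines of modern code (an illustration added here, not material from the Handbook):

```python
def abacus_multiply(a, b):
    """Multiply two nonnegative integers by repeated addition,
    the way an abacus operator accumulates a running result."""
    total = 0
    for _ in range(b):
        total += a
    return total

def abacus_divide(a, b):
    """Divide by repeated subtraction; returns (quotient, remainder)."""
    quotient = 0
    while a >= b:
        a -= b
        quotient += 1
    return quotient, a
```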
Charles Babbage (1792-1871) was probably the first to conceive of the essence of the general-purpose computer. Although he was very versatile, accomplished both as a mathematician and as an engineer, his lifework was his computing machines. It is worth noting that Babbage was first stimulated in this direction because of the unreliability of manual computation, not by its slow speed. In particular, he found several errors in certain astronomy tables. In determining the causes, he became convinced that error-free tables could be produced only by a machine that would accept a description of the computation by a human being but, once set up, would compute the tables and print them all without human intervention. Babbage's culminating idea, which he proposed in great detail, was his Analytic Engine, which would have been the first general-purpose computer. It was not completed because he was unable to obtain sufficient financial support.
As Western industrial civilization developed, the need for mechanized computation grew. As the 1890 census approached in the United States, it became clear that if new processes were not developed, the reduction of the data from one census would not be complete before it was time for the next one. Dr. Herman Hollerith applied punched cards and simple machines for processing them in the 1890 census. Thereafter, punched-card machines gained wide acceptance in business and government.

The first third of the twentieth century saw the gradual development and use of many calculating devices. A highly significant contribution was made by the mathematician Alan Turing in 1937, when he published a clear and profound theory of the nature of a general-purpose computing scheme. His results were expressed in terms of a hypothetical "machine" of remarkable simplicity, which he indicated had all the necessary attributes of a general-purpose computer. Although Turing's machine was only a theoretical construct and was never seriously considered as economically feasible (it would be intolerably slow), it drew the attention of several talented people to the feasibility of a general-purpose computer.
World War II gave great stimulus to improvement and invention of computing devices and the technologies necessary to them. Howard Aiken and an IBM team completed the Harvard Mark I electric computer (using relay logic) in 1944. J. P. Eckert and J. W. Mauchly developed ENIAC, an electronic computer using vacuum tubes, in 1946. Both these machines were developed with scientific calculations in mind.

The first generation of computer technology began to be mass-produced with the appearance of the UNIVAC I in 1951. The term "first generation" is associated with the use of vacuum tubes as the major component of logical circuitry, but it included a large variety of memory devices such as mercury delay lines, storage tubes, drums, and magnetic cores, to name a few.

The second generation of hardware featured the transistor (invented in 1948) in place of the vacuum tube. The solid-state transistor is far more efficient than the vacuum tube partly because it requires no energy for heating a source of electrons. Just as important, the transistor, unlike the vacuum tube, has almost unlimited life and reliability and can be manufactured at much lower cost. Second-generation equipment, which appeared about 1960, saw the widespread installation and use of general-purpose computers. The third and fourth generations of computer technology (about 1964 and 1970) mark the increasing use of integrated fabrication techniques, moving to the goal of manufacturing most of a computer in one automatic continuous process without manual intervention.
Hardware developments were roughly paralleled by progress in programming, which is, however, more difficult to document. An early important development, usually credited to Grace Hopper, is the symbolic machine language, which relieves the programmer from many exceedingly tedious and error-prone tasks. Another milestone was FORTRAN (about 1955), the first widely used high-level language, which included many elements of algebraic notation, like indexed variables and mathematical expressions of arbitrary extent. Since FORTRAN was developed by IBM, whose machines were most numerous, FORTRAN quickly became pervasive and, after several versions, remains today a very widely used language.
Other languages were invented to satisfy the needs of different classes of computer use. Among the most important are COBOL, for business-oriented data processing; ALGOL, the first widely accepted language in the international community, particularly among mathematicians and scientists; and PL/I, developed by IBM and introduced in 1965 as a single language capable of satisfying the needs of scientific, commercial, and system programming.

Along with the introduction and improvements of computer languages, there was a corresponding development of programming technology, i.e., the methods of producing the compiler and interpreter translators and other aids for the programmer. A very significant idea that has undergone intensive development is the operating system, which is a collection of programs responsible for monitoring and allocating all system resources in response to user requests in a way that reflects certain efficiency objectives. By 1966 or so, almost all medium to large computers ran under an operating system. Jobs were typically submitted by users as decks of punched cards, either to the computer room or by remote-job-entry (RJE) terminals, i.e., card reader and printer equipment connected by telephone lines to the computer. In either case, once a job was received by the computer, the operating system made almost all the scheduling decisions. A large computer could run several hundred or even thousands of jobs per 24-hour day with only one or two professional operators in the machine room.

The 1960s saw a great intensification of the symbiosis of the computer and the telephone system (teleprocessing). Much of this was RJE and routine non-general-purpose use, such as airline reservation systems. Considerable success was also achieved in bringing the generality and excitement of a general-purpose computer system to individual people through the use of timesharing systems. Here, an appropriate operating-system program interleaves the requests of several human users who may be remotely located and communicating over telephone lines using such devices as a teletype or typewriter terminal. Because of high computer speed relative to human "think" time, a single system could comfortably service 50 to 100 (or more) users, with each having the "feel" of his own private computer. The timesharing system, by bringing people closest to the computer, seems to have very great potential for amplifying human creativity.
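The interleaving idea behind timesharing can be sketched as a round-robin loop. The Python fragment below is a deliberately simplified model, not any real operating system's scheduler; the function name, the job table, and the one-unit quantum are all assumptions made for illustration:

```python
from collections import deque

def timeshare(jobs, quantum):
    """Round-robin sketch of a timesharing scheduler: each job is a
    number of work units; the processor gives each user at most
    `quantum` units per turn, cycling until all jobs finish.
    Returns the order in which users received service."""
    queue = deque(jobs.items())          # (user, remaining work)
    service_order = []
    while queue:
        user, remaining = queue.popleft()
        service_order.append(user)
        remaining -= quantum
        if remaining > 0:                # unfinished: back of the line
            queue.append((user, remaining))
    return service_order

order = timeshare({"alice": 3, "bob": 1, "carol": 2}, quantum=1)
print(order)  # ['alice', 'bob', 'carol', 'alice', 'carol', 'alice']
```

Because each turn is short relative to a user's "think" time, every user perceives continuous service, which is the "feel" of a private computer described above.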
1-3 A CLASSIFICATION OF AUTOMATIC COMPUTERS

Automatic computers may be broadly classified as analog or digital (Fig. 1-1). Analog computers make use of the analogy between the values assumed by some physical quantity, such as shaft rotation, distance, or electric voltage, and a variable in the problem of interest. Digital computers in principle manipulate numbers directly. In a sense all computers have an analog quality, since a physical representation must be used for the abstraction that is a number. In the digital computer, the analogy is minimal, while the analog computer exploits it to a very great extent.

FIG. 1-1 A classification of computers:

Automatic computers
   Analog
      Operations only: slide rule, planimeter
      Problem setup: differential analyzer, network analyzer, field analogs,
         special purpose (radar, navigation, fire control)
   Digital
      Operations only: abacus, adding machines, desk calculators, card sorters
      Problem setup: plugboard accounting machines, digital differential analyzers
      General purpose: any procedure described precisely
Both analog and digital computers include a subclass of rather simple machines that mechanize only specific simple operations. For example, the slide rule is an analog computer that represents numbers as distances on a logarithmic scale. Multiplication, division, finding roots of numbers, and other operations are done by adding and subtracting lengths. Examples of operation-only machines of the digital type include adding machines and desk calculators.

A second class, more sophisticated than operation-only machines, may be termed problem-setup machines. In addition to performing arithmetic operations, they can accept a description of a procedure to link operations in sequence to solve a problem. The specification of the procedure may be built into the machine's controls, as in certain special-purpose machines, or a plugboard arrangement may be supplied for specifying the desired sequence of operations. The main idea is that the problem-solution procedure is entered in one distinct operation, and thereafter the entire execution of the work on the problem is automatic.
The electronic differential analyzer that emerged in the late 1940s is the most general form of analog computer. It is constructed from a few types of carefully engineered precision circuits (integrators, summing amplifiers, precision potentiometers, and capacitors), each capable of a single operation. The problem is usually set up on the machine by plugboard. Since there is usually no provision for storing results internally, the output is generally sent directly to a curve plotter. Precision, limited by drift and noise, is typically no higher than 1 part in 1000 of full scale.

Compared with general-purpose digital computers, analog computers suffer from lack of generality of the problems that can be handled, low precision, difficulty in performing complex operations (including multiplication and division at high speed), inability to store large amounts of information effectively, and equipment requirements that must grow directly with problem size. However, for the jobs to which it is suited, particularly mathematical or simulation problems involving differential equations, the analog computer can often give high speed, if required, at lower cost than a digital computer. The high speed of the analog computer is the result of its highly parallel operation; i.e., all its parts are working concurrently on separate parts of the same problem.
A most important theoretical question that can be asked of a problem-setup machine is: What is the range of problems solvable by this machine? As a practical matter, this question is rarely asked in this form because plugboard machines are usually designed for specifically stated kinds of problems. Nevertheless, the question of ultimate logical power, i.e., the range of problems solvable by a given machine, is fundamental. In 1937 Turing made a direct contribution to this subject when he defined a remarkably simple, hypothetical "machine" (since named a universal Turing machine) and proved, in effect, that any solution procedure can be expressed as a procedure for this machine. By implication, any machine that can be made to simulate a universal Turing machine also has its generality. The class of such machines is called general purpose. Most commercially available electronic digital computers are, for practical purposes, general-purpose machines. They differ in speed, cost, reliability, amount of storage, and ease of communication with other devices or people, but not in their ultimate logical capabilities.

1-4 THE NATURE OF A COMPUTER SYSTEM
A computer system is best considered as a collection of resources that are accessible to its users by programs written according to the rules of the system's programming languages. The resources are of two major classes with a wide variety of components in each:

1. Equipment (hardware)
   a. Storages   To hold both programs and data
   b. Processing Logic   Implementing arithmetic, logical manipulation of information
   c. Control Logic   Concerned with movement of information and sequencing of events
   d. Transducers   Devices for translating information from one physical form to another, e.g., a printer that converts electric signals to printed characters on paper

2. Programs (software)
   a. Application Programs   Programs written to satisfy some need of computer users outside the operation of the computer system itself, e.g., scientific, payroll, and inventory-control programs; in fact, most of the work computers do
   b. System Programs   Programs concerned with the means by which the system provides certain conveniences to its users and manages its own resources, e.g., language translators and operating-system programs
1-5 PRINCIPLES OF HARDWARE ORGANIZATION

From now on we shall use the word computer to mean only the hardware part of a general-purpose computing system. All computers have certain qualitative properties in common, which will now be described. The reader will readily appreciate, however, the lack of precision in listing these similarities; our objective at present is to describe these properties in such a way that the essential nature of the machine, and the basis of its generality, can be intuitively understood.

From the viewpoint of the user, the machine manipulates two basic types of information: (1) operands, or data, and (2) instructions, each of which usually specifies a single arithmetic or control operation (e.g., ADD, SUBTRACT) and one or more operands which are the objects of the operation. Within the machine, both instructions and data are represented as integers expressed in the binary number system or in some form of binary coding. This is done because the "atom" of information is then a two-state signal (called 0 or 1) which requires only the simplest and most reliable operation of electronic devices. Although the binary representation of instructions and data must appear within the machine for processing to take place, most users of computers may use the common decimal representation of numbers and alphabetic names of operations and data. Translator programs (usually supplied by the computer manufacturer) executed by the machine translate these convenient representations into the internal binary form. In other words, the binary representation of information inside the computer is important for reasons of electronic technology but is not an essential principle of the general-purpose computer.
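The point that instructions, like data, are simply binary integers can be made concrete with a small sketch. The 8-bit format below (a 3-bit opcode and a 5-bit operand address) is invented for illustration and does not correspond to any particular machine:

```python
# A toy instruction format: high 3 bits name the operation,
# low 5 bits give the operand's storage address.  Both the
# encoded instruction and the data it names are plain integers.
OPCODES = {"ADD": 0b001, "SUBTRACT": 0b010}

def encode(op, address):
    """Pack an opcode name and operand address into one integer."""
    return (OPCODES[op] << 5) | address

def decode(word):
    """Unpack: high 3 bits are the opcode, low 5 bits the address."""
    opcode, address = word >> 5, word & 0b11111
    name = {v: k for k, v in OPCODES.items()}[opcode]
    return name, address

word = encode("ADD", 17)
print(f"{word:08b}")     # 00110001 -- the instruction as stored
print(decode(word))      # ('ADD', 17)
```

A translator program performs essentially this `encode` step on the user's alphabetic operation names and decimal numbers; the machine's control circuits perform the `decode` step.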
following
is
a
list
of attributes
common
to general-purpose
digital
computers:
L The and
machine
is
capable of storing a large amount of information (both data
instructions).
For economy reasons, there are usually at least three levels
1-8
THE McGRAW-HILL COMPUTER HANDBOOK of Storage speed and capacity. iting factor in the 2.
The but
repertoire of instructions is
The amount
of storage
is
a fundamental lim-
range of problems that can be handled. is
typically small (from about 16 to
256 types)
judiciously chosen to cover the requirements for any procedure.
3.
Operands are referenced by name; the names of operands can be processed by instructions.
4.
Instructions are accessed from storage and executed automatically. Nor-
mally, the location in storage of the next instruction (or
program) counter. This pointer
by
1 )
is
is
held in an instruction
most often stepped
value (increased
in
to specify the location of the next instruction, but certain instructions
modify the program counter to contain a value that depends on
specifically
the outcome of comparisons between specified operands. This gives the pro-
gram
the ability to branch to alternative parts of the program,
i.e.,
alterna-
tive instruction sequences.
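Attribute 4, the automatically stepped program counter with conditional branching, can be sketched as a short interpreter. The three-instruction set below (ADD, BRANCH_IF_LESS, HALT) is hypothetical, chosen only to show the stepping and branching behavior:

```python
def run(program, memory):
    """Minimal fetch-execute sketch of a hypothetical machine.
    The program counter normally steps by 1; a conditional branch
    replaces it with a new value, giving the program alternative
    instruction sequences."""
    pc = 0
    while pc < len(program):
        op, *args = program[pc]          # fetch the next instruction
        pc += 1                          # normal stepping by 1
        if op == "ADD":                  # memory[dst] = memory[a] + memory[b]
            dst, a, b = args
            memory[dst] = memory[a] + memory[b]
        elif op == "BRANCH_IF_LESS":     # compare two operands, maybe branch
            a, b, target = args
            if memory[a] < memory[b]:
                pc = target
        elif op == "HALT":
            break
    return memory

# Accumulate cell 0 into cell 2 while cell 0 < cell 3 (=5),
# stepping cell 0 by cell 1 (=1): computes 0+1+2+3+4 in cell 2.
mem = {0: 0, 1: 1, 2: 0, 3: 5}
prog = [
    ("BRANCH_IF_LESS", 0, 3, 2),   # 0: if cell0 < cell3 goto 2
    ("HALT",),                     # 1: done
    ("ADD", 2, 2, 0),              # 2: cell2 += cell0
    ("ADD", 0, 0, 1),              # 3: cell0 += 1
    ("BRANCH_IF_LESS", 0, 3, 2),   # 4: loop while cell0 < cell3
]
print(run(prog, mem)[2])  # 10
```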
The general organization of a typical computer is shown in Fig. 1-2. The heart of the system is the central processing unit (CPU), shown as comprising a main storage, which holds both program and data, and an arithmetic-logic unit (ALU), which contains processing circuitry such as an adder, shifter, and a few fast registers for holding the operands and the instruction currently being processed. The program counter would also be included in the ALU, although in some diagrams the program control facilities are shown as a distinct function.

FIG. 1-2 General organization of a typical digital computer. Main storage, the routing circuits, and the arithmetic-logic unit make up the central processing unit (CPU); I/O channels connect it to devices such as a card reader and card punch, a printer, and a picture display with keyboard.

One part of the CPU illustrated is a set of routing circuits which provide paths between storage and the ALU and input/output controllers or channels. In the type of system shown, many storage or input/output devices may be wired to one channel, but only one device per channel can be transmitting information from or to main storage at any one time. This is, of course, a restriction on the number of devices that can operate concurrently. It is imposed because of the economy of sharing common paths to main storage and simplicity in controlling
movement of information between the devices and storage.

The major parts of a computer may be described as follows:

1. Storage   Means for storing a rather large volume of information and a simple economical access mechanism for routing an element of information to/from storage from/to a single point (register). Storage is usually available in several versions, even in the same system; these vary in access time, capacity, and cost.

2. Data Flow   The switching networks that provide paths for routing information from one part of the computer to another.

3. Transformation   The circuits for arithmetic and other data manipulation. This function is usually concentrated in a single arithmetic-logic unit (ALU). The centralization provides economy since a single set of fast expensive circuits is used in time sequence for all operations. Transformation circuits operate on information obtained from storage by control of the data-flow switching. As will be seen later, many of the more complex transformations such as subtraction, multiplication, and division can be obtained economically by control of sequences of very elementary operations such as addition, shifting, etc.

4. Control   This is a general term that includes the important function of performing time sequences of routings of information through the data flow. The control function appears on many levels in a computer. Usually the control is organized as a set of time sequences, or cycles. Each cycle period is commonly (but not always) divided into equally spaced time units called clock intervals. The term "cycle" refers to a specific type of sequence of selections on the data flow performed in a succession of clock intervals. For example, there is an instruction fetch cycle during which an instruction containing information about a transformation is brought from storage to an ALU register. At each clock interval within the cycle, an elementary operation is performed, such as routing the storage location of the instruction to the storage-access mechanism, signaling for storage access, or routing of the instruction obtained to an ALU register.

5. Input/Output   Since information in the processor and storage of the computer is represented by electric signals, devices are provided to convert information from human-generated to machine-readable form on input, and in the opposite direction on output. A very common scheme for performing this transducer function uses a punched card. An operator reads the information from handwritten or typed documents and enters the information on a keyboard, much like a typewriter keyboard, of a keypunch machine. This machine translates the key strokes into holes on the card (see Fig. 1-3). The cards are then sent to the card reader, which contains the necessary equipment to READ the cards, i.e., sense the hole positions and translate them into the internal electric-signal representation. The punched card stores information in a nonvolatile form and can be read by human beings (by reading either the hole configurations or the printed characters at the top of the card). A card-punch machine may be controlled by the computer to produce punched-card output of the results of processing.

FIG. 1-3 An 80-column IBM card showing row-column numbering and holes punched for the 48-character set used by the language FORTRAN. Each of the 80 columns may contain one or more holes representing an alphanumeric character; the card shown was punched to show the representations of the 10 decimal digits, the 26 letters of the alphabet, and 12 special symbols (including blank) of a common symbol set.
The punched card and its associated machines are examples of input/output devices. Other devices available include typewriters, punched paper tape, printers, cathode-ray-tube displays, and analog-digital converters.

There is no sharp distinction between the storage function and the input/output function (the punched card was seen to contain both). However, a useful distinction can be made based on whether the output is directly machine-readable. On this basis, printers, typewriters, and cathode-ray displays are input/output devices; punched cards, punched paper tape, and magnetic tape are storage devices. A very common terminology classifies all devices and machines not a part of the central processing unit and its directly accessible storage as input/output.

1-6 CONVENTIONS ON USE OF STORAGE

Certain conventions are almost universally assumed in using computer storage. These are independent of the physical nature of the device constituting storage, whether it uses magnetic tape, magnetic disks, semiconductors, etc. Two fundamental operations, viewed from the storage unit, are:
1. READ (copy)   Copies the contents of some specified portion of the storage and sends it to some standard place. Note that the copy operation is, to the user, nondestructive; i.e., information in storage is not modified by reading it out.

2. WRITE (replace)   Results in replacement of the present contents of a specified portion of storage from some standard place.

Sometimes the technology of a storage device naturally tends to violate these conventions. In such a case it is engineered with additional circuits to provide the same functional appearance to the user as described above.
1-7 ELEMENTS OF PROGRAMMING

For our present purposes, storage is assumed to consist of an array of cells which may be visualized as a long row of pigeonholes. Each cell contains information called an operand, which may be likened to a number written on a piece of paper contained in the cell. Each cell is given a name; the operand in the cell is referenced only by the name of the cell it occupies, not by its content. The operand referenced is then used for computation. The machine hardware usually has a wired-in name scheme whereby the cell names are the integers 0, 1, 2, etc. The user may, however, choose different names such as X, Y, Z, I, etc., for the cells. The translation of user names to machine names is a simple routine process since each user name is simply assigned to one machine name. This justifies our use of mnemonic symbols for names of operands. Unless otherwise specified, numbers will denote operands, letters the names of operands. For example

(1)   X←5

is read "5 specifies X," which means the operand or usual number 5 replaces the contents of the cell named X. As another example consider the statement

(2)   Y←1+X

which means "the contents (operand) of the cell whose name is X is added to 1 and the result replaces the contents of the cell named Y." For brevity, we usually read this statement as "X plus 1 specifies Y." It is important to note that although the statement generally results in a change in the contents of Y, the contents of X remain unchanged.

A simple program consists of a sequence of statements like the ones illustrated above. Although detailed rules of writing statements (symbols allowed, punctuation required, etc.) vary widely from program language to program language, many of the principles of programming can be illustrated adequately using a single language (APL in this case).
Since a computer normally handles large volumes of information, a key notion is designation and processing of arrays of information. A one-dimensional array of cells will be called a vector. An example of a vector is

(3)   X←3,29,47.4,82,-977.6

An element or component of a vector will be denoted by a two-part designation. One part is the name of the entire vector; the other, written between brackets, gives the position of the element being referenced. In the above example

(4)   X[2]=29

(assuming element position numbers in X start at 1 from the left). Note also the meaning of a variable index. For example,

(5)   Y←X[I]

means "the content of cell I is used as a position number in X, and the content of the cell so designated in X replaces the content of Y." For example, if X is the vector specified in (3), the sequence

      I←3
      Y←X[I]

results in Y being respecified by the number 47.4. A variable such as X[3] or X[I] is said to be subscripted or indexed; the variable I is called an index variable.
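The indexed-variable idea can be restated in modern notation. In the sketch below, the helper `element` (our name, not the text's) applies the 1-origin numbering of statements (3) to (5) to a Python list, which is 0-origin:

```python
X = [3, 29, 47.4, 82, -977.6]   # the vector of statement (3)

def element(vector, position):
    """1-origin indexing as in the text, so that X[2] = 29."""
    return vector[position - 1]

I = 3                 # the index variable
Y = element(X, I)     # statement (5): the content of I picks the position
print(element(X, 2))  # 29
print(Y)              # 47.4
```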
Index operations are extremely important because they allow us in effect to compute cell names from other cell names or constants. Why is it important to be able to compute names? One reason is that without this facility, it would be necessary to specify each cell explicitly by a unique name. Generating thousands of names would be tedious, and sooner or later we would probably devise a systematic naming procedure similar or identical to the indexed-variable idea. A second reason for the power of indexed variables is that the calculation of names can be included in the program for processing the data, thus greatly shortening the statement of the program but lengthening the time to execute it.

For example, assume that 100 numbers have been entered into storage and called vector X. Two programs are shown in Fig. 1-4 to do the same job: compute S, the sum of the numbers.

      ∇SUM1                        ∇SUM2
      [1]   S←X[1]                 [1] S←0
      [2]   S←S+X[2]               [2] I←1
      [3]   S←S+X[3]               [3] TEST:→(I>100)/0
            ...                    [4] S←S+X[I]
      [100] S←S+X[100]             [5] I←I+1
      ∇                            [6] →TEST
                                   ∇
      (a) Straight line            (b) Loop

FIG. 1-4 Straight-line and loop programming to sum 100 numbers

Figure 1-4a is easy to understand immediately; it is a straight-line program consisting of 100 executed steps written explicitly. Figure 1-4b is a much shorter program because it contains a loop. Note that in Fig. 1-4b, the "guts" of the program is statement 4, which adds the value in the I position of X and S, to produce the new S. This statement will be executed repetitively, as we shall now see, each time with a new value of I. Certainly, line 5 increases I by 1, and line 6 directs the program to line 3, since this is where the statement labeled TEST is found. Line 3 says: "Compare I for greater than 100; if so, branch to line 0, which by convention means exit from the program. Otherwise, continue to the next statement (line 4)." With these rules, it is seen that for the case at hand, lines 4, 5, and 6 will each be executed 100 times and line 3 will be executed 101 times; in other words, lines 3 to 6 constitute a program loop.

Comparing the straight-line and loop programs of Fig. 1-4, we find that the number of written statements is 100 in the first case and only 6 in the second. This advantage of a short written program is somewhat offset by the fact that the loop program requires 403 executed statements compared to only 100 for the straight-line program. The additional executed statements in the loop program are required for index updating and testing.
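The statement counts quoted above can be checked mechanically. The following Python sketch mirrors SUM2 of Fig. 1-4, counting one "executed statement" per APL line; the bookkeeping variable `executed` is of course our addition for the tally:

```python
# Count executed statements in the loop summation of Fig. 1-4b.
X = list(range(1, 101))          # any 100 numbers; here 1..100

executed = 0
S = 0; executed += 1             # [1] S <- 0
I = 1; executed += 1             # [2] I <- 1
while True:
    executed += 1                # [3] TEST: exit if I > 100
    if I > 100:
        break
    S = S + X[I - 1]; executed += 1   # [4] S <- S + X[I]
    I = I + 1; executed += 1          # [5] I <- I + 1
    executed += 1                     # [6] -> TEST
print(S, executed)   # 5050 403
```

Lines [1] and [2] run once, line [3] runs 101 times, and lines [4] to [6] run 100 times each, giving 2 + 101 + 300 = 403, as the text states.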
1-8 PRINCIPLES OF THE SPACE-TIME RELATIONSHIP

The computer designer or user must be aware of some rather fundamental notions of how a computer and a problem can be organized to "trade" space and time. The word "space" will roughly correspond, in the case of machine parts, to "amount of equipment." One simple example of this trade-off idea will now be discussed. Two ways of obtaining the same function are shown in Fig. 1-5; the function is the appearance of six signals, each of which can be either ON (=1) or OFF (=0). The circuit outputs are to appear as 0 except at timing or clock intervals, when the signals appear at the output point(s).

FIG. 1-5 Parallel and serial representation of ON-OFF signals. In (a), the parallel system, each of six input lines feeds an AND circuit gated by the clock pulse, giving six outputs at clock-pulse times. In (b), the serial system, the signals circulate as pulses in a delay line and appear one per clock pulse on a single output. (An AND circuit produces an output of 1 only if both inputs are 1.)

To ensure that the output appears only at clock intervals, each signal and a clock pulse are fed into an AND circuit, which gives a 1 output only when the signal line and clock line are both 1; at other times the output is 0. In part (a) of Fig. 1-5 we see one representation of our set of six signals. Each signal uses its own line; the output appears on six output lines (and requires six AND circuits). In part (b) we see a second possibility: the six signals circulate as pulses in a delay-line structure; the delay in this case is six clock times. Here the signals appear in time sequence on a single wire.

The first circuit is extensive (and expensive) in space but concise (inexpensive, fast) in time. The second circuit has exactly dual properties. Notice also that as the number of signals grows, the parallel circuit grows proportionately but the time to receive all the signals remains the same. The serial circuit, on the other hand, requires no more lines (or AND circuits) to handle more signals, although the delay must increase proportionately.
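The space-time duality of the two circuits can be caricatured in a few lines. The functions below are illustrative stand-ins, not circuit models: each returns the received bits together with the number of clock intervals consumed:

```python
# Parallel vs. serial transmission of six ON-OFF signals, as in the
# Fig. 1-5 discussion: parallel uses six lines and one clock interval;
# serial uses one line and six clock intervals.
signals = [1, 0, 1, 1, 0, 1]

def send_parallel(bits):
    """All bits appear at once, one line per bit: 1 clock interval."""
    return bits[:], 1                      # (received bits, intervals used)

def send_serial(bits):
    """Bits emerge from the delay line one per clock interval
    onto a single wire."""
    line_history = []
    for bit in bits:                       # one bit per interval
        line_history.append(bit)
    return line_history, len(bits)

print(send_parallel(signals))  # ([1, 0, 1, 1, 0, 1], 1)
print(send_serial(signals))    # ([1, 0, 1, 1, 0, 1], 6)
```

Doubling the number of signals doubles the line count (space) of the parallel version but leaves its interval count at 1; the serial version keeps one line but doubles its interval count (time).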
Many of the desirable properties of a computer, especially its reliability, result from its use of simple components in a simple manner. Complex structures and operations are built up by using many simple components and intricate time sequences of the signals they generate or modify. Because of the many devices for processing, control, and particularly storage, great efforts are exerted to obtain economical structures. The time-space relationship discussed above provides one method of reducing cost at the expense of time. This is an example of the idea of time sharing, i.e., using the same equipment (such as the adder circuit) successively in time by routing to it the numbers to be added in time sequence. The routing of information from place to place within the computer is therefore a fundamental operation. The paths provided for routing determine the data-flow structure of the machine, a most important characteristic of any computer.

The time-space relationship may also be illustrated by programming organization. Recall that in the procedures for summing a list of numbers, one can program straight-line, thereby obtaining an expensive space (storage) program but a fast-execution-time program. An alternative is to program the problem utilizing a loop; this results in great storage savings but longer execution time.
program usually gains space by a much greater factor the preferred method for all but the shortest lists. The major point of the above discussions on time-space relationships is a fundamental property of data processing; in any task to be done, there is usually a choice of several solutions, which can be compared, to a first order, by the extent to which they trade space and time. From the brief introduction given in this chapter, some broad properties of computer systems should be discernible. First, a general-purpose computer is In most cases, the loop
than
it
loses speed;
it is
one that can accept a precise stylized description of a procedure, called a program, for solving any problem solvable in a finite number of steps and can then execute the program automatically to process data
made
available to the
machine. or program, is important not only to the users of a computer two reasons, to its designers. (1) Product designers can perform
The algorithm, but
also, for
COMPUTER HISTORY AND CONCEPTS intelligently only if they
grammed.
(2)
understand how the products
The sequences
will
be used,
i.e.,
1-15
pro-
of internal switching operations necessary to
—
implement arithmetic and other operations are also algorithms these are the algorithms which must be specified and implemented by the logical designer. A modern computer has been likened to a grand piano, on which the user can play Beethoven or "Chopsticks." Achieving the most value for an investment in equipment and manpower is a problem in optimizing resources that has
some
of the properties of combinatorial mathematics;
specifications or the criterion of optimization can in
i.e.,
make
a "slight" change in
a very great difference
performance. The general-purpose nature of the computer rarely raises doubt
that "answers" to a well-defined problem can be obtained one
The
central question
is
usually
how
to obtain the
answers
way or another. way that opti-
in a
mizes user convenience, problem-solution time, storage space,
reliability, or
some combination of such parameters. Needless to say, all these factors are interdependent, and some can be improved only at the expense of others. This has already been illustrated in the case of space versus time in the examples given earlier in this chapter. servation" laws relations trade-offs
may
Some
fairly general,
but as yet undiscovered, "con-
relate these parameters; but at this time, the general inter-
can only be discussed qualitatively, although quantitative analysis of is readily possible and should be done in specific cases.
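The straight-line versus loop trade-off described above can be made concrete with a short sketch in Python (the list and its values are our own illustration, not from the text):

```python
# Space-time trade-off: straight-line code vs. a loop for summing a list.
data = [3, 1, 4, 1, 5, 9]

# Straight-line ("unrolled") program: one textual add per element,
# no loop-control overhead, but program size grows with the list.
total_straight = data[0] + data[1] + data[2] + data[3] + data[4] + data[5]

# Looped program: constant program size regardless of list length,
# at the cost of loop bookkeeping on every iteration.
total_loop = 0
for x in data:
    total_loop += x

assert total_straight == total_loop == 23
```

For a list of n numbers the unrolled version needs n - 1 written-out additions, while the loop stays the same size for any n.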
2

Computer Structures

V. Carl Hamacher
Zvonko G. Vranesic
Safwat G. Zaky

2-1 INTRODUCTION
2-2 FUNCTIONAL UNITS
2-3 INPUT UNIT
2-4 MEMORY UNIT
2-5 ARITHMETIC AND LOGIC UNIT
2-6 OUTPUT UNIT
2-7 CONTROL UNIT
2-8 BASIC OPERATIONAL CONCEPTS
2-9 BUS STRUCTURES
2-1 INTRODUCTION

The objective of this chapter is to introduce some basic concepts and associated terminology or jargon. We will give only a broad overview of the fundamental characteristics of computers, leaving the more detailed (and precise) discussion to the subsequent chapters.

Let us first define the meaning of the words "digital computer" or simply "computer," which is often misunderstood, despite the fact that most people take it for granted. In its simplest form, a contemporary computer is a fast electronic calculating machine, which accepts digitized "input" information, processes it according to a "program" stored in its "memory," and produces the resultant "output" information.
Adapted from Computer Organization, by V. Carl Hamacher, Zvonko G. Vranesic, and Safwat G. Zaky. Copyright © 1978. Used by permission of McGraw-Hill, Inc. All rights reserved.
2-2 FUNCTIONAL UNITS

The word computer encompasses a large variety of machines, widely differing in size, speed, and cost. It is fashionable to use more specific words to represent some subclasses of computers. Smaller machines are usually called minicomputers, which is a reflection on their relatively lower cost, size, and computing power. In the early 1970s the term microcomputer was coined to describe a very small computer, low in price, and consisting of only a few large-scale integrated (LSI) circuit packages.

Large computers are quite different from minicomputers and microcomputers in size, processing power, cost, and the complexity and sophistication of their design. Yet the basic concepts are essentially the same for all classes of computers, relying on a few well-defined ideas which we will attempt to explain. Thus the following discussion should be applicable to most general-purpose digital computers.

A computer consists of five functionally independent main parts: input, memory, arithmetic and logic, output, and control units, as indicated in Fig. 2-1.

FIG. 2-1 Basic functional units of a computer

The input unit accepts coded information from the outside world, either from human
operators or from electromechanical devices. The information is either stored in the memory for later reference or immediately handled by the arithmetic and logic circuitry, which performs the desired operations. The processing steps are determined by a "program" stored in the memory. Finally, the results are sent back to the outside world through the output unit. All these actions are coordinated by the control unit. The diagram in Fig. 2-1 does not show the connections between the various functional units. Of course, such connections must exist.

It is customary to refer to the arithmetic and logic circuits in conjunction with the main control circuits as the central processing unit (CPU). Similarly, input and output equipment is combined under the term input-output unit (I/O). This is reasonable in view of the fact that some standard equipment provides both input and output functions. The simplest such example is the often encountered teletypewriter terminal. We must emphasize that input and output functions are separated within the terminal. Thus the computer sees two distinct devices, even though the same human operator associates them as being part of the same unit.

FIG. 2-2 A typical large computer, the IBM S370/158 (IBM Corp. Ltd.)
In large computers the main functional units may comprise a number of separate, and often sizeable, physical parts. Fig. 2-2 is a photograph of such a computer. Minicomputers are much smaller in size. A basic minicomputer is often of desktop dimensions, as illustrated by the two machines in Fig. 2-3. Even a fairly complex minicomputer system, such as the one shown in Fig. 2-4, tends to be small in comparison with large computers.

FIG. 2-3 Two minicomputers, PDP/8M and PDP11/05 (Digital Equipment Corp.)

FIG. 2-4 A minicomputer system (Digital Equipment Corp.)

At this point we should take a closer look at the "information" fed into the computer. It is convenient to consider it as being of two types, namely, instructions and data. Instructions are explicit commands which:

• Govern the transfer of information within the machine, as well as between the machine and I/O devices
• Specify the arithmetic and logic operations to be performed

A set of instructions which perform a task is called a program. The usual mode of operation is to store a program (or several programs) in the memory. Then, the CPU fetches the instructions comprising the program from the memory and performs the desired operations. Instructions are normally executed in the sequential order in which they are stored, although it is possible to have deviations from this order, as in the case where branching is required. Thus the actual behavior of the computer is under the complete control of the stored program, except for the possibility of external interruption by the operator or by digital devices connected to the machine.
Data are numbers and encoded characters which are used as operands by the instructions. This should not be interpreted as a hard definition, since the term is often used to symbolize any digital information. Even within our definition of data, it is quite feasible that an entire program (that is, a set of instructions) may be considered as data if it is to be processed by another program. An example of this is the task of compilation of a high-level language source program into machine instructions and data. The source program is the input data for the compiler program. The compiler translates the source program into a machine language program.

Information handled by the computer must be encoded in a suitable format.
Since most present-day hardware (that is, electronic and electromechanical equipment) employs digital circuits which have only two naturally stable states, namely, ON and OFF, binary coding is used. That is, each number, character of text, or instruction is encoded as a string of binary digits (bits), each having one of two possible values. Numbers are usually represented in the positional binary notation. Occasionally, the binary-coded decimal (BCD) format is employed, where each decimal digit is encoded by 4 bits.

Alphanumeric characters are also expressed in terms of binary codes. Several appropriate coding schemes have been developed. Two of the most widely encountered ones are ASCII (American Standard Code for Information Interchange), where each character is represented as a 7-bit code, and EBCDIC (extended binary-coded decimal interchange code), where 8 bits are used to denote a character.
2-3 INPUT UNIT

Computers accept coded information by means of input devices capable of "reading" such data. The simplest of these units consists of an electric typewriter electronically connected to the processing part of the computer. The typewriter is wired so that whenever a key on its keyboard is depressed, the corresponding letter or digit is automatically translated into its corresponding code, which may then be sent directly to either the memory or the CPU.

A related input device is the teletypewriter, such as the ASR 33 (Automatic Send-Receive) terminal.¹ In addition to its typewriter function, this teletypewriter contains a paper tape reader-punch station. Its low price and sufficient versatility make the teletypewriter one of the most frequently used input (and output) devices.
While typewriters and teletypewriters are unquestionably the simplest I/O devices, they are also the slowest and most awkward to use when dealing with large volumes of data. This necessitated the development of faster equipment, such as high-speed paper tape readers and card readers. A convenient way of preparing a hard copy of a program or data is to punch the coded information on paper cards, divided into columns (usually 80), where each column corresponds to one character. A card reader may then be used to determine the location of the punched holes and thus read the input information. This is a considerably faster process, with typical readers being able to read upward of 1000 cards per minute. Fig. 2-5 shows a photograph of a card reader.

FIG. 2-5 A punched card reader (IBM Corp. Ltd.)

Many other kinds of input devices are available. We should particularly mention graphic input devices, which utilize a cathode-ray tube (CRT) display.

¹ Product of Teletype Corporation.

2-4 MEMORY UNIT

The sole function of the memory unit is to store programs and data. Again, this function can be accomplished with a variety of equipment. It is useful to distinguish between two classes of memory devices, which comprise the primary and secondary storage.

Primary storage, or the main memory, is a fast memory capable of operating at electronic speeds, where programs and data are stored during their execution. It typically consists of either magnetic cores or semiconductor circuits. The former constitute core memories, while the latter are referred to as semiconductor memories.
The main memory contains a large number of storage cells, each capable of storing 1 bit of information. These cells are seldom handled individually. Instead, it is usual to deal with them in groups of fixed size. Such groups are called words. The main memory is organized so that the contents of one word, containing n bits, can be stored or retrieved in one basic operation.

To provide easy access to any word in the main memory, it is useful to associate a distinct name with each word location. These names are numbers that identify successive locations, which are hence called the addresses. A given word is accessed by specifying its address and issuing a control command that starts the storage or retrieval process.
The number of bits in each word is often referred to as the word length of the given computer. Large computers usually have 32 or more bits in a word, minicomputers have between 12 and 24 (a favorite choice is 16), while some microcomputers have only 4 or 8 bits per word. The capacity of the main memory is one of the factors that characterize the size of the computer. Small machines may have only a few thousand words (4096 is a typical minimum), whereas large machines often involve a few million words. Data is usually manipulated within the machine in units of words, multiples of words, or submultiples of words. A typical access to the main memory results in one word of
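As a rough illustration of these figures, the arithmetic can be sketched in Python (the helper function and the sample sizes are ours; 4096 words of 16 bits matches the "typical minimum" quoted above):

```python
# Back-of-the-envelope word-length and capacity arithmetic.

def capacity_bits(num_words, word_length):
    """Total storage, in bits, of a memory with the given word count
    and word length."""
    return num_words * word_length

# A small machine: 4096 words of 16 bits
assert capacity_bits(4096, 16) == 65536

# Largest value representable in one unsigned 16-bit word
assert 2**16 - 1 == 65535
```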
data being read from the memory or written into it.

FIG. 2-6 Magnetic disk storage (IBM Corp. Ltd.)

As mentioned above, programs and data must reside in the main memory during execution. Instructions and data can be written into it or read out under control of the processing unit. It is essential to be able to access any word location within the main memory as quickly as possible. Memories where any location can be reached by specifying its address are called random access memories (RAM). The time required to access one word is called the memory cycle time. This is a fixed time, usually 300 nanoseconds (ns) to 1 microsecond (μs) for most modern computers.

While primary storage is essential, it tends to be expensive. Thus additional, cheaper secondary storage is used when large amounts of data have to be stored, particularly if some of the data need not be accessed very frequently. Indeed, a wide selection of suitable devices is available. These include magnetic disks, drums, and tapes. Figures 2-6 and 2-7 show a bank of disk units and a tape unit, respectively.
2-5 ARITHMETIC AND LOGIC UNIT

Execution of most operations within the computer takes place in the arithmetic and logic unit (ALU). Consider a typical example. Suppose two numbers located in the main memory are to be added. They are brought into the arithmetic unit, where the actual addition is carried out. The sum may then be stored in the memory. Similarly, any other arithmetic or logic operation (for example, multiplication, division, comparison of numbers) is done by bringing the required operands into the ALU, where the necessary operation is performed.

We should point out that not all operands in an ongoing computation reside in the main memory, since the CPU normally contains one or more high-speed storage cells called registers, which may be used for temporary storage of often-used operands. Each such register can store one word of data. Access times to registers are typically 5 to 10 times faster than memory access times.
FIG. 2-7 A magnetic tape unit (IBM Corp. Ltd.)

The control and arithmetic units are usually many times faster in basic cycle time than other devices connected to the computer system. It is thus possible to design relatively complex computer systems containing a number of external devices controlled by a single CPU. These devices can be teletypes, magnetic tape and disk memories, sensors, displays, mechanical controllers, etc. Of course, this is possible only because of the vast difference in speed, enabling the fast CPU to organize and control the activity of many slower devices.
2-6 OUTPUT UNIT

The output unit is the counterpart of the input unit. Its function is to return the processed results to the outside world.

A number of devices provide both an output function and an input function. This is the case with typewriters, teletypewriters, and graphic displays. This dual role of some devices is the reason for combining input and output units under the single name of I/O unit. A photograph of a typical teletypewriter is shown in Fig. 2-8.

Of course, there exist devices used for output only, the most familiar example being the high-speed printer. It is possible to produce printers capable of printing as many as 10,000 lines per minute. These are tremendous speeds in the mechanical sense, but still very slow compared to the electronic speeds of the CPU.

Sometimes it is necessary to produce the output data in some form suitable for later use as input data. Punched cards may be generated with a card punch. Similarly, paper tape punches are available for producing a paper tape output.
FIG. 2-8 A teletypewriter (IBM Corp. Ltd.)

Finally, we should observe that some of the bulk storage devices, used primarily for secondary storage, may also be employed for I/O purposes. As a specific case, consider the magnetic tape. Suppose that a particular job involves gathering data from a set of terminals, which is done over a relatively long period of time. It is likely that such a task can be conveniently and economically handled by a minicomputer. Using a large computer for this purpose would probably be more expensive. However, let us assume that when the data is finally collected, it must be processed in some intricate way that is beyond the capabilities of the minicomputer. A reasonable arrangement is to have the minicomputer write the collected data onto a magnetic tape as part of its output (or storage!) process. The completed tape can be transported to the large computer, which can then input the data from the tape and carry out the actual processing. In this way the large (and expensive) computer is used only where necessary, with a corresponding reduction in the overall cost of processing this particular job.
2-7 CONTROL UNIT

The previously described units provide the necessary tools for storing and processing information. Their operation must be coordinated in some organized way, which is the task of the control unit. It is effectively the nerve center of the whole machine, used to send control signals to all other units.

A line printer will print a line only if it is specifically instructed to do so. This may typically be effected by an appropriate Write instruction executed by the CPU. Processing of this instruction involves the sending of timing signals to and from the printer, which is the function of the control unit. We can say, in general, that I/O transfers are controlled by software instructions which identify the devices involved and the type of transfer. However, the actual timing signals which govern the transfers during execution are generated by the control circuits. Data transfers between the CPU and memory are also controlled by the control unit in a similar fashion.
Conceptually it is reasonable to think of the control unit as a well-defined, physically separable central unit which somehow interacts with the rest of the machine. In practice this is seldom the case. Much of the control circuitry is physically distributed throughout the machine. It is connected by a rather large set of control lines (wires), which carry the signals used for timing and synchronization of events in all units.

An important part of the control unit is a display panel, with switches and light indicators, which enables the operator to see what is happening inside the computer. The panel is particularly useful when something goes wrong in the computing process, as it often does. In such situations the operator can use the panel to discover the difficulty and hopefully remedy it. Certainly, some faults cannot be easily corrected (for example, failure of an electronic component), but many commonly occurring difficulties (minor software problems) can be diagnosed and corrected by the operator.

In summary, the operation of a typical general-purpose computer can be described as follows:

• It accepts information (programs and data) through the input unit and transfers it to the memory.
• Information stored in the memory is fetched, under program control, into the ALU to be processed.
• Processed information leaves the computer through its output unit.
• All activities inside the machine are under the control of the control unit.
2-8 BASIC OPERATIONAL CONCEPTS

In the previous section it was stated that the behavior of the computer is governed by means of instructions. To perform a given task, an appropriate program consisting of a set of instructions is stored in the main memory. Individual instructions are brought from the memory into the CPU, which executes the specified operations. In addition to the instructions, it is necessary to use some data as operands, which are also stored in the memory. A typical instruction may be

Add LOCA,R0

which adds the operand at memory location LOCA to the operand in a register in the CPU called R0, and places the sum into register R0. This instruction requires several steps to be performed. First, the instruction must be transferred from the main memory into the CPU. Then, the operand from LOCA must be fetched. This operand is added to the contents of R0. Finally, the resultant sum
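The steps just listed can be sketched in Python, with the memory and the register file modeled as dictionaries (the operand values are invented for illustration):

```python
# A sketch of the Add LOCA,R0 semantics.

memory = {"LOCA": 42}      # operand at memory location LOCA
registers = {"R0": 8}      # operand already in register R0

def add(location, reg):
    """Add the operand at a memory location to a register,
    placing the sum back into that register (Add LOCA,R0)."""
    operand = memory[location]   # fetch the operand from memory
    registers[reg] += operand    # add and store the resultant sum

add("LOCA", "R0")
assert registers["R0"] == 50
```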
is stored in register R0.

FIG. 2-9 Connections between the CPU and the main memory

Transfers between the main memory and the CPU start by sending the address of the memory location to be accessed to the memory unit and issuing the appropriate control signals. Then data is transferred from or to the memory. Fig. 2-9 shows how the connection between the main memory and the CPU can be made. It also shows a few details of the CPU that have not been discussed yet, but which are operationally essential. The interconnection pattern for these components is not shown explicitly, since at this point we will discuss their functional characteristics only.
The CPU contains the arithmetic and logic circuitry as the main processing element. It also contains a number of registers used for temporary storage of data. Two registers are of particular interest. The instruction register (IR) contains the instruction that is being executed. Its output is available to the control circuits, which generate the timing signals needed to execute the instruction. The program counter (PC) is a register which keeps track of the execution of a program. It contains the memory address of the instruction currently being executed. During the execution of the current instruction, the contents of the PC are updated to correspond to the address of the next instruction to be executed. It is customary to say that the PC points at the instruction that is to be fetched from the memory. Besides the IR and PC there exists at least one other, and usually several other, general-purpose registers.

Finally, there are two registers that facilitate communication with the main memory. These are the memory address register (MAR) and the memory data register (MDR). As the name implies, the MAR is used to hold the address of the location to or from which data is to be transferred. The MDR contains the data to be written into or read out of the addressed location.
Let us now consider some typical operating steps. Programs reside in the main memory and usually get there via the input unit. Execution of a program starts by setting the PC to point at the first instruction of the program. The contents of the PC are transferred to the MAR and a Read control signal is sent to the memory. After a certain elapsed time (corresponding to the memory access time), the addressed word (in this case the first instruction of our program) is read out of the memory and loaded into the MDR. Next, the contents of the MDR are transferred to the IR, at which point the instruction is ready to be decoded and executed.

If the instruction involves an operation to be performed by the ALU, it will be necessary to obtain the required operands. If an operand resides in the memory (it could also be in a general register in the CPU), it will have to be fetched by sending its address to the MAR and initiating a Read cycle. When the operand has been read from the memory into the MDR, it may be transferred from the MDR to the ALU. Having fetched one or more operands in this way, the ALU can perform the desired operation. If the result of this operation is to be stored in the memory, it must be sent to the MDR. The address of the location where the result is to be stored is sent to the MAR and a Write cycle is initiated. In the meantime the contents of the PC are incremented to point at the next instruction to be executed. Thus, as soon as the execution of the current instruction is completed, a new instruction fetch may be started.
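The fetch-execute sequence above can be sketched as a toy simulator in Python. The instruction format, opcodes, and addresses are invented, but the PC, MAR, MDR, and IR play the roles described in the text:

```python
# Toy fetch-decode-execute loop with PC, MAR, MDR, and IR.

memory = {
    0: ("LOAD", 100),    # load the operand at address 100 into the accumulator
    1: ("ADD", 101),     # add the operand at address 101
    2: ("HALT", None),
    100: 7,
    101: 35,
}

pc, acc = 0, 0
running = True
while running:
    mar = pc                 # PC -> MAR, issue a Read
    mdr = memory[mar]        # addressed word appears in the MDR
    ir = mdr                 # MDR -> IR; instruction ready to decode
    pc += 1                  # PC incremented to point at the next instruction
    op, addr = ir
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "HALT":
        running = False

assert acc == 42
```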
In addition to transferring data between the main memory and the CPU, it is necessary to have the ability to accept data from input devices and to send data to output devices. Thus some machine instructions with the capability of handling I/O transfers must be provided.

Normal execution of programs may sometimes be altered. It is often the case that some device requires urgent servicing. For example, a monitoring device in a computer-controlled industrial process may have detected a dangerous condition. To deal with such situations sufficiently quickly, the normal flow of the program that is being executed by the CPU must be interrupted. To achieve this, the device can raise an interrupt signal. An interrupt is a service request, where the service is performed by the CPU by executing a corresponding interrupt-handling program. Since such diversions may alter the internal state of the CPU, it is essential that its state be saved in the main memory before servicing the interrupt. This normally involves storing the contents of the PC, the general registers, and some control information. Upon termination of the interrupt-handling program, the CPU's state is restored so that execution of the interrupted program may continue.
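The save-and-restore discipline can be sketched in Python (the state representation, the saved-state area, and the handler are all invented for illustration):

```python
# Interrupt handling: save the CPU state, run the handler, restore the state.

cpu = {"PC": 17, "R0": 5}      # state of the interrupted program
saved_state_area = {}          # region of main memory used for the save

def service_interrupt(handler):
    saved_state_area.update(cpu)   # save PC and registers in main memory
    handler()                      # execute the interrupt-handling program
    cpu.update(saved_state_area)   # restore state; execution may continue

def handler():
    cpu["PC"] = 900                # the handler runs elsewhere
    cpu["R0"] = 0                  # and freely alters the registers

service_interrupt(handler)
assert cpu == {"PC": 17, "R0": 5}
```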
2-9 BUS STRUCTURES

So far we have discussed the functional characteristics of individual parts that constitute a computer. To form an operational system they must be connected together in some organized way. There are many ways of doing this, and we will consider the three most popular structures.

If a computer is to achieve a reasonable speed of operation, it must be organized in a parallel fashion. This means that all units can handle one full word of data at a given time. It also means that data transfers between units are to be done in parallel, which implies that a considerable number of wires (lines) are needed to establish the necessary connections. A collection of such wires,
which have some common identity, is called a bus. In addition to the wires which carry the data, it is essential to have some lines for control purposes. Thus a bus consists of both data and control lines.

FIG. 2-10 A two-bus structure

Fig. 2-10 shows the simplest form of a two-bus structured computer. The CPU interacts with the memory via a memory bus. Input and output functions are handled by means of an I/O bus, so that data passes through the CPU en route to the memory. In such configurations the I/O transfers are usually under direct control of the CPU. It initiates the transfer and monitors its progress until completion. A commonly used term to describe this type of operation is programmed I/O.

A somewhat different version of a two-bus structure is given in Fig. 2-11. The relative positions of the CPU and memory are reversed. Again, a memory bus exists for communication between them. However, I/O transfers are made directly to or from the memory. Since the memory has little in the way of circuitry capable of controlling such transfers, it is necessary to establish a different control mechanism. A standard technique is to provide I/O channels as part of the I/O equipment, which have the necessary capability to control the transfers. In fact they resemble a small CPU and can often be thought of as computers in their own right. A typical procedure is to have the CPU initiate a transfer by passing the required information to the I/O channel, which then takes over and controls the actual transfer.

We have already mentioned that a bus consists of a collection of distinct lines, serving different purposes. While at this point it is not necessary to get into the details, it is useful to note that the memory bus in the above diagram contains a data bus and an address bus. The data bus is used for transmission of data. Hence its number of lines corresponds to the number of bits in the word. To access data in the memory it is necessary to issue an address to indicate its location. The CPU sends address bits to the memory via the address bus.

FIG. 2-11 An alternative two-bus structure

The above descriptions are representative of most computers. Fig. 2-11 usually implies a large computer. Many machines have several distinct buses, so
that one could in fact treat them as multibus machines. However, their operation is adequately represented by the two-bus examples, since the main reason for inclusion of additional buses is to improve the operating speed through further parallelism.

FIG. 2-12 Single-bus structure
A significantly different structure, which has a single bus, is shown in Fig. 2-12. All units are connected to this bus, so that it provides the sole means of interaction. Since the bus can be used for only one transfer at a time, it follows that only two units can be actively using the bus at any given instant. The bus is likely to consist of the data bus, the address bus, and some control lines. The main virtue of the single-bus structure is its low cost and flexibility for attaching peripheral devices. The trade-off is lower operating speed. It is not surprising that a single-bus structure is primarily found in small machines, namely, minicomputers and microcomputers.

Differences in bus structure have a pronounced effect on the performance of computers. Yet from the conceptual point of view (at least at this level of detail) they are not crucial in any functional description. Indeed, the fundamental principles of computer operation are essentially independent of the particular bus structure.
Transfer of information on the bus can seldom be done at a speed directly comparable to the operating speed of all the devices connected to the bus. Some electromechanical devices are relatively slow, for example, teletypewriters, card readers, and printers. Others, such as disks and tapes, are considerably faster. Main memory and the CPU operate at electronic speeds, making them the fastest part of the computer. Since all these devices must communicate with each other via the bus, it is necessary to provide an efficient transfer mechanism which is not constrained by the slow devices.

A common approach is to include buffer registers with the devices to hold the information during transfers. To illustrate this technique, consider the transfer of an encoded character from the CPU to a teletypewriter where it is to be printed. The CPU effects the transfer by sending the character via the bus to the teletypewriter output buffer. Since the buffer is an electronic register, this transfer requires relatively little time. Once the buffer is loaded, the teletypewriter can start printing without further intervention by the CPU. At this time the bus is no longer needed and can be released for use by other devices. The teletypewriter proceeds with the printing of the character in its buffer and is not available for further transfers until this process is completed.
Number Systems and Codes
Zvi Kohavi

3-1 NUMBER SYSTEMS
3-2 BINARY CODES
3-3 ERROR DETECTION AND CORRECTION
3-1 NUMBER SYSTEMS

Convenient as the decimal number system generally is, its usefulness in machine computation is limited because of the nature of practical electronic devices. In most present digital machines the numbers are represented, and the arithmetic operations performed, in a different number system, called the binary number system. This section is concerned with the representation of numbers in various systems and with methods of conversion from one system to another.
Number Representation

An ordinary decimal number actually represents a polynomial in powers of 10. For example, the number 123.45 represents the polynomial

123.45 = 1 · 10² + 2 · 10¹ + 3 · 10⁰ + 4 · 10⁻¹ + 5 · 10⁻²

This method of representing decimal numbers is known as the decimal number system, and the number 10 is referred to as the base (or radix) of the system. In a system whose base is b, a positive number N represents the polynomial

N = a_{q-1}b^{q-1} + ··· + a₀b⁰ + ··· + a_{-p}b^{-p} = Σ aᵢbⁱ,  where the sum runs from i = -p to q-1
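As a quick illustration of the polynomial definition, the sketch below (the function names are my own, not from the text) evaluates the digit polynomial of an integer written in an arbitrary base b and recovers the digits of a positive integer:

```python
def digits_to_int(digits, b):
    """Evaluate a[q-1]*b**(q-1) + ... + a0*b**0 for an integer whose
    digits are given most significant first (Horner's rule form)."""
    n = 0
    for a in digits:
        n = n * b + a
    return n

def int_to_digits(n, b):
    """Recover the base-b digits, most significant first, of n >= 0."""
    digits = []
    while n > 0:
        digits.append(n % b)   # next coefficient a_i
        n //= b
    return digits[::-1] or [0]

print(digits_to_int([1, 2, 3], 10))   # 123
print(int_to_digits(123, 2))          # [1, 1, 1, 1, 0, 1, 1]
```

Converting back and forth through the polynomial is exactly the base-conversion procedure described in the text.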
Adapted from Switching and Finite Automata Theory, 2d ed., by Zvi Kohavi. Copyright © 1978, 1970. Used by permission of McGraw-Hill, Inc. All rights reserved.
THE McGRAW-HILL COMPUTER HANDBOOK

where the base b is an integer greater than 1 and each coefficient aᵢ takes an integer value between 0 and b - 1.

The AND operation can be generalized to n variables: x₁ · x₂ · ··· · xₙ is logic-1 if and only if all of the variables are logic-1. As indicated above, the dot product symbol or juxtaposition will be used to denote the AND operation. Frequently in literature, however, the AND operation is denoted by the symbol ∧. The AND operation between the two variables x and y is then written x ∧ y.

The next Boolean operation to be introduced is the OR operation. This operation is denoted by a plus sign (+). Thus, the OR operation between the two variables x and y is written as x + y. This operation is often referred to as logical addition. The postulates for the OR operation are given in Table 4-2. From this table it can be seen that the value of x + y has the value of logic-0 if and only if both x and y are logic-0; otherwise, x + y is logic-1. This operation can also be generalized for the case of n variables. Thus, x₁ + x₂ + ··· + xₙ is logic-1 if at least one of the variables is logic-1; otherwise, x₁ + x₂ + ··· + xₙ is logic-0.
Although the plus sign will always be used to indicate the OR operation in this book, the symbol ∨ frequently appears in computer literature. In this case the OR operation between the two variables x and y is written as x ∨ y.
The final operation to be introduced at this time is the NOT operation. This operation is also known as complementation, negation, and inversion. An overbar ( ¯ ) will be used to denote the NOT operation. Thus, the negation of the single variable x is written x̄. The prime symbol (′) is also used to indicate the NOT operation in computer literature. In this case the complementation of x is written as x′.

TABLE 4-3  Definition of the NOT Operation

x    x̄
0    1
1    0

As indicated in Table 4-3, the postulates of the NOT operation are 0̄ = 1 and 1̄ = 0 or, equivalently, x̄ = 1 if x = 0 and x̄ = 0 if x = 1.

The two-valued Boolean algebra can now be defined as a mathematical system with the elements logic-0 and logic-1 and the three operations AND, OR, and NOT, whose postulates are given by Tables 4-1 to 4-3.
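The three operations and their postulates are easy to tabulate mechanically. A minimal sketch (the definitions below are mine, chosen to mirror Tables 4-1 to 4-3):

```python
AND = lambda x, y: x & y   # logic-1 only when both inputs are logic-1
OR  = lambda x, y: x | y   # logic-0 only when both inputs are logic-0
NOT = lambda x: 1 - x      # complementation: 0 -> 1, 1 -> 0

# enumerate every combination of logic values, as the postulate tables do
for x in (0, 1):
    for y in (0, 1):
        print(x, y, AND(x, y), OR(x, y))
print(NOT(0), NOT(1))   # 1 0
```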
4-3 TRUTH TABLES AND BOOLEAN EXPRESSIONS

Now that the constituents of a Boolean algebra have been defined, it is next necessary to show how they are used. The object of a Boolean algebra is to describe the behavior and structure of a logic network. Fig. 4-1 shows a logic network as a black box. The inputs are the Boolean variables x₁, x₂, ..., xₙ, and the output is f. To describe the terminal behavior of the black box, it is necessary to express the output f as a function of the input variables x₁, x₂, ..., xₙ. This can be done by using a truth table (or table of combinations) or by using Boolean expressions.

FIG. 4-1  The logic network as a black box

Logic networks that are readily described by truth tables or Boolean expressions are said to be combinational networks. A combinational network is one in which the values of the input variables at any instant determine the values of the output variables. A second class of logic networks is that in which there is an internal memory. Such networks are said to be sequential and have the property that the past as well as the present input values determine the output values from the network. This chapter will concentrate on combinational networks.
As indicated earlier, each of the Boolean variables x₁, x₂, ..., xₙ is restricted to the two values logic-0 and logic-1. Furthermore, all points within the black box, including the output line, are also restricted to these values. A tabulation of all the possible input combinations of values and their corresponding output values, i.e., functional values, is known as a truth table (or table of combinations). If there are n input variables and one functional output, this table will consist of 2ⁿ rows and n + 1 columns. The general form of a truth table is shown in Table 4-4. It should be noted that a simple way of including all possible input values in a truth table is to count in the binary number system from 0 to 2ⁿ - 1. The value of f will, of course, be 0 or 1 in each row, depending upon the specific function.

The second method of describing the terminal behavior of a combinational logic network uses a Boolean expression. This is a formula consisting of Boolean constants and variables connected by the Boolean operators AND, OR, and NOT. Parentheses may be used to indicate the order in which the operations are performed.

TABLE 4-4  The Truth Table
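The observation that the rows of a truth table are simply the binary count from 0 to 2ⁿ - 1 gives a direct way to generate the table. A sketch (the helper name is my own):

```python
from itertools import product

def truth_table(f, n):
    """List the 2**n rows of a truth table for an n-input Boolean function.
    Walking the rows is exactly counting in binary from 0 to 2**n - 1."""
    return [(row, f(*row)) for row in product((0, 1), repeat=n)]

# hypothetical example function: f(x, y) = x OR y
for row, value in truth_table(lambda x, y: x | y, 2):
    print(row, value)
```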
consider the expression

x + x̄z + x̄ȳz

The first term is missing the y and z variables. They can be introduced by ANDing the term with (y + ȳ)(z + z̄), which is equivalent to ANDing x with logic-1. By similar reasoning, the variable y can be introduced into the second term x̄z by ANDing it with y + ȳ. Finally, the last term is a minterm since all three variables appear. Combining our results, we can rewrite the given expression as

x(y + ȳ)(z + z̄) + x̄(y + ȳ)z + x̄ȳz

If the distributive law is now applied to this expression and duplicate terms are dropped when they appear, the minterm canonical form will result. In this case we have

xyz + xyz̄ + xȳz + xȳz̄ + x̄yz + x̄ȳz
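The same kind of minterm canonical form can also be reached by brute force from the truth table instead of by algebraic expansion. In the sketch below (the helper name is mine, and primes stand in for overbars), a hypothetical function f = x + x′z is expanded:

```python
from itertools import product

def minterm_canonical(f, names):
    """One product term per truth-table row whose functional value is
    logic-1: a variable appears unprimed for a 1, primed for a 0."""
    terms = []
    for row in product((0, 1), repeat=len(names)):
        if f(*row):
            terms.append("".join(v if bit else v + "'"
                                 for v, bit in zip(names, row)))
    return " + ".join(terms)

# hypothetical example: f(x, y, z) = x + x'z
f = lambda x, y, z: x | ((1 - x) & z)
print(minterm_canonical(f, "xyz"))
# x'y'z + x'yz + xy'z' + xy'z + xyz' + xyz
```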
The Maxterm Canonical Form

A canonical expression for a function is one that is unique and has a standard form. It can therefore be of value in determining the equivalence of functions. That is, two functions are equivalent if their canonical expressions are the same. The minterm canonical form consists of a sum of product terms in which every variable appears within each product term. Another standard formula in Boolean algebra is known as the maxterm canonical form or standard product-of-sums. As in the case of the minterm canonical form, the maxterm canonical form can be obtained from the truth table or by expanding a given Boolean expression.
Again consider Table 4-6. This truth table denotes a Boolean function f. The truth table for the complement of this function, i.e., f̄, is constructed by complementing each of the values in the last column, i.e., the functional values. The resulting truth table is shown in Table 4-9. Using the procedure of Sec. 4-3, we can now write the minterm canonical form for the complementary function f̄ as

f̄(x₁, x₂, x₃) = x̄₁x̄₂x̄₃ + x̄₁x₂x̄₃ + x₁x̄₂x₃ + x₁x₂x̄₃ + x₁x₂x₃
If both sides of the above equation are complemented with the use of DeMorgan's law, an equation for the function f will result:

f(x₁, x₂, x₃) = (x̄₁x̄₂x̄₃ + x̄₁x₂x̄₃ + x₁x̄₂x₃ + x₁x₂x̄₃ + x₁x₂x₃)′
             = (x̄₁x̄₂x̄₃)′(x̄₁x₂x̄₃)′(x₁x̄₂x₃)′(x₁x₂x̄₃)′(x₁x₂x₃)′
             = (x₁ + x₂ + x₃)(x₁ + x̄₂ + x₃)(x̄₁ + x₂ + x̄₃)(x̄₁ + x̄₂ + x₃)(x̄₁ + x̄₂ + x̄₃)

This expression is the maxterm canonical form for the function f.
The maxterm canonical form or standard product-of-sums is characterized as a product of sum terms in which every variable of the function appears exactly once, either complemented or uncomplemented, in each sum term. The sum terms that comprise the expression are called maxterms.

TABLE 4-9  The Truth Table for the Complement of the Function Given in Table 4-6

x₁  x₂  x₃    f̄
0   0   0     1
0   0   1     0
0   1   0     1
0   1   1     0
1   0   0     0
1   0   1     1
1   1   0     1
1   1   1     1

In general, to obtain the maxterm canonical form from a truth table, the truth table of the complementary function is first written by changing each logic-1 functional value to logic-0 and vice versa. The minterm canonical form is then written for the complementary function. Finally, the resulting expression is complemented by DeMorgan's law to obtain the maxterm canonical form.
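The truth-table procedure just described, complement, take the minterm form, then apply DeMorgan's law, can be sketched directly (the helper name is mine; primes stand in for overbars):

```python
from itertools import product

def maxterm_canonical(f, names):
    """Minterm form of the complement, then DeMorgan's law: one sum
    term per truth-table row whose functional value is logic-0."""
    terms = []
    for row in product((0, 1), repeat=len(names)):
        if not f(*row):
            # DeMorgan complements every literal of the complement's minterm
            terms.append("(" + " + ".join(v if not bit else v + "'"
                                          for v, bit in zip(names, row)) + ")")
    return "".join(terms)

# hypothetical two-variable example: f = x AND y has three 0-rows
print(maxterm_canonical(lambda x, y: x & y, "xy"))   # (x + y)(x + y')(x' + y)
```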
The maxterm canonical form can also be arrived at algebraically if a Boolean expression is given. In this process, use is made of the theorem xx̄ = 0 and the distributive law x + yz = (x + y)(x + z). To illustrate the procedure, consider the expression

xy + ȳz

Since the maxterm canonical form consists of a product of sum terms, it is first necessary to rewrite the expression in this general form. This rewriting can be done by use of the distributive law. In this case,

xy + ȳz = (xy + ȳ)(xy + z)
        = (x + ȳ)(y + ȳ)(x + z)(y + z)
        = (x + ȳ) · 1 · (x + z)(y + z)
        = (x + ȳ)(x + z)(y + z)
Once an expression is obtained that consists of only a product of sum terms, it is next necessary to determine whether each sum term is a maxterm. If not, we can introduce the appropriate variables by using the theorem xx̄ = 0 and the distributive law. Thus, for the above example, we get

(x + ȳ)(x + z)(y + z) = (x + ȳ + 0)(x + 0 + z)(0 + y + z)
                      = (x + ȳ + zz̄)(x + yȳ + z)(xx̄ + y + z)
BOOLEAN ALGEBRA AND LOGIC NETWORKS

Finally, the distributive law is applied and duplicate terms are removed by the idempotent law. Thus, we have

(x + ȳ)(x + z)(y + z) = (x + ȳ + z)(x + ȳ + z̄)(x + y + z)(x + ȳ + z)(x + y + z)(x̄ + y + z)
                      = (x + y + z)(x + ȳ + z)(x + ȳ + z̄)(x̄ + y + z)
4-6 THE KARNAUGH MAP METHOD OF BOOLEAN SIMPLIFICATION

In the previous section it was stated that the Boolean algebra theorems provide a means for the manipulation of Boolean expressions. Since the expressions resulting from such manipulation are equivalent, the combinational logic networks that they describe will be equivalent. It is therefore of interest to determine what is, in some sense, the "simplest" expression. Unfortunately, such an expression may be difficult to determine by algebraic manipulations. Several methods have been developed for deriving simple expressions. One such method, utilizing Karnaugh maps, will be presented in this section.

Karnaugh Maps

A Karnaugh map is a graphic representation of a truth table. The structure of the Karnaugh maps for two-, three-, and four-variable functions is shown in Figs. 4-2 to 4-4 along with the general form of the corresponding truth tables.
FIG. 4-2  A two-variable Boolean function (a) Truth table (b) Karnaugh map

It can be seen that for each row of a truth table, there is one cell in a Karnaugh map, and vice versa. Each cell in a map is located by a coordinate system according to its axis labelings, and the entry in the cell is the value of the function for the corresponding assignment of values associated with the cell. Fig. 4-5 gives the truth table and Karnaugh map for the particular Boolean function

f(x, y, z) = x(y + z̄) + x̄z

The truth table is arrived at by evaluating the expression for the eight combinations of values as described in Sec. 4-3, and the Karnaugh map is then constructed as indicated by the general form shown in Fig. 4-3.
When Karnaugh maps are used for simplifying Boolean expressions, rectangular groupings of cells are formed. In general, every 2ᵃ × 2ᵇ rectangular grouping of cells corresponds to a product term with n - a - b variables, where n is the total number of variables associated with the map and a and b are nonnegative integers. Since the dimensions of these groupings are 2ᵃ × 2ᵇ, it follows that the total number of cells in a grouping must always be a power of 2. All future references to groupings will pertain only to those whose dimensions are 2ᵃ × 2ᵇ.

FIG. 4-3  A three-variable Boolean function (a) Truth table (b) Karnaugh map

FIG. 4-4  A four-variable Boolean function (a) Truth table (b) Karnaugh map

FIG. 4-5  The Boolean function f(x, y, z) = x(y + z̄) + x̄z (a) Truth table (b) Karnaugh map

Minimal Sums
One method of obtaining a Boolean expression from a Karnaugh map is to consider only those cells that have a logic-1 entry. These are called 1-cells. They correspond to the minterms of the canonical expression. Every 2ᵃ × 2ᵇ grouping of 1-cells will correspond to a product term that can be used in describing part of the function. If a sufficient number of groupings are selected such that every 1-cell appears in at least one grouping, the ORing of these product terms will completely describe the function. By a judicious selection of groupings, simple Boolean expressions can be obtained. One measure of the degree of simplicity of a Boolean expression is a count of the number of occurrences of letters, i.e., variables and their complements, called literals, in the expression. Expressions consisting of a sum of product terms and having a minimum number of literals are called minimal sums.

There are two guidelines for a judicious selection of groupings that will enable a minimal sum to be written. First, the groupings should be as large as possible. This guideline follows from the fact that the larger the grouping, the fewer will be the number of literals in its corresponding product term. Second, a minimum number of groupings should be used. This guideline stems from the fact that each grouping corresponds to a product term. By using a minimum number of groupings the number of product terms, and consequently the number of literals in the expression, can be kept to a minimum.

In Fig. 4-6 a four-variable Karnaugh map and the optimal groupings of 1-cells are shown. No larger groupings are possible on this map. Also, no fewer than three groupings will encompass all the 1-cells.
The columnar grouping corresponds to the rectangle with dimensions 2² × 2⁰ = 4 × 1, the square grouping has dimensions 2¹ × 2¹ = 2 × 2, and the small grouping of two cells has dimensions 2⁰ × 2¹ = 1 × 2. It should be noted that the rectangular groupings may overlap.

FIG. 4-6  Groupings on a four-variable Karnaugh map

In order to write the Boolean expression from a Karnaugh map, reference must be made to the labels along the map's axes. It is necessary to determine which axis variables do not change value within each grouping. Those variables whose values are the same for each cell in the grouping will appear in the product term. A variable will be complemented if its value is always logic-0 in the grouping and will be uncomplemented if its value is always logic-1.
To illustrate the writing of a Boolean expression, again consider Fig. 4-6. Referring to the square grouping, we can see that the grouping appears in the first and second rows of the map. In these rows the variable w has the value of logic-0. Thus, the product term for this grouping must contain w̄. Furthermore, since the x variable changes value in these two rows, this variable will not appear in the product term. When we now consider the two columns that contain the grouping, the y variable has the same value in these two columns, i.e., logic-1, and hence, the literal y must appear in the product term. Finally, we can see that the z variable changes value in these two columns and, hence, will not appear in the product term. Combining the results, we find that the square grouping corresponds to the product term w̄y.

If this procedure is applied to the remaining two groupings in Fig. 4-6, their corresponding product terms can be determined. The columnar grouping corresponds to the term yz, since the variables y and z both have the value logic-1 associated with every cell in this grouping. Furthermore, since no row variables have the same logic value for every cell of the grouping, neither the w nor x variables appear in the product term. In a similar manner, the two-cell grouping corresponds to the product term wxȳ. Thus, the minimal sum for this Karnaugh map is given by the expression

f(w, x, y, z) = w̄y + yz + wxȳ
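The reading rule for a grouping, drop the variables that change value and keep the constant ones with the matching polarity, can be sketched as follows (the function name and sample grouping are hypothetical; primes stand in for overbars):

```python
def grouping_term(cells, names):
    """Product term for a rectangular grouping, given its cells as
    tuples of variable values. A variable that changes value inside
    the grouping drops out; a constant one keeps its polarity."""
    term = ""
    for i, v in enumerate(names):
        values = {cell[i] for cell in cells}
        if values == {1}:
            term += v          # always logic-1: unprimed literal
        elif values == {0}:
            term += v + "'"    # always logic-0: primed literal
    return term

# a hypothetical 2x2 grouping on a (w, x, y, z) map: w = 0 and y = 1 throughout
square = [(0, 0, 1, 0), (0, 0, 1, 1), (0, 1, 1, 0), (0, 1, 1, 1)]
print(grouping_term(square, "wxyz"))   # w'y
```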
Although the three- and four-variable Karnaugh maps are normally drawn as the two-dimensional configurations shown in Figs. 4-3 and 4-4, from the point of view of the permissible rectangular groupings that can be formed, it is necessary to regard them as three-dimensional configurations. For the three-variable map of Fig. 4-3, it is necessary to regard the left and right edges of the map as being connected, thus forming a cylinder. It is on the surface of this cylinder that the rectangular groupings are formed. Hence, rectangular groupings may appear split when drawn. Figure 4-7 shows a split rectangular grouping. The corresponding product term is obtained as explained previously and is xz̄ for the case shown in Fig. 4-7.

FIG. 4-7  Split grouping on a three-variable Karnaugh map
Split rectangular groupings can also appear on four-variable maps. In general, the left and right edges of a four-variable map are connected as well as the top and bottom edges. Thus, the four-variable map of Fig. 4-4 should be regarded as appearing on the surface of a toroid. Fig. 4-8 shows some examples of split rectangular groupings on a four-variable map. In Fig. 4-8a the grouping of the four cells corresponds to the term xz̄ and the grouping of the two cells corresponds to x̄yz̄. Special attention should be paid to the grouping illustrated in Fig. 4-8b. The four corners form a 2¹ × 2¹ rectangular grouping if the map is visualized as being a toroid. The corresponding product term is x̄z̄.
FIG. 4-8  Examples of split groupings on a four-variable Karnaugh map

In summary, the basic approach to determining the optimal groupings on a Karnaugh map leading to a minimal sum is as follows. First a 1-cell is selected that can be placed in only one grouping that is not a subgrouping of some larger grouping. The largest grouping containing this 1-cell is then formed. Next, another 1-cell with the above property, not already grouped, is selected and its grouping formed. This process is repeated until all the 1-cells are in some grouping or there remain ungrouped 1-cells that can be grouped in more than one way. At this point, a minimum number of additional groupings are formed to account for the remaining 1-cells. The following examples illustrate this procedure for obtaining minimal sums from Karnaugh maps.
Example 4.5  Consider the Karnaugh map shown in Fig. 4-9. The 1-cell in the upper right-hand corner can be grouped with the 1-cells in the other three corners. Furthermore, this 1-cell can appear in no other groupings that are not subgroupings of these four cells. Thus, the term x̄z̄ must appear in the minimal sum. Next, it is noted that the 1-cell in the first row, second column, can be placed in a grouping of four cells to yield the term x̄ȳ. Finally, the remaining 1-cell still is not in a grouping. It can be grouped with the cell just below it to produce the term wyz̄. The minimal sum is

f(w, x, y, z) = x̄z̄ + x̄ȳ + wyz̄

which consists of seven literals.
FIG. 4-9  Example 4.5

FIG. 4-10  Example 4.6

Example 4.6  Consider the Karnaugh map shown in Fig. 4-10. The 1-cell in the upper left-hand corner can be grouped only with the 1-cell next to it.
Similarly, the 1-cell in the lower right-hand corner can be grouped only with the 1-cell above it. The 1-cell in the second row, third column, can be grouped only by itself. At this point there still remain three 1-cells that have not been placed in some grouping. It should be noticed that these 1-cells, unlike the other cases, can be placed into more than one grouping. To complete the process, a minimum number of groupings must be selected to account for these remaining 1-cells. The groupings shown on the map correspond to the minimal sum

f(w, x, y, z) = w̄x̄ȳ + wyz̄ + w̄xyz + wxȳ + wȳz

which consists of 16 literals. There are two other equally good minimal sums that could have been formed:

f(w, x, y, z) = w̄x̄ȳ + wyz̄ + w̄xyz + wȳz + wxz̄

and

f(w, x, y, z) = w̄x̄ȳ + wyz̄ + w̄xyz + wxȳ + x̄ȳz

It can be seen from this example that more than one minimal sum can exist for a given function.
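Whether two candidate minimal sums really describe the same function is easy to confirm exhaustively. A sketch with a hypothetical pair of equivalent covers (not the ones from the example above):

```python
from itertools import product

def same_function(f, g, n):
    """Exhaustively compare two n-variable Boolean functions."""
    return all(f(*row) == g(*row) for row in product((0, 1), repeat=n))

# two hypothetical covers of one function of (x, y, z)
f1 = lambda x, y, z: (x & y) | ((1 - y) & z)
f2 = lambda x, y, z: (x & y) | (x & z) | ((1 - y) & z)
print(same_function(f1, f2, 3))   # True: xz is a redundant (consensus) term
```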
Minimal Products

Thus far it has been shown how a minimal sum can be obtained from a Karnaugh map. Karnaugh maps can also be used to construct minimal expressions, as measured by a literal count, consisting of a product of sum terms. These expressions are called minimal products.

To obtain a minimal product, attention is given to those cells in the Karnaugh map that contain a logic-0. These are called 0-cells. In this case a minimal sum is written for the complement of a given function by including every 0-cell, and only 0-cells, in at least one grouping while satisfying the requirements of using the largest and the fewest groupings possible. Again, the three-dimensional nature of the maps must be kept in mind. Then, DeMorgan's law is applied to the complement of the expression. This results in an expression for the Karnaugh map (and, hence, the truth table). Furthermore, it consists of a product of sum terms and a minimum number of literals.
Example 4.7  Consider the function given in Example 4.5, whose Karnaugh map is given in Fig. 4-9. The map is redrawn in Fig. 4-11, where the 0-cells are grouped to form a minimal sum for the complement of the function:

f̄(w, x, y, z) = yz + w̄x + xȳ

By applying DeMorgan's law, we obtain the minimal product

f(w, x, y, z) = (ȳ + z̄)(w + x̄)(x̄ + y)

which consists of six literals. In this case the minimal product of the function has fewer literals than its minimal sum.
FIG. 4-11  Example 4.7

FIG. 4-12  Example 4.8

Example 4.8  Consider the function in Example 4.6, whose Karnaugh map is shown in Fig. 4-10 and is redrawn in Fig. 4-12. By grouping the 0-cells, there are three minimal products that can be formed. The minimal product corresponding to the groupings shown in Fig. 4-12 is

f(w, x, y, z) = (w + x̄ + y)(w̄ + ȳ + z̄)(w̄ + x + y + z)(w + x + ȳ)(w + x̄ + z)

The two other minimal products are

f(w, x, y, z) = (w + x̄ + y)(w̄ + ȳ + z̄)(w̄ + x + y + z)(w + x + ȳ)(w + ȳ + z)

and

f(w, x, y, z) = (w + x̄ + y)(w̄ + ȳ + z̄)(w̄ + x + y + z)(x + ȳ + z̄)(w + ȳ + z)

In each of these expressions, 16 literals appear. Hence, the same number of literals appear in the minimal product descriptions of this function as in its minimal sum descriptions.
Don't-Care Conditions

Before we close this discussion on Karnaugh maps, one more situation must be considered. It should be recalled that Boolean expressions are used to describe the behavior and structure of logic networks. Each row of a truth table (or cell of a Karnaugh map) corresponds to the response (i.e., output) of the network as a result of a combination of logic values on its input terminals (i.e., the values of the input variables). Occasionally, a certain input combination is known never to occur, or if it does occur, the network response is not pertinent. In such cases, it is not necessary to specify the response of the network (i.e., the functional value in the truth table). These situations are known as don't-care conditions. When don't-care conditions exist, minimal sums and products can still be obtained with Karnaugh maps.

Don't-care conditions are indicated on the Karnaugh maps by dash entries. To obtain a minimal sum or product, the cells with dash entries, called don't-care cells, may be used optionally when grouping the 1-cells or the 0-cells. Any of the don't-care cells can be used in order to form the best possible groupings. Furthermore, it is not necessary that they be used at all or that they be used only for one particular type of grouping.
Figure 4-13 shows a Karnaugh map with don't-care conditions. The map of Fig. 4-13a can be used to obtain a minimal sum

f(w, x, y, z) = ȳz̄ + wx̄y

while the map of Fig. 4-13b can be used to obtain a minimal product

f(w, x, y, z) = (ȳ + z)(x̄ + z̄)(w + ȳ)

FIG. 4-13  Karnaugh maps involving don't-care conditions

It should be noted that the don't-care cell corresponding to the values w = 1, x = 0, y = 1, and z = 0 is used for both a minimal sum and a minimal product, while the cell corresponding to the values w = 0, x = 0, y = 0, and z = 1 is not used at all.
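A cover chosen with the help of don't-cares need only agree with the specification on the cells that are actually specified; the unspecified rows are free. A small validity check (the names and the specification are hypothetical):

```python
def covers_spec(f, spec):
    """spec maps input rows to required values 0 or 1;
    rows absent from spec are don't-care conditions."""
    return all(f(*row) == value for row, value in spec.items())

# hypothetical 2-variable specification: only two of the four rows are specified
spec = {(1, 1): 1, (0, 0): 0}
print(covers_spec(lambda x, y: x, spec))       # True
print(covers_spec(lambda x, y: x & y, spec))   # True: both realizations fit
```

Two quite different realizations can both satisfy the same incompletely specified function, which is exactly what the don't-care freedom buys.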
Although the Karnaugh map method can be extended to more than four variables, the maps get increasingly difficult to analyze. To handle these larger problems, computer techniques have been developed.
4-7 LOGIC NETWORKS

Boolean algebra serves to describe the logical aspects of the behavior and structure of logic networks. Thus far we have considered only its behavioral descriptive properties. That is, the algebraic expression or the truth table provides a mechanism for describing the output logic value of a network in terms of the logic values on its input lines. However, Boolean algebra expressions can also provide an indication of the structure of a logic network.

The Boolean algebra, as described in the preceding sections, includes the three logic operators: AND, OR, and NOT. If there are circuits whose terminal logic properties in some sense correspond to these three operators, then the interconnection of such circuits, as indicated by a Boolean expression, will provide a logic network. Furthermore, the terminal logic behavior of this network will be described by the expression. In the next chapter it will be seen that such circuits exist and are called gates. Of course, electrical signals really appear at the terminals of the gates. However, if these signals are classified as two-valued, then logic-0 can be associated with one of the signal values and logic-1 with the other. In this way, the actual signal values can be disregarded at the terminals of the gate circuits, and the logic values themselves can be assumed to appear.

The gate symbols for the three Boolean operations introduced thus far are shown in Fig. 4-14. Inasmuch as these symbols denote the Boolean operators, the terminal characteristics for these gates are described by the definitions previously stated in Tables 4-1 to 4-3. That is, the output from the AND gate will be logic-1 if and only if all its inputs are logic-1; the output from the OR gate will be logic-1 if and only if at least one of its inputs is logic-1; and the output from the NOT gate will be logic-1 if and only if its input is logic-0. NOT gates are also commonly called inverters.

FIG. 4-14  Gate symbols (a) AND gate (b) OR gate (c) NOT gate (or inverter)
called inverters.
A
drawing that depicts the interconnection of the logic elements is called a logic diagram. In general, when a logic diagram consists only of gate elements with no feedback lines around them, the diagram tional network.
A
combinational network
is
is
said to be of a
one that has no
combina-
memory
property
and, thus, one in which the inputs to the network alone determine the outputs
from the network. There is a correspondence between the
logic
diagram of a combinational
net-
BOOLEAN ALGEBRA AND LOGIC NETWORKS
4-21
L_i>^^ FIG. 4-15
Logic diagram whose terminal behavior
the Boolean expression f(w,x,y,z)
=
w(xyz
+
is
described by
yz)
There is a correspondence between the logic diagram of a combinational network and a Boolean expression. Hence, Boolean expressions serve as descriptions of combinational networks. As an example, consider the logic diagram shown in Fig. 4-15. The two NOT gates are used to generate ȳ and z̄. The output from the upper-left-hand AND gate is described by xȳz, and the output from the lower-left-hand AND gate is given by yz̄. These two outputs serve as inputs to the OR gate. Thus, the output from the OR gate is described by xȳz + yz̄. Finally, the output from the OR gate enters the remaining AND gate along with a w input. Hence, the logic diagram of Fig. 4-15 is described by the equation

f(w, x, y, z) = w(xȳz + yz̄)

Clearly, it is just as easy to reverse the above process. That is, from a given Boolean expression, it is a simple matter to construct a corresponding logic diagram.

In order that the gate symbols can all be kept the same size and in order to prevent the crowding of several inputs to a single gate, the generalized symbols shown in Fig. 4-16 are frequently used in a logic diagram when a gate has a large number of input lines.

FIG. 4-16  Gate symbols to accommodate a large number of inputs (a) AND gate (b) OR gate
4-8 ADDITIONAL LOGIC GATES

Three logic gates were introduced in the previous section. However, several additional ones frequently appear in logic diagrams. Fig. 4-17 summarizes the commonly encountered gate symbols. First, it should be noted that several additional logic functions are symbolized. Second, two gate symbols are shown for each function. These symbols utilize the inversion bubble notation.

The Inversion Bubble Notation

As indicated in Fig. 4-17, a simple triangle denotes a buffer amplifier. These circuits are needed to provide isolation, amplification, and signal restoration.
FIG. 4-17  Summary of gate symbols

Function                           Boolean description
AND                                f = xy
OR                                 f = x + y
NOT (inverter)                     f = x̄
NAND                               f = (xy)′ = x̄ + ȳ
NOR                                f = (x + y)′ = x̄ȳ
IDENTITY (buffer amplifier)        f = x
EXCLUSIVE OR                       f = xȳ + x̄y
NOT EXCLUSIVE OR (EQUIVALENCE)     f = xy + x̄ȳ
-0100 = 1.1011
-0111 = 1.1000
-1011   1|1.0011
              1   (end-around carry)
        1.0100

The output of the adder will be in 1s complement form in each case, with a 1 in the sign-bit position.

From the above we see that in order to implement an adder which will handle 4-bit magnitude signed 1s complement numbers, we can simply add another full adder to the configuration in Fig. 6-5. The sign inputs will be labeled X0 and Y0, and the output from the adder connected to X1 and Y1 will be connected to the Ci input of the new full adder for X0 and Y0. The Co output from the adder for X0 and Y0 will be connected to the Ci input for the adder for X4 and Y4. The S0 output from the new adder will give the sign digit for the sum.
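The end-around carry handling just described can be sketched in software. This is a minimal sketch in Python, not the book's hardware; the helper names are mine, and the word is 5 bits (1 sign bit plus 4 magnitude bits).

```python
# Sketch: 1s complement addition of two signed 5-bit words with
# end-around carry, as in the worked example above.

def ones_comp_add(a, b, bits=5):
    """Add two 1s complement words of the given width."""
    mask = (1 << bits) - 1
    total = (a & mask) + (b & mask)
    if total > mask:                  # carry out of the sign bit:
        total = (total + 1) & mask    # "end-around" it to the LSB
    return total

def to_ones_comp(n, bits=5):
    """Encode a signed integer as a 1s complement bit pattern."""
    mask = (1 << bits) - 1
    return n & mask if n >= 0 else (~(-n)) & mask

# -0100 + -0111 = -1011, matching the example:
s = ones_comp_add(to_ones_comp(-0b0100), to_ones_comp(-0b0111))
print(format(s, "05b"))   # 10100, i.e. 1.0100 = -1011
```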
(Overflow will not be detected in this adder; additional gates are required.)

6-9 ADDITION IN THE 2S COMPLEMENT SYSTEM

When
negative numbers are represented in the 2s complement system, the operation of addition is very similar to that in the 1s complement system. In parallel machines, the 2s complement of a number stored in a register may be formed by first complementing the register and then adding 1 to the least significant bit of the register. This process requires two steps and is therefore more time-consuming than the 1s complement system. However, the 2s complement system has the advantage of not requiring an end-around carry during addition. The four situations which may occur in adding two numbers when the 2s complement system is used are as follows:
1. When both numbers are positive, the situation is completely identical with that in the 1s complement system which has been discussed.

2. When one number is positive and the other negative, and the larger number is the positive number, a carry will be generated through the sign bit. This carry may be discarded, since the outputs of the adder are correct, as shown below:

+1000 = 0.1000        +0111 = 0.0111
-0111 = 1.1001        -0011 = 1.1101
+0001   1|0.0001      +0100   1|0.0100
        (carry is discarded)
3. When a positive and a negative number are added and the negative number is the larger, no carry will result in the sign bit, and the answer will again be correct as it stands:

+0011 = 0.0011        +0100 = 0.0100
-0100 = 1.1100        -1000 = 1.1000
-0001   1.1111        -0100   1.1100

Note: A 1 must be added to the least significant bit of a 2s complement number when converting it to a magnitude. For example, to convert 1.0011, form the 1s complement, 0.1100, and add 1, giving 0.1101; the number is therefore -1101.
When a positive and a negative number of the same magnitude are added, the result will be a positive zero, as follows:

+0011 = 0.0011
-0011 = 1.1101
 0000   1|0.0000
        (carry is discarded)
4. When two negative numbers are added together, a carry will be generated in the sign bit and also in the bit to the right of the sign bit. This will cause a 1 to be placed in the sign bit, which is correct, and the carry from the sign bit may be discarded.

-0011 = 1.1101        -0011 = 1.1101
-0100 = 1.1100        -1011 = 1.0101
-0111   1|1.1001      -1110   1|1.0010
        (carry is discarded)
For parallel machines, addition of positive and negative numbers is quite simple, since any overflow from the sign bit is simply discarded. Thus for the parallel adder in Fig. 6-5 we simply add another full adder, with X0 and Y0 as inputs and with the CARRY line Co from the full adder which adds X1 and Y1 connected to the carry input Ci of the full adder for X0 and Y0. A 0 is placed on the Ci input to the adder connected to X4 and Y4. This simplicity in adding and subtracting has made the 2s complement system the most popular for parallel machines. In fact, when signed-magnitude systems are used, the numbers generally are converted to 2s complement before addition of negative numbers or subtraction is performed. Then the numbers are changed back to signed magnitude.
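The four cases above can be checked with a short sketch. This is an assumption-level model in Python: plain integers stand in for the 5-bit registers, and the masking operation plays the role of discarding the carry out of the sign bit.

```python
# Sketch: 2s complement addition of 5-bit words; any carry out of
# the sign bit is simply discarded by the mask.

def twos_comp_add(a, b, bits=5):
    mask = (1 << bits) - 1
    return (a + b) & mask           # discard carry out of the sign bit

def to_twos(n, bits=5):
    return n & ((1 << bits) - 1)    # Python's & already yields 2s complement

# The four cases from the text:
print(format(twos_comp_add(to_twos(8), to_twos(-7)), "05b"))    # 00001 = +0001
print(format(twos_comp_add(to_twos(3), to_twos(-4)), "05b"))    # 11111 = -0001
print(format(twos_comp_add(to_twos(3), to_twos(-3)), "05b"))    # 00000
print(format(twos_comp_add(to_twos(-3), to_twos(-4)), "05b"))   # 11001 = -0111
```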
6-10 ADDITION AND SUBTRACTION IN A PARALLEL ARITHMETIC ELEMENT

We now examine the design of a gating network which will either add or subtract two numbers. The network is to have an ADD input line and a SUBTRACT input line as well as the lines that carry the representation of the numbers to be added or subtracted. When the ADD line is a 1, the sum of the numbers is to be on the output lines, and when the SUBTRACT line is a 1, the difference is to be on the output lines. If both the ADD and SUBTRACT lines are 0s, the output is to be 0.

First we note that if the machine is capable of adding both positive and negative numbers, subtraction may be performed by complementing the subtrahend and then adding. For instance, 8 - 4 yields the same result as 8 + (-4), and 6 - (-2) yields the same result as 6 + 2. Subtraction therefore may be performed by an arithmetic element capable of adding, by forming the complement of the subtrahend and then adding. For instance, in the 1s complement system, four cases may arise:
TWO POSITIVE NUMBERS
 0.0011        0.0011
-0.0001   →   +1.1110   (complementing the subtrahend and adding)
               1|0.0001
                     1   (end-around carry)
               0.0010

TWO NEGATIVE NUMBERS
 1.1101        1.1101
-1.1011   →   +0.0100
               1|0.0001
                     1
               0.0010

POSITIVE MINUEND, NEGATIVE SUBTRAHEND
 0.0010        0.0010
-1.1101   →   +0.0010
               0.0100

NEGATIVE MINUEND, POSITIVE SUBTRAHEND
 1.0101        1.0101
-0.0010   →   +1.1101
               1|1.0010
                     1
               1.0011
The same basic rules apply to subtraction in the 2s complement system, except that any carry generated in the sign-bit adders is simply dropped. In this case the 2s complement of the subtrahend is formed, and the complemented number is then added to the minuend with no end-around carry.

We now examine the implementation of a combined adder and subtracter network. The primary problem is to form the complement of the number to be subtracted. This complementation of the subtrahend may be performed in several ways. For the 1s complement system, if the storage register is composed of flip-flops, the 1s complement can be formed by simply connecting the complement of each input to the adder. The 1 which must be added to the least significant position to form a 2s complement may be added when the two numbers are added by connecting a 1 at the CARRY input of the adder for the least significant bits.
A complete logical circuit capable of adding or subtracting two signed 2s complement numbers is shown in Fig. 6-6. One number is represented by X0, X1, X2, X3, and X4, and the other number by Y0, Y1, Y2, Y3, and Y4. There are two control signals, ADD and SUBTRACT. If neither control signal is a 1 (that is, both are 0s), then the outputs from the five full adders, which are S0, S1, S2, S3, and S4, will all be 0s. If the ADD control line is made a 1, the sum of the number X and the number Y will appear as S0, S1, S2, S3, and S4. If the SUBTRACT line is made a 1, the difference between X and Y (that is, X - Y) will appear on S0, S1, S2, S3, and S4.
Notice that the AND-to-OR gate network connected to each full adder's Y input selects either Yi or its complement, so that, for instance, an ADD causes Yi to enter the appropriate full adder, while a SUBTRACT causes the complement of Yi to enter the full adder. To either add or subtract, each X input is connected to the appropriate full adder. When a subtraction is called for, the complement of each Y is gated into the full adder, and the 1 is added by connecting the SUBTRACT signal to the Ci input of the full adder for the lowest order bits X4 and Y4. Since the SUBTRACT line is a 1 when we subtract and a 0 when we add, a carry will be on this line only when subtraction is performed.
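In software terms, the gating trick just described — complement the Y inputs and inject the SUBTRACT line as the low-order carry-in, forming the 2s complement of Y on the fly — looks roughly like this. The function and parameter names are mine, not the book's.

```python
# Sketch of the Fig. 6-6 idea: to subtract, gate in the 1s complement
# of Y and use the SUBTRACT line itself as the carry-in of 1.

def add_sub(x, y, subtract, bits=5):
    mask = (1 << bits) - 1
    if subtract:
        y = ~y & mask                  # complement of Y (flip-flop Q' side)
    carry_in = 1 if subtract else 0    # SUBTRACT doubles as carry-in
    return (x + y + carry_in) & mask   # carry out of the sign bit discarded

print(format(add_sub(0b00011, 0b00001, subtract=True), "05b"))    # 00010 (3-1)
print(format(add_sub(0b00011, 0b00100, subtract=False), "05b"))   # 00111 (3+4)
```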
FIG. 6-6 Parallel addition and subtraction. To add, the ADD line is made a 1; to subtract, the SUBTRACT line is made a 1. Numbers are to be in 2s complement form.
The simplicity of the operation of Fig. 6-6 makes 2s complement addition and subtraction very attractive for computer use, and it is the most frequently used system.
The configuration in Fig. 6-6 is the most frequently used for addition and subtraction because it provides a simple direct means for either adding or subtracting positive or negative numbers. Quite often the S0, S1, ..., S4 lines are gated back into the X flip-flops, so that the sum or difference of the numbers replaces the original value of X.

An important consideration is overflow. In digital computers an overflow is said to occur when the performance of an operation results in a quantity beyond the capacity of the register (or storage register) which is to receive the result. Since the registers in Fig. 6-6 have a sign bit plus 4 magnitude bits, they can store from +15 to -16 in 2s complement form. Therefore, if the result of an addition or subtraction were greater than +15 or less than -16, we would say that an overflow had occurred. Suppose we add +8 to +12; the result should be +20, and this cannot be represented (fairly) in 2s complement on the lines S0, S1, S2, S3, and S4. The same thing happens if we add -13 and -7 or if we subtract -8 from +12. In each case logical circuitry is used to detect the overflow condition and signal the computer control element. Various options are then available, and what is done can depend on the type of instruction being executed. (Deliberate overflows are sometimes used in double-precision routines. Multiplication and division use the results as they are.)
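A sketch of overflow detection for this register width follows. Note the assumption: this checks the arithmetic range directly, whereas real hardware typically compares the carries into and out of the sign bit; the names are mine.

```python
# Sketch: detect 2s complement overflow for a sign bit plus 4
# magnitude bits, i.e. a representable range of -16..+15.

def overflowing_add(a, b, bits=5):
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    result = a + b
    overflow = not (lo <= result <= hi)
    return result & ((1 << bits) - 1), overflow

print(overflowing_add(8, 12))    # (20, True): +20 exceeds +15
print(overflowing_add(-13, -7))  # (12, True): -20 is below -16
print(overflowing_add(8, -7))    # (1, False): no overflow
```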
6-11 FULL ADDER DESIGNS

The full adder is a basic component of an arithmetic element. Figure 6-3 illustrated the block diagram symbol for the full adder, along with a table of combinations for the input-output values and the expressions describing the sum and carry lines. Succeeding figures and text described the operation of the full adder. Notice that a parallel addition system requires one full adder for each bit in the basic word.

There are of course many gate configurations for full binary adders. Examples of an IBM adder and an MSI package containing two full adders follow.
1. Full binary adder. Figure 6-7 illustrates the IBM full binary adder configuration used in several general-purpose digital computers. There are three inputs to the circuit: the X input is from one of the storage devices in the accumulator, the Y input is from the corresponding storage device in the register to be added to the accumulator register, and the third input is the CARRY input from the adder for the next least significant bit. The two outputs are the SUM output and the CARRY output. The SUM output will contain the sum value for this particular digit of the output. The CARRY output will be connected to the CARRY input of the next most significant bit's adder (refer to Fig. 6-5).

The outputs from the three AND gates connected directly to the X, Y, and C inputs are logically added together by the OR gate circuit directly beneath. If either the X and Y, X and C, or Y and C input lines contain 1s, there should be a CARRY output. The output of this circuit, written in
FIG. 6-7 Full adder used in IBM machines. Carry = XY + XC + YC; Sum = [(XY + XC + YC)' + XYC](X + Y + C) = X'Y'C + X'YC' + XY'C' + XYC.
logical equation form, is shown on the figure. This may be compared with the expression derived in Fig. 6-3.

The derivation of the SUM output expression is not so straightforward. The CARRY output expression XY + XC + YC is first inverted (complemented), yielding (XY + XC + YC)'. The logical product of X, Y, and C is formed by an AND gate and logically added to this, forming (XY + XC + YC)' + XYC. The logical sum of X, Y, and C is then multiplied times this, forming the expression [(XY + XC + YC)' + XYC](X + Y + C). When multiplied out and simplified, this expression will be X'Y'C + X'YC' + XY'C' + XYC, the expression derived in Fig. 6-3.
Tracing through the logical operation of the circuit for various values will indicate that the SUM output will be a 1 when only one of the input values is equal to 1, or when all three input values are equal to 1. For all other combinations of inputs the SUM output value will be a 0.
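The simplification claimed above can be verified exhaustively. This is a quick Python check of the algebra over all eight input combinations, not a model of the IBM circuit itself.

```python
# Check that [(XY + XC + YC)' + XYC](X + Y + C) equals the canonical
# sum (1 for an odd number of 1 inputs) and that XY + XC + YC is the
# canonical carry (1 for two or more 1 inputs).
from itertools import product

for x, y, c in product((0, 1), repeat=3):
    carry = (x & y) | (x & c) | (y & c)
    s = ((1 - carry) | (x & y & c)) & (x | y | c)
    assert s == (x + y + c) % 2           # sum: odd number of 1 inputs
    assert carry == ((x + y + c) >= 2)    # carry: two or more 1 inputs
print("expressions verified")
```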
2. Two full adders in an integrated circuit (IC) container. Figure 6-8 shows two full adders. This package was developed for integrated circuits using transistor-transistor logic (TTL). The entire circuitry is packaged in one IC container. The maximum delay from an input change to a SUM output change is on the order of 8 nanoseconds (ns). The maximum delay from input to the C2 output is about 6 ns.
FIG. 6-8 Two full adders in an IC container (courtesy of Texas Instruments)
The amount of delay associated with each carry is an important figure in evaluating a full adder for a parallel system, because the amount of time required to add two numbers is determined by the maximum time it takes for a carry to propagate through the adders. For instance, if we add 01111 to 10001 in the 2s complement system, the carry generated by the 1s in the least significant digit of each number must propagate through four carry stages and a sum stage before we can safely gate the sum into the accumulator. A study of the addition of these two numbers using the configuration in Fig. 6-5 will make this clear. The problem is called the carry-ripple problem.

There are a number of techniques which are used in high-speed machines to alleviate this problem. The most used is a bridging or carry-look-ahead circuit which calculates the carry-out of a number of stages simultaneously and then delivers this carry to the succeeding stages.
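The carry-ripple effect can be illustrated with a software ripple adder built from the full-adder equations. This is an analogy only: it models the logic, not gate delays, and the helper names are mine.

```python
# Sketch: a 5-bit ripple adder chained from full adders, applied to the
# 01111 + 10001 example, where the LSB carry ripples through every stage.

def full_adder(x, y, c):
    s = x ^ y ^ c
    carry = (x & y) | (x & c) | (y & c)
    return s, carry

def ripple_add(x_bits, y_bits):        # bit lists, LSB first
    carry, out = 0, []
    for x, y in zip(x_bits, y_bits):
        s, carry = full_adder(x, y, carry)
        out.append(s)
    return out, carry

bits, carry = ripple_add([1, 1, 1, 1, 0], [1, 0, 0, 0, 1])
print(bits[::-1], carry)   # [0, 0, 0, 0, 0] with final carry 1, i.e. 100000
```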
6-12 THE BINARY-CODED-DECIMAL (BCD) ADDER

Arithmetic units which perform operations on numbers stored in BCD form must have the ability to add 4-bit representations of decimal digits. To do this a BCD adder is used. A block diagram symbol for a BCD adder is shown in Fig. 6-9. The adder has an augend digit input consisting of four lines, an addend digit input of four lines, a carry-in and a carry-out, and a sum digit with four output lines. The augend digit, addend digit, and sum digit are each represented in 8, 4, 2, 1 code.
FIG. 6-9 Decimal adder block diagram: augend digit inputs, addend digit inputs, carry-in, carry-out, and sum digit outputs.
The purpose of the BCD adder in Fig. 6-9 is to add the augend and addend digits and the carry-in and produce a sum digit and carry-out. It is possible to make a BCD adder using full adders and AND or OR gates. An adder made in this way is shown in Fig. 6-10.

FIG. 6-10 BCD adder, showing the carry from the lower order adder and the carry to the next higher order adder.
FIG. 6-11 Complete BCD adder in an IC package
There are eight inputs to the BCD adder, four Xi, or augend, inputs and four Yi, or addend, inputs. Each of these inputs will represent a 0 or a 1 during a given addition. If 3(0011) is to be added to 2(0010), then X8 = 0, X4 = 0, X2 = 1, and X1 = 1; Y8 = 0, Y4 = 0, Y2 = 1, and Y1 = 0. The basic adder in Fig. 6-10 consists of the four binary adders at the top of the figure and performs base 16 addition when the intent is to perform base 10 addition. Some provision must therefore be made to (1) generate carries and (2) correct sums greater than 9. For instance, if 3(0011) is added to 8(1000), the result should be 1(0001) with a carry generated.
The actual circuitry which determines when a carry is to be transmitted to the next most significant digits to be added consists of the full binary adder to which the sum (S) outputs from the adders for the 8, 4, 2 inputs are connected and of the OR gate to which the carry (C) from the eight-position bits is connected. An examination of the addition process indicates that a carry should be generated when the 8 AND 4, or 8 AND 2, or 8 AND 4 AND 2 sum outputs from the base 16 adder represent 1s, or when the CARRY output from the eight-position adder contains a 1. (This occurs when 8s or 9s are added together.) Whenever the sum of two digits exceeds 9, the CARRY TO NEXT HIGHER ORDER ADDER line contains a 1 for the adder in Fig. 6-10.

A further difficulty arises when a carry is generated. If 7(0111) is added to 6(0110), a carry will be generated, but the output from the base 16 adder will be 1101. This 1101 does not represent any decimal digit in the 8, 4, 2, 1 system and must be corrected. The method used to correct this is to add 6(0110) to the sum from the base 16 adders whenever a carry is generated. This addition is performed by adding 1s to the weight 4 and weight 2 position output lines from the base 16 adder when a carry is generated. The two half adders and the full adder at the bottom of Fig. 6-10 perform this function.
Essentially then, the adder performs base 16 addition and corrects the sum, if it is greater than 9, by adding 6. Several examples of this are shown below.

8 + 7 = 15:    1000 + 0111 = 1111;    1111 + 0110 = 1|0101 = 5, with a carry generated

9 + 5 = 14:    1001 + 0101 = 1110;    1110 + 0110 = 1|0100 = 4, with a carry generated
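The add-6 correction rule can be sketched for a single digit position. This is an arithmetic model of the Fig. 6-10 behavior, not its gate structure; the function name is mine.

```python
# Sketch: add two BCD digits in binary (base 16), then add 6 and
# generate a carry whenever the raw sum exceeds 9.

def bcd_digit_add(x, y, carry_in=0):
    raw = x + y + carry_in             # base 16 addition of the 4-bit groups
    if raw > 9:
        return (raw + 6) & 0b1111, 1   # correct by 6, carry to next digit
    return raw, 0

print(bcd_digit_add(8, 7))   # (5, 1): 8 + 7 = 15 -> digit 5, carry 1
print(bcd_digit_add(9, 5))   # (4, 1): 9 + 5 = 14 -> digit 4, carry 1
print(bcd_digit_add(3, 2))   # (5, 0)
```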
Figure 6-11 shows a complete BCD adder in an IC package. The inputs are digits A and B, and the outputs are digits S. A carry-in and a carry-out are included. The circuit line used is CMOS.
6-13 POSITIVE AND NEGATIVE BCD NUMBERS

The techniques for handling BCD numbers greatly resemble those for handling binary numbers. A sign bit is used to indicate whether the number is positive or negative, and there are three methods of representing negative numbers which must be considered. The first and most obvious method is, of course, to represent a negative number in true magnitude form with a sign bit, so that -645 is represented as 1.645. The other two possibilities are to represent negative numbers in a 9s or a 10s complement form, which resembles the binary 1s and 2s complement forms.
6-14 ADDITION AND SUBTRACTION IN THE 9S COMPLEMENT SYSTEM

When decimal numbers are represented in a binary code in which the 9s complement is formed when the number is complemented, the situation is roughly the same as when the 1s complement is used to represent a binary number. Four cases may arise: two positive numbers may be added; a positive and a negative number may be added, yielding a positive result; a positive and a negative number may be added, yielding a negative result; and two negative numbers may be added. Since there is no problem when two positive numbers are added, the three latter situations will be illustrated.
Negative and positive number, positive sum:

+692 = 0.692
-342 = 1.657
+350   1|0.349
             1   (end-around carry)
       0.350

Positive and negative number, negative sum:

-631 = 1.368
+342 = 0.342
-289   1.710 = -289

Two negative numbers:

-248 = 1.751
-329 = 1.670
-577   1|1.421
             1   (end-around carry)
       1.422 = -577
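The three illustrated cases can be reproduced with a small model of the 9s complement system. The encoding helpers are mine, not the book's: a word is modeled as sign*1000 + three decimal digits, with the magnitude of a negative number stored as its 9s complement.

```python
# Sketch: 3-digit decimal addition in the 9s complement system with a
# sign digit and end-around carry.

def encode(n):
    return n if n >= 0 else 1000 + (999 + n)      # 9s complement of magnitude

def decode(w):
    return w if w < 1000 else -(999 - (w - 1000))

def add_9s(a, b):
    total = a + b
    if total >= 2000:              # carry out of the sign position:
        total = total - 2000 + 1   # end-around it to the units digit
    return total

print(decode(add_9s(encode(692), encode(-342))))    # 350
print(decode(add_9s(encode(-631), encode(342))))    # -289
print(decode(add_9s(encode(-248), encode(-329))))   # -577
```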
The rules for handling negative numbers in the 10s complement system are the same as those for the binary 2s complement system in that no carry must be ended-around. A parallel BCD adder may therefore be constructed using only the BCD adder as the basic component, and all combinations of positive and negative numbers may thus be handled.

There is an additional complexity in BCD addition, however, because the 9s complement of a BCD digit cannot be formed by simply complementing each bit in the representation. As a result, a gating block called a complementer must be used.
To illustrate the type of circuit which may be used to form the code groups for complements of numbers, consider a block diagram of a logical circuit which will form the 9s complement of a code group representing a decimal number in BCD.

FIG. 6-19 Flowchart of division algorithm
6-20 LOGICAL OPERATIONS

In addition to the arithmetic operations, many logical operations are performed by ALUs. Three logical operations will be described here: logical multiplication, logical addition, and sum modulo 2 addition (the exclusive OR operation). Each of these will be operations between registers, where the operation specified will be performed on each of the corresponding digits in the two registers. The result will be stored in one of the registers.

The first operation, logical multiplication, is often referred to as an extract, masking, or AND operation. The rules for logical multiplication have been defined as 0 · 0 = 0; 1 · 0 = 0; 0 · 1 = 0; and 1 · 1 = 1. Suppose that the contents of the accumulator register are "logically multiplied" by another register. Let each register be five binary digits in length. If the accumulator contains 01101 and the other register 00111, the contents of the accumulator after the operation will be 00101.
The masking, or extracting, operation is useful in "packaging" computer words. To save space in memory and keep associated data together, several pieces of information may be stored in the same word. For instance, a word may contain an item number, wholesale price, and retail price, packaged as follows:

s | 1-6         | 7-15            | 16-24
  | item number | wholesale price | retail price

To extract the retail price, the programmer will simply logically multiply the word above by a word containing 0s in the sign digit through digit 15, and with 1s in positions 16 through 24. After the operation, only the retail price will remain in the word.
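The extract operation on this packed word can be sketched as follows. The field widths follow the text (sign bit, a 6-bit item number, a 9-bit wholesale price, a 9-bit retail price); the particular field values are invented for illustration.

```python
# Sketch: extract the retail price field by logical multiplication
# with a mask that has 1s only in positions 16-24 (the low 9 bits).

RETAIL_MASK = (1 << 9) - 1            # 1s in positions 16-24 only

def pack(item, wholesale, retail):
    return (item << 18) | (wholesale << 9) | retail

word = pack(item=37, wholesale=250, retail=399)
print(word & RETAIL_MASK)             # 399: the mask extracts the retail price
```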
The logical addition operation, or the OR operation, and the sum modulo 2 operation are also provided in most computers. The rules for these operations are:

LOGICAL ADDITION    MODULO 2 ADDITION
0 + 0 = 0           0 ⊕ 0 = 0
0 + 1 = 1           0 ⊕ 1 = 1
1 + 0 = 1           1 ⊕ 0 = 1
1 + 1 = 1           1 ⊕ 1 = 0

Figure 6-20 shows how a single accumulator flip-flop and a B flip-flop can be gated together so that all three of these logical operations can be performed. The circuit in Fig. 6-20 would be repeated for each stage of the accumulator register.

There are three control signals, LOGICAL MULTIPLY, LOGICAL ADD, and MOD 2 ADD. If one of these is up, or 1, when a clock pulse arrives, this operation is performed and the result placed in the ACC (accumulator) flip-flop. If none of the control signals is a 1, nothing happens, and the ACC remains as it is.
The actual values desired are found by three sets of gates; that is, ACC · B, ACC + B, and ACC ⊕ B are all formed first. Each of these is then ANDed with the appropriate control signal. Finally the three control signals are ORed together, and this signal is used to gate the appropriate value into the ACC flip-flop when one of the control signals is a 1.

FIG. 6-20 Circuit for gating logical operations into the accumulator flip-flop

Figure 6-20 shows how a choice of several different function values can be gated into a single flip-flop using control signals. We could include an ADD signal and a SHIFT RIGHT and a SHIFT LEFT by simply adding more gates.

Figure 6-21 shows an example of the logic circuitry used in modern computers to form sections of an ALU. All the gates shown in this block diagram are contained in a single IC chip (package) with 24 pins. The chip is widely used
in the DEC PDP-11 and Data General NOVAs, for example. With TTL series (Schottky) circuits the maximum delay from input to output is 11 ns. (There is an ECL version with a 7 ns maximum delay.) This chip is called a 4-bit arithmetic-logic unit and can add, subtract, AND, OR, etc., two 4-bit register sections. Two chips could be used for the logic in an 8-bit accumulator, four chips would form a 16-bit accumulator, etc.
The function performed by this chip is controlled by the mode input M and four function select inputs S0, S1, S2, and S3. When the mode input M is low (a 0), the 74S181 performs such arithmetic operations as ADD or SUBTRACT. When the mode input M is high (a 1), the ALU does logic operations on the A and B inputs "a bit at a time." (Notice in Fig. 6-21 that the carry generating gates are disabled by M = 1.) For instance, if M is a 0, S1 and S2 are also 0s, and S0 and S3 are 1s, the 74S181 performs arithmetic addition. If M is a 1, S0 and S3 are 1s, and S1 and S2 are 0s, the 74S181 chip exclusive ORs (mod 2 adds) A and B. (It forms A0 ⊕ B0, A1 ⊕ B1, A2 ⊕ B2, and A3 ⊕ B3.) The table in Fig. 6-21 further describes the operation of this chip.
FIG. 6-21 4-bit arithmetic-logic unit (74S181): function table giving, for each setting of the select inputs S0-S3, the logic function performed when M = H and the arithmetic function performed when M = L, with inputs and outputs active low. Note: x ⊕ y is the symbol for a mod 2 adder (exclusive OR gate); + in the arithmetic column is the sign for arithmetic addition; L = 0, H = 1.
6-21 FLOATING-POINT NUMBER SYSTEMS

The preceding sections describe number representation systems where positive and negative integers are stored in binary words. In the representation system used, the binary point is "fixed" in that it lies at the end of each word, and so computers calculate with binary integers. When each value represented is in this format, the operations are called fixed-point arithmetic.

In science it is often necessary to calculate with very large or very small numbers. Scientists have therefore adopted a convenient notation in which a mantissa plus an exponent are used to represent a number. For instance, 4,900,000 may be written as 0.49 × 10^7, where 0.49 is the mantissa and 7 is the value of the exponent, or 0.00023 may be written as 0.23 × 10^-3. The notation is based on the relation y = a × r^p, where y is the number to be represented, a is the mantissa, r is the base of the number system (r = 10 for decimal, and r = 2 for binary), and p is the power to which the base is raised.

It is possible to calculate with this representation system. To multiply a × 10^m by b × 10^n, we form (a × b) × 10^(m+n). To divide a × 10^m by b × 10^n, we form (a/b) × 10^(m-n). To add a × 10^m to b × 10^n, we must first make m equal to n. If m = n, then a × 10^m + b × 10^n = (a + b) × 10^m. The process of making m equal to n is called scaling the numbers.

Considerable "bookkeeping" can be involved in scaling the numbers, and there can be difficulty in maintaining precision during computations when the numbers vary over a very wide range of magnitudes. For computer usage these problems are alleviated by means of two techniques whereby the computer (not the programmer) keeps track of the radix (decimal) point, automatically scaling the numbers. In the first, programmed floating-point routines automatically scale the numbers used during the computations while maintaining the precision of the results and keeping track of the scale factors. These routines are used with small computers having only fixed-point operations.
A second technique is to build what are called floating-point operations into the computer's hardware. The logical circuitry of the computer is then used to perform the scaling automatically and to keep track of the exponents when calculations are performed. To effect this, a representation system called the floating-point system is used.

A floating-point number in a computer uses the exponential notation system described above, and during calculations the computer keeps track of the exponent as well as the mantissa. A computer number word in a floating-point system may be divided into three pieces: the first is the sign bit, indicating whether the number is negative or positive; the second part contains the exponent for the number to be represented; and the third part is the mantissa. As an example, let us consider a 12-bit word length computer with a floating-point word. Figure 6-22 shows this.
FIG. 6-22 One 12-bit floating-point word: characteristic C, integer part I, and binary point.

It is common practice to call the exponent part of the word the characteristic and the mantissa section the integer part; we shall adhere to this practice. The integer part of the floating-point word shown represents its value in signed-magnitude form (rather than 2s complement, although this has been used). The characteristic is also in signed-magnitude form. The value of the number expressed is I × 2^C, where I is the value of the integer part, and C is the value of the characteristic.
Figure 6-23 shows several values of floating-point numbers both in binary form and after being converted to decimal.

FIG. 6-23 Values of floating-point numbers in 12-bit all-integer systems:
C = +7, I = +11: value is 2^7 × 11 = 1408
C = +3, I = -7: value is 2^3 × (-7) = -56
C = -5, I = +5: value is 2^-5 × 5
C = -6, I = -9: value is 2^-6 × (-9)

Since the characteristic has 5 bits and is in signed-magnitude form, C in I × 2^C can have values from -15 to +15. The value of I is a sign-plus-magnitude binary integer of 7 bits, and so I can have values from -63 to +63. The largest number represented by this system would have a maximum C and a maximum I and would be 63 × 2^15. The least number
would be -63 × 2^15. This example shows the use of a floating-point number representation system to store "real" numbers of considerable range in a binary word. One other widely followed practice is to express the mantissa of the word as a fraction instead of as an integer. This is in accord with common scientific usage since we commonly say that 0.93 × 10^4 is in "normal" form for exponential notation (and not 93 × 10^2). In this usage a mantissa in decimal normally has a value from 0.1 to 0.999.... Similarly, a binary mantissa in normal form would have a value from 0.5 (decimal) to less than 1. Most computers maintain their mantissa sections in normal form, continually adjusting words so that a significant (1) bit is always in the leftmost mantissa position (next to the sign bit).
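A decoder for the 12-bit all-integer layout can be sketched as follows. This is my helper, following the text's field order as an assumption: a 5-bit signed-magnitude characteristic C, then a 7-bit signed-magnitude integer part I, with value I × 2^C.

```python
# Sketch: decode a 12-bit floating-point word of the form
# |C sign|C magnitude(4)|I sign|I magnitude(6)| into its value.

def decode_fp12(word):
    c_sign = (word >> 11) & 1
    c_mag = (word >> 7) & 0b1111
    i_sign = (word >> 6) & 1
    i_mag = word & 0b111111
    c = -c_mag if c_sign else c_mag
    i = -i_mag if i_sign else i_mag
    return i * 2.0 ** c

# C = +7, I = +11 -> 11 * 2**7 = 1408, as in Fig. 6-23:
word = (0 << 11) | (7 << 7) | (0 << 6) | 11
print(decode_fp12(word))   # 1408.0
```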
When the mantissa is in fraction form, this section is called the fraction. For example, we can express floating-point numbers with our 12-bit characteristic and fraction word by simply supposing the binary point to be to the left of the magnitude (and not to the right as in integer representation). In this system a number to be represented has value F × 2^C, where F is the binary fraction and C is the characteristic. For the 12-bit word considered before, fractions would have values from 1 - 2^-6, which is 0.111111, to -(1 - 2^-6), which is 1.111111. Thus numbers
from (1 - 2^-6) × 2^15 to -(1 - 2^-6) × 2^15 can be represented, or about 32,000 to -32,000. The smallest value the fraction part could have is now 2^-1, which is the fraction 0.100000, and the smallest characteristic is -15, so the smallest positive number representable is 2^-1 × 2^-15, or 2^-16. Most computers use this fractional system for the mantissa, although computers of Burroughs Corporation and the National Cash Register Company use the integer system previously described.
The Univac 1108 represents single-precision floating-point numbers in this format: bit 1 is the sign bit, bits 2 through 9 are the characteristic (8 bits), and bits 10 through 36 are the fraction part (27 bits). For positive numbers, the characteristic C is treated as a binary integer, the sign bit is a 0, and the fraction part is a binary fraction with value 0.5
FIG. 7-1 Words in high-speed memory. Each word contains the same number of bits.

If we write the word 01001011 into the high-speed memory at address 17 and later read from this same address, we shall read the word 01001011. If we again read from this memory address at a later time (and have not written another word in), the word 01001011 will again be read. This means the memory is nondestructive read in that reading does not destroy
or change a stored word.

It is important to understand the difference between the contents of a memory address and the address itself. A memory is like a large cabinet containing as many drawers as there are addresses in memory. In each drawer is a word, and the address of each word is written on the outside of the drawer. If we write or store a word at address 17, it is like placing the word in the drawer labeled 17. Later, reading from address 17 is like looking in that drawer to see its contents. We do not remove the word at an address when we read, but change the contents at an address only when we store or write a new word.
From an exterior viewpoint, a high-speed main memory looks very much like a "black box" with a number of locations or addresses into which data can be stored or from which data can be read. Each address or location contains a fixed number of binary bits, the number being called the word length for the memory. A memory with 4096 locations, each with a different address, and with each location storing 16 bits, is called a 4096-word 16-bit memory, or, in the vernacular of the computer trade, a 4K 16-bit memory. (Since memories generally come with a number of words equal to 2^n for some n, if a memory has 2^14 = 16,384 words, computer literature and jargon would refer to it as a 16K memory, because it is always understood that the full 2^n words actually occur in the memory. Thus, a 2^15-word 16-bit memory is called a 32K 16-bit memory.) Memories can be read from (that is, data can be taken out) or written into (that is, data can be entered into the memory). Memories which can be both read from and written into are called read-write memories. Some memories have programs or data permanently stored and are called read-only memories.

A block diagram of a read-write memory is shown in Fig. 7-2. The computer places the address of the location into which data are to be written, or from which data are to be read, into the memory address register. This register consists of n binary devices (generally flip-flops), where 2^n is the number of words that can be stored in the memory. The data to be written into the memory are placed in the memory buffer register, which has as many binary storage devices as there are bits in each memory word. The memory is told to write by means of a 1 signal on the WRITE line. The memory will then store the contents of the memory buffer register in the location specified by the memory address register. Words are read by placing the address of the location to be read from into the memory address register. A 1 signal is then placed on the READ line, and the contents of that location are placed by the memory in the memory buffer register.

THE MEMORY ELEMENT

[FIG. 7-2 Read-write random-access memory: 2^n words, m bits per word, with READ and WRITE control lines and a memory buffer register.]
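The black-box view above (addresses, word length, memory address register, memory buffer register, READ and WRITE signals) translates directly into a small simulation. The following Python sketch is purely illustrative and is not part of the original text:

```python
class Memory:
    """Black-box read-write memory: 2^n words of a fixed word length."""

    def __init__(self, address_bits, word_length):
        self.words = [0] * (2 ** address_bits)
        self.word_length = word_length

    def write(self, mar, mbr):
        # WRITE: store the buffer register contents at the given address.
        self.words[mar] = mbr & ((1 << self.word_length) - 1)

    def read(self, mar):
        # READ is nondestructive: the word stays at its address.
        return self.words[mar]

mem = Memory(address_bits=12, word_length=16)  # a "4K" 16-bit memory
print(len(mem.words))     # 4096
mem.write(17, 0b0100101101001011)
print(bin(mem.read(17)))  # the stored word, unchanged by reading
print(bin(mem.read(17)))  # reading again returns the same word
```

Note how the "K" jargon falls out of the arithmetic: 12 address bits give 2^12 = 4096 = 4K words, and 14 bits would give 16K.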
As can be seen, the computer communicates with the memory by means of the memory address register, the memory buffer register, and the READ and WRITE inputs. Memories are generally packaged in separate modules or packages. It is possible to buy a memory module of a specified size from a number of different manufacturers, and, for instance, an 8K 16-bit memory module can be purchased on a circuit board ready for use. Similarly, if a computer is purchased with a certain amount of main memory, more memory can generally be added later by purchasing additional modules and "plugging them in."

If it is possible to read from or write into any location "at once," that is, if there is no more delay in reaching one location as opposed to another location, the memory is called a random-access memory (RAM). Computers almost invariably use random-access read-write memories for their high-speed main memory and then use backup or slower-speed memories to hold auxiliary data.

7-3 LINEAR-SELECT MEMORY ORGANIZATION

The most used random-access memories are IC memories and magnetic core memories. Both are organized in a similar manner, as will be shown. In order to present the basic principles, an idealized IC memory will be shown, followed by details of several actual commercial memories.

In any memory there must be a basic memory cell. Figure 7-3 shows a basic memory cell consisting of an RS flip-flop with associated control circuitry.

THE McGRAW-HILL COMPUTER HANDBOOK

[FIG. 7-3 Basic memory cell.]

In order to use this cell in a memory, however, a technique for selecting those cells addressed by the memory address register must be used, as must a method to control whether the selected cells are written into or read from.
Figure 7-4 shows the basic memory organization for a linear-select IC memory. This is a four-address memory with 3 bits per word. The memory address register (MAR) selects the memory cells (flip-flops) to be read from or written into through a decoder which selects three flip-flops for each address that can be in the memory address register.

Figure 7-5(a) shows the decoder in expanded form. It has an input from each flip-flop (bit) to be decoded. If there are two input bits, as in Fig. 7-5(a), then there will be four output lines, one for each state (value) the input register can take. For instance, if the MAR contains 11, with a 1 in both flip-flops, the lowest output line of the decoder will be a 1, and the remaining three lines a 0. Similarly, if both flip-flops contain 0, then the upper line will be a 1 and the remaining lines 0. Similar reasoning will show that there will be a single 1 output for each possible input state, with the remaining lines always 0.

Figure 7-5(b) shows a decoder for three inputs. The decoder has eight output lines. In general, for n input bits a decoder will have 2^n output lines. The decoder in Fig. 7-5(b) operates in the same manner as that in Fig. 7-5(a). For each input state the decoder will select a particular output line, placing a 1 on the selected line and a 0 on the remaining lines.

Returning to Fig. 7-4, we now see that corresponding to each value that can be placed in the MAR, a particular output line from the decoder will be selected and carry a 1 value. The remaining output lines from the decoder will contain
[FIG. 7-4 Linear-select IC memory: data inputs I1-I3, two MAR flip-flops feeding a decoder with output lines 00, 01, 10, and 11, and READ and WRITE control lines.]

0s, not selecting the AND gates at the inputs and outputs of the flip-flops for these rows. (Refer also to Fig. 7-3.)

The memory in Fig. 7-4 is organized as follows: There are four words, and each row of three memory cells comprises a word. At any given time the MAR selects a word in memory. If the READ line is a 1, the contents of the three cells in the selected word are read out on the O1, O2, and O3 lines. If the WRITE line is a 1, the values on I1, I2, and I3 will be read into the memory.

The AND gates connected to the OUT lines of the memory cells in Fig. 7-3 must have the property that when a number of AND gate output lines are connected together, the combined output goes to the highest line's level. (If any line is a 1, the entire line will be a 1; otherwise it is a 0.) This is called a wired OR. In Fig. 7-4 the four memory cells in the first column are wire-ORed together, so if any OUT output is a 1, the entire line will be a 1. (Memory cells in IC memories are constructed in this manner.)

Now if the READ line is a 1 in Fig. 7-4, the output values for the flip-flops
[FIG. 7-5 (a) Four-output decoder; (b) parallel decoder with eight output lines.]

in the selected row will all be gated onto the output line for each bit in the memory. For example, if the MAR contains 01, then the second output line from the decoder (marked 01) will be a 1, and the input gates and output gates of these three memory cells will be selected. If the READ line is a 1, and if the three memory cells contain 110, then the outputs from the three memory cells in the second row will present 110 to the AND gates at the bottom of the figure, which will transmit the value 110 as an output from the memory. If the WRITE line is a 1 and the MAR again contains 01, the second row of flip-flops will have selected inputs. The input values on I1, I2, and I3 will then be read into the flip-flops in the second row.

As may be seen, this is a complete memory, fully capable of reading and writing. The memory will store data for an indefinite period and will operate as fast as the gates and flip-flops will permit. There is only one problem with the memory: its complexity. The basic memory cell (the flip-flop with its associated circuitry) is complicated, and for large memories the decoder will be large in size.
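The behavior of this small linear-select memory (one decoder line per word, READ/WRITE gating on the selected row) can be mirrored in software. The following Python model is an illustrative sketch, not part of the original text:

```python
class LinearSelectMemory:
    """Model of the 4-word, 3-bit linear-select memory of Fig. 7-4."""

    def __init__(self, words=4, bits=3):
        self.cells = [[0] * bits for _ in range(words)]  # rows of flip-flops

    def decode(self, mar):
        # The decoder raises exactly one of its output lines;
        # the selected line number equals the MAR value.
        return mar

    def write(self, mar, data):
        # WRITE = 1: gate the input lines I1-I3 into the selected row.
        row = self.decode(mar)
        self.cells[row] = list(data)

    def read(self, mar):
        # READ = 1: the selected row's outputs appear on O1-O3
        # via the wired-OR output lines.
        row = self.decode(mar)
        return list(self.cells[row])

mem = LinearSelectMemory()
mem.write(0b01, [1, 1, 0])  # store 110 at address 01
print(mem.read(0b01))       # [1, 1, 0]
```

Reading leaves the row untouched, matching the nondestructive read described earlier.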
In order to further explore memory organization, we will first examine in more detail the selection schemes and decoder constructions that are commonly used, and finally some examples of IC memories now in production.

7-4 DECODERS

An important part of the system which selects the cells to be read from and written into is the decoder. This particular circuit is called a many-to-one decoder, a decoder matrix, or simply a decoder, and has the characteristic that for each of the possible 2^n binary input numbers which can be taken by the n input cells, the matrix will have a unique one of its 2^n output lines selected.

Figure 7-5(b) shows a decoder which is completely parallel in construction and designed to decode three flip-flops. There are then 2^3 = 8 output lines, and for each of the eight states which the three inputs (flip-flops) may take, a unique output line will be selected. This type of decoder is often constructed using diodes (or transistors) in the AND gates. The rule used for the binary decoding matrix is: the number of diodes (or transistors) in each AND gate is equal to the number of inputs to that AND gate. For Fig. 7-5(b) this is equal to the number of input lines (flip-flops which are being decoded). Further, the number of AND gates is equal to the number of output lines, which is equal to 2^n (n is the number of input flip-flops being decoded). The total number of diodes is therefore equal to n × 2^n, and in Fig. 7-5(b) 24 diodes are required to construct the network.

As may be seen, the number of diodes required increases sharply with the number of inputs to the network. For instance, to decode an eight-flip-flop register, we would require 8 × 2^8 = 2048 diodes if the decoder were constructed in this manner. As a result there are several other types of structures which are often used in building decoder networks. One such structure, called a tree-type decoding network, is shown in Fig. 7-6. This tree network decodes four flip-flops and therefore has 2^4 = 16 output lines, a unique one of which is selected for each state of the flip-flops. An examination will show that 56 diodes are required to build this particular network, while 2^4 × 4 = 64 diodes would be required to build the parallel decoder type shown in Fig. 7-5.

[FIG. 7-6 Tree decoder.]

Still another type of decoder network is shown in Fig. 7-7. It is called a balanced multiplicative decoder network. Notice that this network requires only 48 diodes. It can be shown that the type of decoder network illustrated in Fig. 7-7 requires the minimum number of diodes for a complete decoder network.

The difference in the number of diodes, or decoding elements, needed to construct a network such as that shown in Fig. 7-7, compared with those in Figs. 7-5 and 7-6, becomes more significant as the number of flip-flops to be decoded increases. The network shown in Fig. 7-5, however, has the advantage of being the fastest and the most regular in construction of the three types of networks.

Having studied the three types of decoding matrices which are now used in digital machines, we will henceforth simply draw the decoder networks as a box with n inputs and 2^n outputs, with the understanding that one of the three types of circuits shown in Figs. 7-5 to 7-7 will be used in the box. Often only the uncomplemented inputs are connected to decoders, and inverters are included in the decoder package. Then a three-input (or three-flip-flop) decoder will have only three input lines and eight outputs.
[FIG. 7-7 Balanced decoder, built from two levels of AND gating over X1X2 and X3X4 product terms.]

7-5 DIMENSIONS OF MEMORY ACCESS

The memory organization in Fig. 7-4 has a basic linear-select (one-dimensional) selection system. This is the simplest organization. However, the decoder in the selection system becomes quite large as the memory size increases. As an example we assume a parallel decoder as shown in Fig. 7-5(b). These are widely used in IC packages because of their speed and regular (symmetric) construction.
Consider now a decoder for a 4096-word memory, a common size for an IC package. There will be 12 inputs per AND gate, and 4096 AND gates are required. If a diode (or transistor) is required at each AND gate input, then 12 × 4096 = 49,152 diodes (or transistors) will be required. This large number of components is the primary objection to this memory organization.

Let us now consider a two-dimensional selection system. First we will need to add another SELECT input to our basic memory cell. This is shown in Fig. 7-8. Now both the SELECT 1 and the SELECT 2 inputs must be 1s for a flip-flop to be selected. Figure 7-9 shows a two-dimensional memory selection system using this cell.
Two decoders are required for this memory, which has 16 words of only 1 bit per word (for clarity of explanation). The MAR has 4 bits and thus 16 states. Two of the MAR inputs go to one decoder and two to the other.

[FIG. 7-8 Two-dimensional memory cell with SELECT 1 and SELECT 2 inputs.]

To illustrate the memory's operation, if the MAR contains 0111, then the value 01 goes to the left decoder and 11 goes to the upper decoder. This will select the second row (line) from the left decoder and the rightmost column from the top decoder. The result is that only the cell (flip-flop) at this intersection of the second row and the rightmost column will have both its SELECT lines (and as a result its AND gates) enabled. As a result, only this particular single cell will be selected, and only this flip-flop can be read from or written into.

As another example, if the MAR contains 1001, the line for the third row of the left decoder will be a 1, as will the second column line. The memory cell at the intersection of this row and column will be enabled, but no other cell will be enabled. If the READ line is a 1, the enabled cell will be read from; if the WRITE line is a 1, the enabled cell will be written into.

Now let us examine the number of components used. If a 16-word 1-bit memory were designed using the linear-select or one-dimensional system, then a decoder with 16 × 4 inputs, and therefore 64 diodes (or transistors), would be required. For the two-dimensional system two 2-input 4-output decoders are required, each requiring 8 diodes (transistors); so 16 diodes are required for both decoders.
For a 4096-word 1-bit-per-word memory the numbers are more striking. A 4096-word linear-select (one-dimensional) memory requires a 12-bit MAR. This decoder therefore requires 4096 × 12 = 49,152 diodes or transistors. The two-dimensional selection system would have two decoders, each with six inputs. Thus each would require 2^6 × 6 = 384 diodes or transistors, that is, a total of 768 diodes or transistors for the decoders. This is a remarkable saving, and it extends to even larger memories.
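These component counts follow directly from the n × 2^n rule applied once to the full address, or twice to each half of it. A short illustrative Python check:

```python
def linear_select_diodes(address_bits):
    # One parallel decoder spans all address bits.
    return address_bits * 2 ** address_bits

def two_dimensional_diodes(address_bits):
    # Address bits split evenly between a row decoder and a column decoder.
    half = address_bits // 2
    return 2 * (half * 2 ** half)

print(linear_select_diodes(4))     # 64  (16-word memory)
print(two_dimensional_diodes(4))   # 16
print(linear_select_diodes(12))    # 49152  (4096-word memory)
print(two_dimensional_diodes(12))  # 768
```

The gap widens exponentially: halving the exponent in 2^n shrinks each decoder far faster than doubling the number of decoders grows it.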
[FIG. 7-9 Two-dimensional IC memory organization: a row decoder (sometimes called the X decoder) and a column decoder (sometimes called the Y decoder); all cell inputs are connected to the WRITE INPUT line, and all cell outputs are connected to the READ OUT point.]

In order to make a memory with more bits per word, we simply make a memory like that shown in Fig. 7-9 for each bit in the word (except that only one MAR and the original two decoders are required). The above memory employs a classic two-dimensional selection system. This is the organization used in most core memories and in some IC memories. Figure 7-10 shows an IC memory with 256 bits on a single chip. As can be seen, this is a two-dimensional select memory.

In a two-dimensional memory, however, simplification in decoder complexity is paid for with cell complexity. In some cases this extra cell complexity is inexpensive, but it is often a problem, and so a variation of this scheme is used.
[FIG. 7-10 Single-chip 256-bit memory (courtesy of Intel Corp.): package outline, pin configuration, and block diagram of a 256-bit RAM plane with address decode circuit, chip select, R/W, data-in, and data-out lines.]

A variation on the basic two-dimensional selection system is illustrated in Fig. 7-11. This memory uses two decoders, as in the previous scheme; however, the memory cells are basic memory cells, as shown in Fig. 7-3. The selection scheme uses gating on the READ and WRITE inputs to achieve the desired two-dimensionality.
Let us consider a WRITE operation. First assume that the MAR contains 0010. This will cause the 00 output from the upper decoder to be a 1, selecting the top row of memory cells. In the lower decoder the 10 output will become a 1, and this is gated with the WRITE signal in an AND gate near the bottom of the diagram, turning the W inputs on in the third column. As a result, for the memory cell in the top row and third column both the S input and the W input will be a 1. For no other memory cell will both S and W be a 1, and so no other memory cell will have its RS flip-flop set to the input value. (Notice that all cells are connected to the input value Di.) Consideration of other values for the MAR will indicate that for each value a unique memory cell will be selected for the write operation. Therefore for each MAR state only one memory cell will be written into.

[FIG. 7-11 IC memory chip layout: a row decoder (sometimes called the X decoder) and a column decoder (sometimes called the Y decoder), with all cell inputs connected to the Di line.]

The read operation is similar. If the MAR contains 0111, then the upper decoder's 01 line will be a 1, turning the S inputs on in the second row of memory cells. The lower decoder will have a 1 on its lowest output line, which turns on the rightmost AND gate in the output lines. (Again, the memory cells are wire-ORed by having their outputs connected together, this time in groups of four.) Only the second cell down in the rightmost column has its output enabled, however, and so the rightmost AND gate will have as output the value in that cell. This value then goes through the OR gate and the AND gate at the bottom of the diagram, the AND gate having been turned on by the READ signal.

Examination will show that each input value from the MAR will select a unique memory cell to be read from, and that cell will be the same as the one that would have been written into if the operation were a write operation. This is basically the organization used by most IC memories at this time. The chips contain up to 64K bits. The number of rows versus the number of columns in an array is determined by the designers, who decide upon the numbers that will reduce the overall component count. All the circuits necessary for a memory are placed on the same chip, except for the MAR flip-flops, which quite often are not placed on the chip; instead the address inputs go directly to the decoders. This will be clearer when interfacing with a bus has been discussed.
7-6 CONNECTING MEMORY CHIPS TO A COMPUTER BUS

The present trend in computer connection is to connect the computer central processing unit (CPU), which does the arithmetic, generates the control signals, etc., to the memory by means of a bus. The bus is simply a set of wires which are shared by all the memory elements to be used.

Microprocessors and minicomputers almost always use a bus to interface memory, and in this case the memory elements will be IC chips in containers just like those shown in Fig. 7-10. The bus used to connect the memories generally consists of (1) a set of address lines to give the address of the word in memory to be used (these are effectively an output from a MAR on the microprocessor chip); (2) a set of data wires to input data from the memory and output data to the memory; and (3) a set of control wires to control the read and write operations.
Figure 7-12 shows a bus for a microcomputer. In order to simplify drawings and clarify explanations, we will use a memory bus with only three address lines, three output data lines, two control signals, and three input data lines.

[FIG. 7-12 Bus for a CPU/memory computer system: (a) bus lines; (b) bus/chip organization.]

The memory to be used is therefore an 8-word 3-bit-per-word memory. The two control signals work as follows. When the R/W line is a 1, the memory is to be read from; when the R/W line is a 0, the memory is to be written into. The MEMORY ENABLE signal ME is a 1 when the memory is either to be read from or to be written into; otherwise it is a 0.

The IC memory package to be used is shown in Fig. 7-13. Each IC package has three address inputs A0, A1, and A2, a R/W input, an input bit Di, an output bit Do, and a CHIP SELECT bit CS. Each package contains an 8-word 1-bit memory.
[FIG. 7-13 IC package and block diagram symbol for the RAM chip: (a) pin configuration; (b) logic symbol.]

The IC memory chip works as follows. The address lines A0, A1, and A2 must be set to the address to be read from or written into (refer to Fig. 7-13). If the operation is a READ, the R/W line is set to a 1, and the CS line is brought to a 0 (the CS line is normally a 1). The data bit may then be read on line Do.
Certain timing constraints must be met, however, and these will be supplied by the IC manufacturer. Figure 7-14 shows several of these. The value T_R is the minimum cycle time a read operation requires. During this period the address lines must be stable. The value T_A is the access time, which is the maximum time from when the address lines are stable until data can be read from the memory. The value T_CO is the maximum time from when the CS line is made a 0 until data can be read. The bus timing must accommodate these times.
R/W
\
1
means read from memory
CE A
means enable chip (lowered after address lines are
y
Dj^ not used
J
Do
—
important that the
lines are set
R/W: A
f
CE
Address
It is
Memory
V
in
place
1
set)
read cycle (or 0)
output
on bus
Tco-
read
(q)
^ln~
~\
cycle
r-
/
>
\
Address
lines are set
A,RIW~~\ CE
r\
R/IV:
/
Di,__v
CE
A
.
enables chip set to value to
is
be written into chip
n
D(, not used T|,, is
Tf-K;
(6)
FIG, 7-14
means write
memory
D/Y
\
.4
into
Timing
for bus
WRITE
IC memory
(a)
in
minimum
is
write operation
cycle time for
minimum time CE must
cycle
READ
A
cycle (b)
WRITE
cycle
write
be
THE MEMORY ELEMENT
7-19
bus not operate too fast for the chip and that the bus wait for at least the time T^ after setting its address lines before reading and wait at least Tco after lowering the CS line before reading. Also, the address line must be held stable for at least the period 7,^.
For a lines,
WRITE operation the address to be written into is set up on the address
the
R/W
line
is
are placed on the D,
The time is
made
a 0,
CS
is
brought down, and the data to be read
line.
interval Tyy
is
the
minimum
time for a
WRITE
cycle; the time
T^
the time the data to be written into the chip must be held stable. Different
types of memories have different timing constraints which the bus must accom-
We
assume that our bus meets these constraints. memory from these IC packages (chips), the interconnection scheme in Fig. 7-15 is used. Here the address line to each chip is connected to a corresponding address output on the microcomputer bus. modate.
will
In order to form an 8-word 3-bit
The CHIP
MEMORY
ENABLE input of CS
of each chip is connected to the from the microprocessor via an inverter, and the R/W bus line is connected to the R/W input on each chip. If the microprocessor CPU wishes to read from the memory, it simply places the address to be read from on the address lines, puts a 1 on the R/W line, and
ENABLE
then raises the line,
ME
output
and the
ME
CPU
a chip's output
is
line.
can read these values on
FIG. 7-15
selected bit onto
its //, I2,
and
Is lines.
its
output
(Notice that
a bus input.)
Similarly, to write a
Bit
Each chip then reads the
word
into the
1
Interfacing chips to a bus
memory, the
Bit 2
CPU
places the address to
Bit 3