
ENCYCLOPEDIA OF GRAPHICS FILE FORMATS

Includes a CD-ROM with graphics file format specifications, code, images, and software packages for PC, UNIX, and Macintosh platforms

JAMES D. MURRAY & WILLIAM VANRYPER

O'REILLY & ASSOCIATES, INC.

Encyclopedia of Graphics File Formats

James D. Murray and William vanRyper

O'Reilly & Associates, Inc.
103 Morris Street, Suite A
Sebastopol, CA 95472

Encyclopedia of Graphics File Formats
by James D. Murray and William vanRyper

Copyright © 1994 O'Reilly & Associates, Inc. All rights reserved.
Printed in the United States of America.

Editor: Deborah Russell
Production Editor: Ellen Siever

Printing History:
   July 1994: First Edition

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and O'Reilly & Associates, Inc., was aware of a trademark claim, the designations have been printed in caps or initial caps. Specific copyright notices and restrictions for specification documents and programs included on the CD-ROM accompanying this book are included on that CD-ROM. All of the specification documents and programs described in this book and provided by vendors for inclusion on the CD-ROM are subject to change without notice by those vendors. While every precaution has been taken in the preparation of this book, the publisher assumes no responsibility for errors or omissions, or for damages resulting from the use of the information contained herein.

O'Reilly & Associates, Inc., and the authors specifically disclaim all other warranties, express or implied, including but not limited to implied warranties of merchantability and fitness for a particular purpose with respect to the CD-ROM, the programs therein contained, the program listings in the book, and/or the techniques described in the book, and in no event shall the publisher or authors be liable for any loss of profit or any other commercial damage, including but not limited to special, incidental, consequential, or other damages.

This book is printed on acid-free paper with 50% recycled content, 10-15% post-consumer waste. O'Reilly & Associates is committed to using paper with the highest recycled content available consistent with high quality.

ISBN: 1-56592-058-9

[10/94]

Table of Contents

Preface

Part One: Overview

1. Introduction
   Why Graphics File Formats?
   The Basics
   Graphics Files
   Graphics Data
   Types of Graphics File Formats
   Elements of a Graphics File
   Converting Formats
   Compressing Data
   Format Summaries

2. Computer Graphics Basics
   Pixels and Coordinates
   Pixel Data and Palettes
   Color
   Overlays and Transparency
   For Further Information

3. Bitmap Files
   How Bitmap Files Are Organized
   Header
   Examples of Bitmap Headers
   Bitmap Data
   Footer
   Other Bitmap File Data Structures
   Other Bitmap File Features
   Pros and Cons of Bitmap File Formats

4. Vector Files
   Vector Versus Bitmap Files
   What Is Vector Data?
   Vector Files and Device Independence
   Sources of Vector Format Files
   How Vector Files Are Organized
   Vector File Size Issues
   Scaling Vector Files
   Text in Vector Files
   Pros and Cons of Vector Files

5. Metafiles
   Platform Independence?
   How Metafiles Are Organized
   Pros and Cons of Metafiles

6. Platform Dependencies
   Byte Order
   File Size and Memory Limitations
   Floating-point Formats
   Bit Order
   Filenames
   For Further Information

7. Format Conversion
   Is It Really Possible?
   Don't Do It If You Don't Need to
   ... But If You Do
   Other Format Conversion Considerations

8. Working With Graphics Files
   Reading Graphics Data
   Writing Graphics Data
   Test Files
   Designing Your Own Format
   For Further Information

9. Data Compression
   Introduction
   Data Compression Terminology
   Pixel Packing
   Run-length Encoding (RLE)
   Lempel-Ziv-Welch (LZW) Compression
   CCITT (Huffman) Encoding
   JPEG Compression
   For Further Information About Data Compression

10. Multimedia
   Beyond Traditional Graphics File Formats
   Multimedia File Formats
   Types of Data
   For Further Information

Part Two: Graphics File Formats

   Adobe Photoshop
   Atari ST Graphics Formats
   AutoCAD DXF
   BDF
   BRL-CAD
   BUFR
   CALS Raster
   CGM
   CMU Formats
   DKB
   Dore Raster File Format
   Dr. Halo
   Encapsulated PostScript
   FaceSaver
   FAX Formats
   FITS
   FLI
   GEM Raster
   GEM VDI
   GIF
   GRASP
   GRIB
   Harvard Graphics
   Hierarchical Data Format
   IGES
   Inset PIX
   Intel DVI
   Interchange File Format
   JPEG File Interchange Format
   Kodak Photo CD
   Kodak YCC
   Lotus DIF
   Lotus PIC
   Lumena Paint
   Macintosh Paint
   Macintosh PICT
   Microsoft Paint
   Microsoft RIFF
   Microsoft RTF
   Microsoft SYLK
   Microsoft Windows Bitmap
   Microsoft Windows Metafile
   MIFF
   MPEG
   MTV
   NAPLPS
   NFF
   OFF
   OS/2 Bitmap
   P3D
   PBM, PGM, PNM, and PPM
   PCX
   PDS
   Pictor PC Paint
   Pixar RIB
   Plot-10
   POV
   Presentation Manager Metafile
   PRT
   QRT
   QuickTime
   Radiance
   Rayshade
   RIX
   RTrace
   SGI Image File Format
   SGI Inventor
   SGI YAODL
   SGO
   Sun Icon
   Sun Raster
   TDDD
   TGA
   TIFF
   TTDDD
   uRay
   Utah RLE
   VICAR2
   VIFF
   VIS-5D
   Vivid and Bob
   Wavefront OBJ
   Wavefront RLA
   WordPerfect Graphics Metafile
   XBM
   XPM
   XWD

Part Three: Appendices

Appendix A: What's On the CD-ROM?
   Using the CD-ROM
   Contents of the CD-ROM
   Using the CD-ROM With MS-DOS
   Using the CD-ROM With Windows
   Using the CD-ROM With OS/2
   Using the CD-ROM With UNIX
   Using the CD-ROM With the Macintosh
   Software Summaries
   What To Do If You Have Problems?

Appendix B: Graphics and Imaging Resources
   Commercial Resources
   Internet Access Methods
   General Imaging Resources
   Chemical and Biomedical Imaging Resources
   Meteorological, Oceanographic, and Geophysical Imaging Resources
   Astronomy and Space Exploration Imaging Resources

Glossary

Index

Figures

1-1    The graphics production pipeline
1-2    Vector data
1-3    Bitmap data
1-4    Object data
2-1    Physical and logical pixels
2-2    Displaying data with few colors on a device with many colors
2-3    Displaying data with many colors on a device with few colors
2-4    Using a palette to specify a color
2-5    Types of palettes
3-1    Organization of pixel data into scan lines
3-2    Organization of pixel data into color planes
3-3    Examples of bitmap data organization (contiguous scan lines, strips, and tiles)
4-1    Gradient fill
9-1    Pixel packing
9-2    Run-length encoding variants
9-3    Basic run-length encoding flow
9-4    Bit-, byte-, and pixel-level RLE schemes
9-5    RLE scheme with three bytes
9-6    RLE scheme with 1- and 2-byte vertical replication packets
9-7    CCITT Group 3 encoding
9-8    CCITT Group 3 encoding (EOL code)
9-9    CCITT Group 3 encoding (RTC code)
9-10   Group 3 CCITT encoding (TIFF Compression Type 2)
9-11   JPEG compression and decompression
GIF-1  GIF87a file layout
GIF-2  Arrangement of interlaced and non-interlaced scan lines
GIF-3  Layout of a GIF89a file
TGA-1  Run-length encoding packet types
TGA-2  Pixel data formats
TIFF-1 Three possible physical arrangements of data in a TIFF file
TIFF-2 Logical organization of a TIFF file
TIFF-3 Format of an Image File Directory
A-1    CD-ROM directory structure
A-2    BBEdit files
A-3    JPEGView files

Tables

1-1    Graphics File Formats Described in This Book
1-2    Format Names, Cross-Referenced to Articles in This Book
2-1    Equivalent RGB, CMY, and HSV Values
CALS Raster-1  Typical CALS Raster Pel Count Values
TGA-1  TGA Palette Entry Sizes
TIFF-1 TIFF Tag Types Arranged Alphabetically by Name
TIFF-2 Minimum Required Tags for Each TIFF Class
A-1    formats Directories
A-2    software Directories

Preface

Why did we write this book? The short answer is that graphics file formats are immortal. Like it or not, data files from the dawn of the computer age are still with us, and they're going to be around for a long time to come. Even when the way we think about data itself changes (as it inevitably will), hundreds of millions of files will still be out there in backup storage. We'll always need a way to read, understand, and display them.

Computer technology evolves rapidly. Hardware, particularly on the desktop, turns over every year or so. Software can become obsolete overnight with the release of a new version. The one thing that remains static is data, which for our purposes means information stored in data files on disk or tape. In this book we're interested in one specific type of data—that used for the interchange and reconstruction of graphics images.

Graphics data files are structured according to specific format conventions, which are (or should be) recorded in format specification documents written and maintained by the creator of the format. Not all formats are documented, however, and some documents are so sparse, poorly written, or out of date that they are essentially useless. Moreover, some format specifications are very difficult to obtain: the creator of the format might have moved; the format might have been sold to another organization; or the organization that owns the format might not actively support or distribute it. These facts make it difficult for the person who needs to find out about the specifics of a particular graphics file format to locate and understand the file format specification.

We wrote this book because we saw a need for a centralized source of information, independent of the commercial marketplace, where anyone could obtain the information needed to read graphics files.

When we set out to write this book, we asked the obvious questions: How would we implement an existing format? What resources would we need? Ideally, we would like to have on hand a good book on the subject, and perhaps some working code. Barring that, we'd make do with the original format specification and some advice. Barring that, we'd scrape by with the format specification alone. This book provides as much of this as possible; the format specification is here in most cases, as is some code—even some advice, which, because it's coming from a book, you're free to take or leave as you choose.

To give you some idea about what was on our minds during the planning of this book, we'd like to mention some issues that frequently come up for programmers who need to learn about and implement file formats. In the course of writing this book, both of us (as consultants and veteran users of networked news and bulletin board systems) talked with and observed literally hundreds of other programmers. The following is a sampling of questions frequently asked on the subject of graphics file formats, and comments on how we have addressed them in this book:

"How can I get a copy of the written specification for format XYZ?" Rarely does a day go by without a request for a written format specification—TIFF, GIF, FaceSaver, QRT, and many, many more. Unfortunately, although a few sparse collections exist online and in books, there is no single source for even the most common format specifications. One of the goals of this book is to gather as many as possible of the original specification documents in one place. "I'm trying to implement specification XYZ. I'm having trouble with ABC." Programmers almost always believe that only the specification document is needed in order to implement a file format. Sadly, if you read a few format specifications, you'll soon discover that there is no law requiring that documentation be written clearly. Specifications, like all technical documents, are written by people with varying degrees of literacy, knowledge, and understanding of the subject in question. Consequently, they range from clearly written and very helpful to unorganized and confusing. Some documents, in fact, are nearly useless. The programmer is eventually forced to become conversant with the oral tradition.

Even if the specification document is well done, written between the lines is a complex set of assumptions about the data. How complicated can a format be? After hours of fiddling with color triples, index map offsets, page tables, multiple header versions, byte-order problems, and just plain bad design, you may well find yourself begging for help, while the clock counting your online dollars ticks on. Another goal of this book is to provide a second opinion for use by programmers who find themselves confused by the contents of the documents.

"What does Z mean?"

In this case, Z is basic technical graphics information. Everything a programmer needs to know to read, write, encode, and decode a format is in the specification document, right? Unfortunately, writers of format specifications often use vocabulary foreign to most programmers. The format might have been created, for instance, in support of an application that used terminology from the profession of the target users. The meaning of a term might have changed since the time the format was written, years ago. You might also find that different format specifications have different names for the same thing (e.g., color table, color map, color palette, look-up table, color index table, and so on). In this book, we provide basic guidance whenever possible.

"What is an X.Y file?"

If you scan the computer graphics section of any online service, bulletin board system, or news feed, you will find numerous general questions from users about graphics files, the pros and cons of each format, and sources of image files. Surprisingly, there is no single source of information on the origin, use, and description of most of the graphics file formats available today. Some of this information, particularly on the more common formats (e.g., TIFF, GIF, PCX), is scattered through books and magazine articles published over the last ten years. Other information on the less common formats is available only from other programmers, or, in some extreme cases, from the inventor of the format. Another goal of this book is to include historical and contextual information, including discussions of the strengths and weaknesses of each format.

"Is there a newer version of the XYZ specification than version 1.0?"

Occasionally, this question comes from someone who, specification in hand, just finished writing a format reader only to have it fail when processing sample files that are known to be good. The hapless programmer no doubt found a copy of the format specification, but not, of course, the latest revision. Another of our goals is to provide access to the latest format revisions in this book and to update this information periodically.

"How can I convert an ABC file to an XYZ file?"

Programmers and graphic designers alike are often stumped by this question. They've received a file from a colleague, an author, or a client, and they need to read it, print it, or incorporate it in a document. They need to convert it to something their platform, application, or production environment knows how to deal with. If this is your problem, you'll find this book helpful in a number of ways. In the first place, it will give you the information you need to identify what this file is and what its conversion problems are. We'll give you specific suggestions on how to go about converting the file. Most importantly, on the CD-ROM that accompanies this book, we've included a number of software packages that will convert most graphics files from one format to another. Whether you are operating in an MS-DOS, Windows, OS/2, UNIX, or Macintosh environment, you should be able to find a helpful tool.

About This Book and the CD-ROM

We'd like to make it easier for you to understand and implement the graphics file formats mentioned in this book. Where does information on the hundreds of graphics file formats in use today come from? Basically, from four sources:

• Format specifications. These should be the ultimate references, shouldn't they? Unfortunately, specifications aren't always available (or useful!). Nevertheless, they are the starting point for information about file formats.

• Secondary sources (magazine articles, books). These are most useful when the specification isn't handy and when the author can provide some kind of insight or relevant experience.

• Sample code. This is all we usually want, isn't it? Unfortunately, the sample code may not work right, may be out of date, or may be too platform-specific for your present needs.

• Sample images. Images fully conforming to the format specification might not seem like a source of information until you actually need something on which to test your application.

What we've tried to do is to collect these four elements together in one place. Of course, not all were available for every format, and sometimes we weren't allowed to include the original specification document on the CD-ROM that accompanies this book. Nevertheless, we've pulled together all the information available. Taken together, the information provided in this book and in the materials on the CD-ROM should allow you to understand and implement most of the formats.

Our primary goal in writing this book is to establish a central repository of graphics file format specifications. Because the collected specification documents (not to mention the sample images and associated code and software packages!) total in the hundreds of megabytes, the best way to put them in your hands is on a CD-ROM. What this means is that the CD-ROM is an integral part of the book, if only for the fact that all this information couldn't ever be crammed between two covers.


We've written an article describing each graphics file format; this article condenses and summarizes the information we've been able to collect. In some cases this information is extensive, but in other cases it's not much. This is the name of the game, unfortunately. When we do have adequate information, we've concentrated on conveying some understanding of the formats, which in many cases means going through them in some detail. Remember, though, that sometimes the specification document does a better job than we could ever do of explaining the nitty-gritty details of the format.

On the CD-ROM, you'll find the original format specifications (when available and when the vendors gave us permission to include them). If we know how to get the specifications, but couldn't enlist the aid of the vendors, we tell you where to go to find them yourself. Also on the CD-ROM is sample code that reads and writes a variety of file formats and a number of widely used third-party utilities for file manipulation and conversion. Finally, we've included sample images for many formats.

Who Is the Book For?

This book is primarily for graphics programmers, but it's also for application programmers who need to become graphics programmers (if only for a little while). It's also for people who need a quick way to identify a graphics file of unknown origin.

If you're not a graphics programmer, but want to get up to speed quickly, you'll find that Part I of the book requires little prior knowledge of computer graphics. It will help you become familiar with concepts associated with the storage of graphics data. In fact, a working knowledge of a programming language is useful, but not absolutely essential, if you're only looking for the big picture.

But note that this book is also written to accommodate the specific needs of other types of readers. If you just want some background on graphics file formats, you might want to read Part I and refer, as needed, to the articles in Part II and the appendices in Part III. If you're in search of implementation guidance, you will want to refer to the articles and example code. Of course, if you're a computer graphics professional, you might be interested primarily in the specification documents and tools on the CD-ROM accompanying this book.

In the unlikely event that you are creating your own new graphics file format, we fervently hope that this book provides you with some perspective on your task, if only by exhibiting the decisions, good and bad, that are frozen in the formats described in these pages.


How to Use the Book

This book is divided into three parts.

Part One, Overview, is an introduction to those computer graphics concepts that are especially helpful when you need to work with graphics file formats.

• Chapter 1, Introduction, introduces some basic terminology, and gives an overview of computer graphics data and the different types of graphics file formats (bitmap, vector, metafile, and the less frequently used scene description, animation, and multimedia) used in computer graphics. This chapter also lists and cross-references all of the formats described in this book.

• Chapter 2, Computer Graphics Basics, discusses some concepts from the broader field of computer graphics necessary for an understanding of the rest of the book.

• Chapter 3, Bitmap Files, describes the structure and characteristics of bitmap files.

• Chapter 4, Vector Files, describes the structure and characteristics of vector files.

• Chapter 5, Metafiles, describes the structure and characteristics of metafiles.

• Chapter 6, Platform Dependencies, describes the few machine and operating system dependencies you will need to understand.

• Chapter 7, Format Conversion, discusses issues to consider when you are converting between the different format types (e.g., bitmap to vector).

• Chapter 8, Working with Graphics Files, describes common ways of working with graphics files and gives some cautions and pointers for those who are designing their own graphics file formats.

• Chapter 9, Data Compression, describes data compression, particularly as compression techniques apply to graphics data and the graphics files described in this book.

• Chapter 10, Multimedia, surveys multimedia formats and issues.

Part Two, Graphics File Formats, describes the graphics file formats themselves. There is one article per format or format set, and articles are arranged alphabetically. Each article provides basic classification information, an overview, and details of the format. In many cases we've included short code examples. We've also indicated whether the specification itself (or an article that describes the details of the format) is included on the CD-ROM that accompanies this book, as well as code examples and images encoded in that format. Also provided in the articles are references for further information.

Part Three, Appendices, contains the following material:

• Appendix A, What's On the CD-ROM?, tells you what's on the CD-ROM accompanying the book and how to access this information.

• Appendix B, Graphics and Imaging Resources, suggests additional online sources of information.

We also include a Glossary, which gives definitions for terms in the text. "For Further Information" sections throughout the book list suggestions for further reading.

Conventions Used in This Book

We use the following formatting conventions in this book:

• Bold is used for headings in the text.

• Italics are used for emphasis and to signify the first use of a term. Italics are also used for email addresses, FTP sites, directory and filenames, and newsgroups.

• All code and header examples are in Constant Width.

• All numbers in file excerpts and examples are in hexadecimal unless otherwise noted.

• All code and header examples use the following portable data types:

     CHAR      8-bit signed data
     BYTE      8-bit unsigned data
     WORD      16-bit unsigned integer
     DWORD     32-bit unsigned integer
     LONG      32-bit signed integer
     FLOAT     32-bit single-precision floating-point number
     DOUBLE    64-bit double-precision floating-point number

All source code that we produced is written in ANSI C. (This is relevant only if you are still using one of the older compilers.)
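For reference, here is a minimal sketch of how these portable types might be declared in ANSI C, assuming a platform with 8-bit chars, 16-bit shorts, and 32-bit longs. This is an illustration only; the correct typedefs depend on your particular compiler and platform.

    /* One possible ANSI C declaration of the portable data types above.
    ** This sketch assumes 8-bit chars, 16-bit shorts, and 32-bit longs;
    ** check your compiler's integer sizes before using it.             */
    typedef signed char     CHAR;     /* 8-bit signed data         */
    typedef unsigned char   BYTE;     /* 8-bit unsigned data       */
    typedef unsigned short  WORD;     /* 16-bit unsigned integer   */
    typedef unsigned long   DWORD;    /* 32-bit unsigned integer   */
    typedef long            LONG;     /* 32-bit signed integer     */
    typedef float           FLOAT;    /* 32-bit single precision   */
    typedef double          DOUBLE;   /* 64-bit double precision   */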

Terminology of Computer Graphics

Computer graphics is in flux, and people working in the field are still busy creating vocabulary by minting new words. But they're also mutating the meanings of older words—words that once had a clear definition and context.

Computer graphics is also an emerging field, fertilized by electronics, photography, film, animation, broadcast video, sculpture, and the traditional graphic arts. Each one of these fields has its own terminology and conventions, which computer graphics has inherited to some degree.

Complicating matters, we're now in the era of electronic graphic arts. Color display adapters and frame buffers, paint and imaging programs, scanners, printers, video cameras, and video recorders are all being used in conjunction with the computer for the production of both fine and commercial art. A glance at any glossy magazine ad should give you some idea about how pervasive the mixing of digital and traditional media has become, if only because the overwhelming majority of magazines are now digitally composed. Indeed, the distinctions between traditional and computer art are becoming blurred.

Today, we can find graphic artists producing work in traditional media, which is scanned into digital form, altered, re-rendered with a computer, and then distributed as original. While this is not a problem in itself, it nonetheless accelerates the reinjection of traditional terminology into computer graphics, countering any trend toward standardization. This will inevitably cause contradictions. Some are already apparent, in fact, and you'll probably notice them when we discuss the details of the formats.

There is no one, single, consistent set of terms used across all of computer graphics. It is customary to cite standard references (like the classic Computer Graphics: Principles and Practice by James D. Foley, Andries van Dam, et al.) when arguing about terminology, but this approach is not always appropriate. Our experience is that usage in this field both precedes and succeeds definition. It also proceeds largely apart from the dictates of academia. To make matters worse, the sub-field of graphics file formats is littered with variant jargon and obsolete usage. Many of the problems programmers have implementing formats, in fact, can be traced to terminological misunderstandings.

In light of this, we have chosen to use a self-consistent terminology that is occasionally at odds with that of other authors. Sometimes, we have picked a term because it has come into common use, displacing an older meaning. An example of this is bitmap, which is now often used as a synonym for raster, making obsolete the older distinction between bitmap and pixelmap. Occasionally, we have been forced to choose from among a number of terms for the same concept. Our decision to use the term palette is one example of this.

For some of the same reasons, we use the term graphics, and avoid graphic and graphical. We all have to face up to the fact that the field is known as computer graphics, establishing a persistent awkwardness. We have chosen to use graphics as a noun as well as an adjective.


We believe that the choices we made represent a simplification of the terminology, and that this shouldn't be a problem if you're already familiar with alternate usage. Should you have any questions in this area, our definitions are available in the Glossary.

About the File Format Specifications

In preparing this book, we have made a monumental effort to collect, all in one place, the myriad graphics file format specifications that have, until now, been floating on the Internet, hiding in the basements of various organizations, and gathering dust on individual application authors' bookshelves and in their private directories. We've done our best to locate the specifications and their caretakers (perhaps the original author, and perhaps the vendor that now maintains or at least owns the specification) and to obtain permission to include these documents on the CD-ROM that accompanies this book. In most cases, we have been able to obtain permission, but in some cases we have not.

There were several reasons for our failure to gain permission, some simple and some more complex. Although neither of us is a lawyer (or a bureaucrat!) or is particularly interested in legal issues, we did encounter some legalities while gathering these specifications. Given our special perspective on the world of graphics file formats, we want to share our reactions to these legalities with you—perhaps in the hope that we'll see fewer problems in the future.

Here are the reasons why we couldn't include certain format specifications on the CD-ROM; here, we use the word "caretaker" to indicate either the author or owner of the specification or the organization that now has responsibility for its maintenance or distribution.

• We couldn't find the caretaker. We simply couldn't find out who owned the specification of some of the formats we knew about. This may or may not have been the vendor's fault, but try as we did, we just couldn't find the information. Here's where you can help us. If you know of a format that you yourself find useful, let us know what it is and how you think we might be able to obtain permission to include it in a future edition of this book.

• The caretaker couldn't find the specification. Strange, but true. This happened twice. To be honest, these were both small companies. But still...

• We couldn't get past caretaker bureaucracy. In some cases, we simply couldn't get through to the correct person in the organization in 18 months of trying. We know it's hard to believe. It seems that you could walk into any installation and in a few minutes figure out who knows what and where they are. We thought so too before we started this project. In fact, executive management at several vendors professed a willingness to provide us with information, but simply couldn't figure out how to do so. Here too, maybe our readers can help...

• The caretaker wouldn't allow us to include the format. In some cases, we found this reasonable. One obvious case was the BRL-CAD specification, which is massive and readily available. The U.S. government will send it to you if you ask for it. Other companies prefer to license the information as part of a developer's kit. Still others wished to restrict the currency of older formats, presumably so they wouldn't be bothered by users calling them up about them. Although we are philosophically in disagreement with vendors in this latter group, we are willing to admit that they have a point. Some companies, however, feel that releasing information on their formats would somehow give their competitors an advantage or would otherwise be to their own disadvantage. We hope they'll change their minds when they see how many other formats are represented here and how useful this compendium is to everyone—programmers and vendors alike. Finally, several vendors have taken the most extreme position that information on their formats is proprietary and have used legal means to prevent developers from working with them. This last case is the most alarming, and we discuss it further below.

We find it hard to understand why vendors have patented their formats and/or used contract law arguments to restrict access to information on their formats. It seems obvious enough to us—and to others in the industry—that the way to get people to purchase your products is to make them easy to work with, not only for users, but for developers, too. Historically, this has been the case. Vendors who locked up their systems necessarily settled for a smaller share of the market.

Although some vendors seem nearly paranoid, we suspect that the majority that restrict their formats don't have a clear idea what they're selling. This is particularly true for vendors of large, vertically integrated systems, where the format plays a small, but key, role in the overall product strategy. Nevertheless, whether justified in our view or not, the restriction is real and serves as an alarming and dangerous precedent. As the various parts of the computer industry converge, competition for market share is necessarily increasing. There is a tendency for entities in the business to grow larger and more corporate. What one company does, its competitors must do to stay in the market. At least they consider doing it.

Now, the reality of the situation is that no vendor can restrict information totally and indefinitely. This is particularly the case with file formats. Vendors generally seek to protect their formats through a combination of encryption and legal remedies. However, a person who buys the application which produces the restricted format as output buys a generator of an infinite number of samples. Because applications are judged, among other things, by the speed with which they produce output, and because encryption and obfuscation schemes take time to both implement and use, not much time and effort has gone into making formats unbeatable. To date, encrypted and obfuscated formats have been pretty easy to crack.

An example that comes to mind is Adobe's Type 1 font format encryption. This was used by Adobe to protect its font outlines, but knowledge of the encryption scheme was fairly widespread in certain commercial circles before Adobe publicized it. Whether this resulted in commercial losses to Adobe from piracy of their outlines is hard to say. It certainly generated a good deal of ill will in the industry and ultimately proved futile.

This being the case, some vendors have taken to the courts to protect their formats. We find this behavior futile and ill-conceived. Even if it has a short-term benefit on revenues, the long-term losses due to a restricted market and developer ill will seem to outweigh this benefit. In a sense, it is a form of monopolistic behavior, or certainly a type of positioning designed to support future monopolistic behavior.

Now, it's a fact of life that almost every format that has made it to the market has been reverse-engineered. This has seldom been for profit—more for the challenge. If you truly have a need to track down the information, it's certain that it can be found through the Internet, provided the information exists. Is it legal to possess this information? This isn't clear at this time. Certainly it's illegal if the information was stolen from a vendor prior to publication. We, by the way, know of no instance where a restricted format has ever been stolen from a vendor. If you use or publicize information a vendor has tried to restrict legally, however, you run the risk of becoming involved in the legal affairs of the vendor, regardless of how that information was obtained. We do wish to point out that the legal way to influence the behavior of a commercial entity is in the marketplace.

The best-known vendor in recent years that has tried to restrict developers, through legal means, from obtaining information on its format is Kodak in the case of Photo CD, as we describe in the Kodak Photo CD article in Part II.

In summary, although we could not include information on several of the formats we might have wished to, that information is almost surely available somehow for you to study so you'll understand more about format construction. However, if the format is legally restricted, you probably can't use it in your application, and there's no use thinking otherwise.


About the Examples

You'll find short code examples associated with some of the articles, but in most cases the full examples are not included in the file format articles themselves. We have done this mainly because many of the code examples are quite long, and we wanted to make it easier to find information in the book. All of the code is included on the accompanying CD-ROM.

The examples are, in most cases, C functions which parse and read (or write) format files. The examples are just that—examples—and are meant to give you a jump-start reading and writing image files. These are generally not standalone applications. In most cases, we wrote this code ourselves during the writing of the book or as part of other projects we've worked on. In some cases, code was contributed by other programmers or by those who own the file format specifications described in this book. We've also referred you to the source code for certain software packages on the CD-ROM that handle specific types of file formats—for example, the libtiff software, which provides extensive code illustrating the handling of TIFF files.

The examples are usually written in a platform-independent manner. There is a bias for integer word lengths of 32 bits or less, for the simple reason that the overwhelming majority of files written to date have been on machines with a 32-bit or smaller word size. All examples and listings in this book and on the code disk are written in ANSI C.
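To give you a feel for the shape of these examples, here is a minimal reader-skeleton sketch in ANSI C. The 8-byte header layout and the magic value below are hypothetical, invented purely for illustration; a real reader checks the identification fields that a particular format's specification defines.

    #include <stdio.h>

    #define HYPOTHETICAL_MAGIC  0x4D58   /* invented value, not a real format's */

    /*
    ** Read a hypothetical 8-byte header: a 2-byte magic value, a 2-byte
    ** version, then 16-bit width and height, all little-endian.
    ** Returns 0 on success, -1 on a truncated file, -2 on a bad magic value.
    */
    int read_header(FILE *fp, unsigned int *width, unsigned int *height)
    {
        unsigned char hdr[8];

        if (fread(hdr, 1, sizeof(hdr), fp) != sizeof(hdr))
            return -1;
        /* Assemble multi-byte values a byte at a time so the code does
        ** not depend on the byte order of the host machine.           */
        if ((unsigned int)(hdr[0] | (hdr[1] << 8)) != HYPOTHETICAL_MAGIC)
            return -2;
        *width  = (unsigned int)(hdr[4] | (hdr[5] << 8));
        *height = (unsigned int)(hdr[6] | (hdr[7] << 8));
        return 0;
    }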

The code is provided for illustrative purposes only. In some cases, we have spent considerable time constructing transparent examples, and it's not necessarily an easy job. So be forewarned: if you use our code, absolutely no attempt has been made to optimize it. That's your job! Can you use our code freely? In most cases, yes. We and O'Reilly & Associates grant you a royalty-free right to reproduce and distribute any sample code in the text and companion disk written by the authors of this book, provided that you:

1. Distribute the sample code only in conjunction with and as a part of your software product.

2. Do not use our names or the O'Reilly & Associates name to market your software product.

3. Include a copyright notice (both in your documentation and as a part of the sign-on message for your software product) in the form: "Portions Copyright (C) 1994 by James D. Murray and William vanRyper"


Please also note the disclaimer statements on the copyright page of this book. Note as well that it is your responsibility to obtain permission for the use of any source code included on the CD-ROM that is not written by the authors of this book.

About the Images

Along with the specification documents and code, we have collected sample images for many of the graphics file formats. You can use these sample images to test whether you are successfully reading or converting a particular file format.

About the Software

We are not the first programmers who have discovered how cumbersome and troublesome graphics file formats can be. We have elected to organize the chaos by writing a book. Other programmers among us have responded by writing software that reads, converts, manipulates, or otherwise analyzes graphics files. Many of them have kindly agreed to let us include their software on the CD-ROM that accompanies this book.

The packages we have elected to include provide an excellent sampling of what is available in the world of publicly available software. Although many of these packages are readily available on the Internet or via various PC bulletin board systems, their noncommercial nature should not in any way suggest that they lack value. These are excellent packages, and we are very grateful that we have been able to include them here. They should help you considerably in your dealings with graphics files.

We include the following software packages on the CD-ROM:

For MS-DOS:
   IMDISP (Image Display)
   ISO MPEG-2 Codec (MPEG Encoder and Decoder)
   pbmplus (Portable Bitmap Utilities)
   VPIC

For Windows:
   Conversion Assistant for Windows
   Paint Shop Pro
   PhotoLab
   PictureMan
   WinJPEG (Windows JPEG)


For OS/2:
   GBM (Generalized Bitmap Module)
   PMJPEG (Presentation Manager JPEG)

For UNIX:
   FBM (Fuzzy Bitmap Manipulation)
   ISO MPEG-2 Codec (MPEG Encoder and Decoder)
   JPEG (Independent JPEG Group's JPEG Library)
   libtiff (TIFF Library)
   pbmplus (Portable Bitmap Utilities)
   SAOimage (Smithsonian Astrophysical Observatory Image)
   xli
   xloadimage
   xv (X Viewer)

For the Macintosh:
   GIFConverter
   GraphicConverter
   JPEGView
   NIH Image
   Sparkle

Which Platforms?

Most of the graphics file formats we describe in Part II of this book originated on a particular platform for use in a particular application—for example, MacPaint on the Macintosh. Despite their origins, most files can be converted readily to other platforms and can be used with other applications, as we describe later in this book. There are a few issues that you need to be aware of, though, having to do with the platform on which you are working or the platform on which a particular graphics file was developed. These issues are summarized in Chapter 6, Platform Dependencies.

Request for Comments

As you might imagine, locating and compiling the information that went into this book was no easy task, and in some cases our way was blocked, as we've discussed earlier. We're sure that some of the information we searched for is out there—somewhere!


We'd like to continue improving future editions, and for this we'll need all the help we can get. In addition to correcting any errors and omissions, we'd particularly like to expand our coverage of some of the more obscure graphics file formats that we might not know about or were unable to collect in time for publication. Also, if we were wrong, we want to know about it. If you're in a position to help us out, or if you have any comments or suggestions on how we can improve things in any way, we'd love to hear from you. Please write, email, or call:

   O'Reilly & Associates, Inc.
   103 Morris Street, Suite A
   Sebastopol, CA 95472
   800-998-9938 (in the U.S. or Canada)
   707-829-0515 (international/local)
   707-829-0104 (FAX)
   Internet: bookquestions@ora.com
   UUCP: uunet!ora!bookquestions

Acknowledgments

Writing this book, and collecting the voluminous material that is included on the CD-ROM that accompanies it, was a huge effort made possible only by the extraordinary generosity and common sense of a great many people. When we set out to collect the vast set of file format specifications in common use today, we frankly expected to be able to persuade only a small fraction of the specification owners and vendors that contributing them freely to this effort would be a good idea for them. We were convinced that it was a good idea, of course, but given the practicalities of bureaucracy, competition, and the many demands on people's time, we were not optimistic about conveying that conviction to others in the graphics community. It was not an easy effort, and sometimes we nagged, wheedled, and otherwise made nuisances of ourselves, but people came through for us in the most remarkable way.

To all those who contributed specifications, images, and their expertise to this effort, thank you. We hope it pays off by increasing the general awareness of people in the market and by substantially reducing the number of support calls to those who maintain the file formats. We have tried to list all those who helped us, but there were so many over such a long period of time that we are bound to leave a few names off the list. Don't be shy about reminding us, and we'll include your names next time. To all those listed and unlisted, please accept our thanks.

Individuals who helped us obtain and understand specifications include the following: Keith Alexander, Chris Allis, Jim Anderson, Tony Apodaca, Jim Atkinson, Ron Baalke, David Baggett, Dan Baker, Cindy Batz, Gavin Bell, Steve Belsky, Celia Booher, Kim Bovic, Neil Bowers, John Bridges, Richard Brownback, Rikk Carey, Steve Carlsen, Timothy Casey, Wesley Chalfant, Buckley Collum, Freda Cooke, Catherine Copetas, Antonio Costa, Stephen Coy, John Cristy, William Darnall, Ray Davis, Tom Davis, Bob Deen, Michael Dillon, Shannon Donovan, John Edwards, Jerry Evans, Lee Fisher, Jim Fister, Chad Fogg, Michael Folk, Roger Fujii, Jean-loup Gailly, Gary Goelhoeft, Bob Gonsales, Debby Gordon, Hank Gracin, Joy Gregory, Scott Gross, Hadmut, Paul Haeberli, Eric Haines, Eric Hamilton, Kory Hamzeh, William Hanlon, Fred Hansen, Paul Harker, Chris Hecker, Bill Hibbard, Michael Hoffman, Steve Hollasch, Terry Ilardi, Neale Johnston, Mike Kaltschnee, Lou Katz, Jim Kent, Pam Kerwin, Craig Kolb, Steve Koren, Don Lancaster, Tom Lane, Ian Lepore, Greg Leslie, Glenn Lewis, Paul Mace, Britt Mackenzie, Mike Martin, Pat McGee, Brian Moran, Mike Muuse, JoAnn Nielson, Gail Ostrow, Joan Patterson, Jeff Parker, Brian Paul, Brad Pillow, Andrew Plotkin, Jef Poskanzer, John Rasure, Dave Ratcliffe, Jim Rose, Randi Rost, Carroll Rotkel, Stacie Saccomanno, Jim Saghir, Barry Schlesinger, Louis Shay, Bill Shotts, Rik Segal, Mark Skiba, Scott St. Clair, John Stackpole, Marc Stengel, Ann Sydeman, Mark Sylvester, Spencer Thomas, Mark VandeWettering, Greg Ward, Archie Warnock, David Wecker, Joel Welling, Drew Wells, Jocelyn Willett, James Winget, and Shiming Xu.

Thanks to these organizations that shared their specifications and helped us in other ways: 3D/Eye, Adobe Systems, Aldus, Andrew Consortium, Apple Computer, Autodesk, Avatar, Carnegie Mellon University, C-Cube Microsystems, Commodore Business Machines, CompuServe, Computer Associates, Digital Equipment Corporation, Digital Research, DISCUS, DuPont, IBM, Inset Systems, Intel Corporation, ISEP.INESC, Jet Propulsion Laboratory, Khoral Research, Kofax Image Products, Kubota Pacific Computer, Lawrence Berkeley Laboratory, Massachusetts Institute of Technology, Media Cybernetics, Metron Computerware, Microsoft, National Aeronautics and Space Administration, National Center for Supercomputer Applications, National Oceanic and Atmospheric Administration, Novell, Paul Mace Software, Pittsburgh Supercomputing Center, Pixar, Princeton University Department of Computer Science, RIX SoftWorks, Silicon Graphics, Sun Microsystems, Time Arts, Truevision, University of Illinois, University of Utah Department of Computer Science, University of Wisconsin at Madison, U.S. Army Ballistic Research Lab, U.S. Army Research Laboratory, Wavefront Technologies, X Consortium, and ZSoft.

Very special thanks to those who read the manuscript of this book and made thoughtful and helpful comments on the content and organization: Dr. Peter Bono, Jim Frost, Tom Gaskins, Lofton Henderson, Dr. Tom Lane, Tim O'Reilly, Sam Leffler, Dr. Anne Mumford, and Archie Warnock. Double thanks to Lofton Henderson, who prepared CGM images for us under a tight deadline, and to Tom Lane, who reviewed the CD-ROM as well. We could not have done this without you!

Many thanks to Shari L.S. Worthington, who let us include in this book many of the Internet resources she compiled for her article, "Imaging on the Internet: Scientific/Industrial Resources," in Advanced Imaging, February 1994.

We are very grateful to those who contributed software that will allow you to convert, view, manipulate, and otherwise make sense of the many file formats described in this book. We could not include all of these packages on the CD-ROM (for this version, at least), but we appreciate very much your generosity. Thanks to these individuals: Dan Baker, Robert Becker, Alexey Bobkov, John Bradley, Mike Castle, John Cristy, Orlando Dare, D.J. Delorie, Guiseppe Desoli, Chris Drouin, Stefan Eckart, Mark Edmead, Chad Fogg, Oliver Fromme, Mike Fitzpatrick, Jim Frost, Aaron Giles, Graeme Gill, Maynard Handley, Jih-Shin Ho, Robert Holland, David Holliday, Allen Kempe, Andrew Key, Michail Kutzetsov, Tom Lane, Sam Leffler, Thorsten Lemke, Leonardo Loureiro, Eric Mandel, Michael Mauldin, Frank McKenney, Kevin Mitchell, Bob Montgomery, David Ottoson, Jef Poskanzer, Igor Plotnikov, Eric Praetzel, Wayne Rasband, Mohammed Ali Rezaei, Rich Siegel, Davide Rossi, Y Shan, Doug Tody, Robert Voit, Archie Warnock, Ken Yee, Norman Yee, and Paul Yoshimune. And thanks to these organizations: Alchemy Mindworks, Bare Bones Software, Carnegie Mellon University, CenterLine Software, Express Compression Labs, DareWare, Goddard Space Center, Handmade Software, Honeywell Technology Center, Independent JPEG Group, Labtam Australia Pty Ltd, MTE Industries, National Institutes of Health, National Optical Astronomy Observatories, Peepworks, PixelVision Software, Phase II Electronics, Stoik Ltd, TBH-SoftWorx, and the University of Waterloo.

Special thanks to Rich Siegel and Leonard Rosenthal, who offered the use of their BBEdit and StuffIt products, as well as their knowledge of the Macintosh.

Many, many thanks to all the good people at O'Reilly & Associates who made this book happen: to our editor, Debby Russell, who guided this effort from beginning to end; to Gigi Estabrook, who tirelessly collected permissions and documents from the many vendors; to Ellen Siever, who as production manager got an enormous book out under incredible time pressure; to Nicole Gipson, Jessica Hekman, Mary Anne Weeks Mayo, Kismet McDonough, Clairemarie Fisher O'Leary, and Stephen Spainhour, who worked on the production team; to Chris Tong, who produced the index; and to Chris Reilley, who created the figures. Special thanks to Len Muellner and Norm Walsh, who spent many long hours developing tools, filters, and magical potions that tamed the SGML beast.


We are also very grateful to those who produced the CD-ROM that accompanies this book: to Debby Russell, who headed up the effort; to Terry Allen, who converted and otherwise beat the files into submission; to Norm Walsh and Jeff Robbins, who lent their expertise in PC and Macintosh environments; and to Dale Dougherty, Lar Kaufman, Linda Mui, Tim O'Reilly, Eric Pearce, and Ron Petrusha, who all contributed their knowledge of CD-ROM technology, their experiences with other projects, and their opinions about how we ought to proceed. Thanks to Jeff Moskow and Ready-to-Run Software, who did the final CD-ROM development for us.

A special thank you to P.J. Mead Books & Coffee of Orange, California, where James D. Murray sought caffeine and solace during the long days and nights of writing and revising this book.

Finally:

This book is dedicated to my son, James Alexander Ozbirn Murray, and his mother, Katherine.

James D. Murray

To all sentient beings working with graphics file formats.

William vanRyper


Part One: Overview

Chapter 1: Introduction

Why Graphics File Formats?

A graphics file format is the format in which graphics data—data describing a graphics image—is stored in a file. Graphics file formats have come about from the need to store, organize, and retrieve graphics data in an efficient and logical way. Sounds like a pretty straightforward task, right? But there's a lot under the covers, and that's what we're going to talk about.

File formats can be complex. Of course, they never seem complex until you're actually trying to implement one in software. They're also important, in ways that often aren't obvious. You'll find, for instance, that the way a block of data is stored is usually the single most important factor governing the speed with which it can be read, the space it takes up on disk, and the ease with which it can be accessed by an application. A program simply must save its data in a reasonable format; otherwise, it runs the risk of being considered useless.

Practically every major application creates and stores some form of graphics data. Even the simplest character-mode text editors allow the creation of files containing line drawings made from ASCII characters or terminal escape sequences. GUI-based applications, which have proliferated in recent years, now need to support hybrid formats to allow the incorporation of bitmap data in text documents. Database programs with image extensions also let you store text and bitmap data together in a single file. In addition, graphics files are an important transport mechanism that allows the interchange of visual data between software applications and computer systems.

There is currently a great deal of work being done on object-based file systems, where a "data file" may appear as a cluster of otherwise unrelated elements and may be just as likely to incorporate graphics data as not. Clearly, traditional data classification schemes are in need of revision. Nevertheless, there will remain an enormous amount of graphics data accessible only by virtue of our ability to decode and manipulate the files we find around us today.

The Basics

Before we explore the details of any particular file formats, we first need to establish some basic background and terminology. Because we're assuming you have a general working knowledge of computers and know some programming, we'll start with some definitions. You'll find that we've simplified and condensed some of the terminology found in standard computer graphics references. The changes, however, always reflect modern usage. (You'll find a discussion of our rationale in the Preface.)

In what follows, we will be speaking of the output of a computer graphics process, or the production of a graphic work by a program. We don't mean to seem anthropomorphic here. The author of the work is usually human, of course, but his or her contribution is input from the point of view of this book. We're mainly concerned with the portion of the output that comes from a program and winds up in a file. Because the program is the last thing that "touches" the data before it winds up on disk or tape, we say that a graphic work is produced by a program, rather than by a human being. (In this case, we are referring to the form in which the data is stored, and not its meaning or content.)

Graphics and Computer Graphics

Traditionally, graphics refers to the production of a visual representation of a real or imaginary object created by methods known to graphic artists, such as writing, painting, imprinting, and etching. The final result of the traditional graphics production process eventually appears on a 2D surface, such as paper or canvas. Computer graphics, however, has expanded the meaning of graphics to include any data intended for display on an output device, such as a screen, printer, plotter, or film recorder.

Notice what's happened here. Graphics used to refer to the actual output, something you could see. Now it means something only intended for display, or something meant to be turned into output. This distinction may seem silly to experienced users, but we've watched artists new to computers struggle with this. Where is the graphic output from a paint program? Does it appear as you compose something on the screen? Where is the representation when you write your work to a file? Does it appear for the first time when another program displays it on a screen or paper?

In the practice of computer graphics, creation of a work is often separate from its representation. One way to put it is that a computer graphics process produces virtual output in memory, or persistent output in a file on permanent media, such as a disk or tape. In other words, even though a program has written a file full of something, output doesn't yet exist from a traditional point of view because nothing has been displayed anywhere. So we say that graphics data is the virtual output of a program, from which a representation of the work can be constructed, or can be reconstructed from the persistent graphics data saved to a file, possibly by the same program.

Rendering and Images

In the interest of clarity, most people make a distinction between creation and rendering (sometimes also called realization). Traditionally, an image is a visual representation of a real-world object, captured by an artist through the use of some sort of mechanical, electronic, or photographic process. In computer graphics, the meaning of an image has been broadened somewhat to refer to an object that appears on an output device. Graphics data is rendered when a program draws an image on an output device.

You will also occasionally hear people speak of the computer graphics production pipeline. This is the series of steps involved in defining and creating graphics data and rendering an image. On one end of the production pipeline is a human being; on the other end is an image on paper, screen, or another device. Figure 1-1 illustrates this process.

Graphics Files

For the purpose of this book, graphics files are files that store any type of persistent graphics data (as opposed to text, spreadsheet, or numerical data, for example), and that are intended for eventual rendering and display. The various ways in which these files are structured are called graphics file formats. We will examine several categories of graphics file formats later in this chapter.

People sometimes talk of rendering an image to a file, and this is a common and perfectly valid operation. For our purposes, when an image is rendered to a file, the contents of that file become persistent graphics data. Why? Simply because the data in the file now needs to be re-rendered as virtual graphics data before you can see what it looks like. Although the image is once again turned into graphics data in the process of rendering it to the file, it is now once more merely data. In fact, the data can now be of a different type. This is what happens in file conversion operations, for example. An image stored in a file of format type 1 is rendered (by a conversion program) to a second file of format type 2.



Figure 1-1: The graphics production pipeline

Which Graphics Files Are Included...

Although we've tried in this book to stick to formats that contain graphics data, we've also tried to make sure that they are used for data interchange between programs. Now, you'd think that it's always perfectly clear whether a file contains graphics data or not. Unfortunately, this isn't always the case. Spreadsheet formats, for instance, are sometimes used to store graphics data. And what about data interchange? A format is either used to transfer data from one program to another or it isn't, right? Again, it's not so simple. Some formats, like TIFF, CGM, and GIF, were designed for interprogram data interchange. But what about other formats, such as PCX, which were designed in conjunction with a particular program? There is no easy answer, but these two criteria—graphics data and data interchange—will take you a long way, and we've tried as much as possible to follow them here. The section below called "Format Summaries" contains a complete list of all of the formats described in this book.


... And Which Are Not

For the purposes of this book, we're excluding three types of files that contain graphics data but are outside the scope of what we are trying to accomplish here: output device language files, page description language files, and FAX files.

Output device language files contain hardware-dependent control codes that are designed to be interpreted by an output device and are usually used to produce hardcopy output. We exclude these because they usually have a short lifetime as temporary files and, with few exceptions, are not archived or exchanged with other machines. Another reason is practical: many of the hundreds of types of printers and plotters built over the years use vendor-specific control information, which the market has traditionally ignored. By far the most common output device languages in use are Printer Control Language (PCL) and variants, used to control the Hewlett-Packard LaserJet series of laser printers and compatibles, and Hewlett-Packard Graphics Language (HPGL), used to control plotters and other vector devices.

Page description languages (PDLs) are sophisticated systems for describing graphical output. We exclude page description languages from our discussion because the market is dominated by Adobe's PostScript and because the specification is voluminous and extremely well-documented in readily available publications. We do, however, provide an article describing the Encapsulated PostScript format; that article briefly discusses the EPS, EPSF, and EPSI formats.

FAX-format files are usually program-specific, created by an application designed to support one or more FAX modems. There are many such formats, and we do not cover them because they generally are not used for file exchange. We do, however, include a brief article on FAX formats that discusses some of the issues you'll face if you use these formats.

Graphics Data

Graphics data is traditionally divided into two classes: vector and bitmap. As we explain below, we use the term bitmap to replace the older term raster.

Vector Data

In computer graphics, vector data usually refers to a means of representing lines, polygons, or curves (or any object that can be easily drawn with lines) by numerically specifying key points. The job of a program rendering this key-point data is to regenerate the lines by somehow connecting the key points or by drawing using the key points for guidance. Always associated with vector data is attribute information (such as color and line thickness information) and a set of conventions (or rules) allowing a program to draw the desired objects. These conventions can be either implicit or explicit, and, although designed to accomplish the same goals, they generally differ from program to program. Figure 1-2 shows several examples of vector data.

Figure 1-2: Vector data (examples: a line, a rectangle, and a spline-based object, with key points marked)

By the way, you may be familiar with a definition of the word vector which is quite precise. In the sciences and mathematics, for instance, a vector is a straight line having both magnitude and direction. In computer graphics, "vector" is a sort of catch-all term. It can be almost any kind of line or line segment, and it is usually specified by sets of endpoints, except in the case of curved lines and more complicated geometric figures, which require other key points to be fully specified.

Bitmap Data

Bitmap data is formed from a set of numerical values specifying the colors of individual pixels or picture elements (pels). Pixels are dots of color arranged on a regular grid in a pattern representing the form to be displayed. We commonly say that a bitmap is an array of pixels, although a bitmap, technically, consists of an array of numerical values used to set, color, or "turn on" the corresponding pixels on an output device when the bitmap is rendered. If there is any ambiguity in the text, we will make the distinction clear by using the term pixel value to refer to a numerical value in the bitmap data corresponding to a pixel color in the image on the display device. Figure 1-3 shows an example of bitmap data.

In older usage, the term bitmap sometimes referred to an array (or "map") of single bits, each bit corresponding to a pixel, while the terms pixelmap, graymap, and pixmap were reserved for arrays of multibit pixels. We always use the term

Figure 1-3: Bitmap data (pixel values in the file, such as 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 0, 0, 0, ..., correspond to pixels on the display device)

bitmap to refer to an array of pixels, whatever the type, and specify the bit depth, or pixel depth, which is the size of the pixels in bits or some other convenient unit (such as bytes). The bit depth determines the number of colors a pixel value can represent. A 1-bit pixel can be one of two colors, a 4-bit pixel one of 16 colors, and so on. The most commonly found pixel depths today are 1, 2, 4, 8, 15, 16, 24, and 32 bits. Some of the reasons for this, and other color-related topics, are discussed in Chapter 2, Computer Graphics Basics.

Sources of Bitmap Data: Raster Devices

Historically, the term raster has been associated with cathode ray tube (CRT) technology and has referred to the pattern of rows the device makes when displaying an image on a picture tube. Raster-format images are therefore a collection of pixels organized into a series of rows, which are called scan lines. Because raster output devices, by far the most popular kind available today, display images as patterns of pixels, pixel values in a bitmap are usually arranged so as to make them easy to display on certain common raster devices. For these reasons, bitmap data is often called raster data. In this book, we use the term bitmap data.

As mentioned above, bitmap data can be produced when a program renders graphics data and writes the corresponding output image to a file instead of displaying it on an output device. This is one of the reasons bitmaps and bitmap data are often referred to as images, and bitmap data is referred to as


image data. Although there is nothing to see in the traditional sense, an image can be readily reconstructed from the file and displayed on an output device. We will occasionally refer to the block of pixel values in a bitmap file as the image or image portion.

Other sources of bitmap data are raster devices used to work with images in the traditional sense of the word, such as scanners, video cameras, and other digitizing devices. For our purposes, we consider a raster device that produces digital data to be just another source of graphics data, and we say that the graphics data is rendered when the program used to capture the data from the device writes it to a file. When speaking of graphics data captured from a real-world source, such as a scanner, people speak redundantly of a bitmap image, or an image bitmap.

What About Object Data?

People sometimes speak of a third class: object data. In the past, this referred to a method of designating complex forms, such as nested polygons, through a shorthand method of notation, relying on a program's ability to directly render these forms with a minimal set of clues. Increasingly, however, the term is used to refer to data stored along with the program code or algorithmic information needed to render it. This distinction may become useful in the future, particularly if languages that support object-oriented programming (such as Smalltalk and C++) become more popular. However, for now we choose to ignore this third primitive data type, mainly because at the time of this writing there are no standardized object file formats of any apparent commercial importance. In any case, the data portions of all objects can be decomposed into simpler forms composed of elements from one of the two primitive classes.

Figure 1-4 shows an example of object data.

Other Data

Graphics files may also include data containing structural, color, and other descriptive information. This information is included primarily as an aid to the rendering application in reconstructing and displaying an image.

From Vector to Bitmap Data...

Twenty-five years ago, computer graphics was based almost entirely on vector data. Random-scan vector displays and pen plotters were the only readily obtainable output devices. The advent of cheap, high-capacity magnetic media, in the form of tapes and disks, soon allowed the storage of large files, which in turn created a need for the first standardized graphics file formats.

Figure 1-4: Object data (examples: a point, a line segment, a triangle, and a tetrahedron, a polyhedron)

Today, most graphic storage is bitmap-based, and displays are raster-based. This is due in part to the availability of high-speed CPUs, inexpensive memory and mass storage, and high-resolution input and output hardware. Bitmap graphics is also driven by the need to manipulate images obtained from raster digitizing devices. Bitmap graphics is important in applications supporting CAD and 3D rendering, business charts, 2D and 3D modeling, computer art and animation, graphical user interfaces (GUIs), video games, electronic document image processing (EDIP), and image processing and analysis. It is interesting to note that the increased emphasis on bitmap graphics in our time corresponds to a shift toward the output end of the graphics production pipeline. By volume, the greatest amount of data being stored and exchanged consists of finished images in bitmap format.

... And Back Again

But the trend toward bitmap data may not last. Although there are certain advantages to storing graphics images as bitmap data (we cover these in the section called "Pros and Cons of Bitmap File Formats" in Chapter 3, Bitmap Files), bitmap images are usually pretty bulky. There is a definite trend toward networking in all the computer markets, and big bitmap files and low-cost networks don't mix. The cost of sending files around on the Internet, for example, can be measured not only in connect costs, but in lost time and decreased network performance. Because graphics files are surely not going to go away, we expect some sort of vector-based file format to emerge somewhere along the line as an interchange standard. Unfortunately, none of the present vector formats in common use is acceptable to a broad range of users (but this may change).


Types of Graphics File Formats

There are several different types of graphics file formats. Each type stores graphics data in a different way.

Bitmap, Vector, and Metafile Formats

Traditionally, there are three fundamental types of graphics file formats: bitmap, vector, and metafile. Bitmap formats (described in Chapter 3, Bitmap Files) are used for storing bitmap data. Vector formats (described in Chapter 4, Vector Files), naturally enough, are designed to store vector data, such as lines and geometric data. Metafile formats (described in Chapter 5, Metafiles) accommodate both bitmap and vector data in the same file. So far, essentially all of our digital graphics storage needs have been satisfied by using these three format types.

Scene, Animation, and Multimedia Formats

In addition to the traditional bitmap, vector, and metafile formats, there are three other types of formats which we include in this book, all of lesser importance: scene format, animation, and multimedia.

Scene format files (sometimes called scene description files) are designed to store a condensed representation of an image or scene, which is used by a program to reconstruct the actual image. What's the difference between a vector-format file and a scene-format file? Just that vector files contain descriptions of portions of the image, and scene files contain instructions that the rendering program uses to construct the image. In practice it's sometimes hard to decide whether a particular format is scene or vector; it's more a matter of degree than anything absolute.

Animation formats have been around for some time. The basic idea is that of

the flip-books you played with as a kid. Rapidly display one image superimposed over another to make it appear as if the objects in the image are moving. Very primitive animation formats store entire images that are displayed in sequence, usually in a loop. Slightly more advanced formats store only a single image, but multiple color maps for the image. By loading in a new color map, the colors in the image change, and the objects appear to move. Advanced animation formats store only the differences between two adjacent images (called frames) and update only the pixels that have actually changed as each frame is displayed. A display rate of 10-15 frames per second is typical for cartoon-like animations. Video animations usually require a display rate of 20 frames per second or better to produce a smoother motion.
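To make the delta idea concrete, here is a minimal sketch in C (ours alone; the PixelDelta record is hypothetical and not taken from any particular animation format):

/* Applying one delta frame to a bitmap held in memory. A delta
   record of this hypothetical sort lists only the pixels that
   changed since the previous frame. */
typedef struct
{
    long offset;             /* position of the changed pixel */
    unsigned char value;     /* its new pixel value           */
} PixelDelta;

void ApplyDeltaFrame(unsigned char *frame, const PixelDelta *delta, int count)
{
    int i;
    for (i = 0; i < count; i++)      /* update only the changed pixels */
        frame[delta[i].offset] = delta[i].value;
}

Because only the changed pixels are touched, the per-frame work (and the per-frame storage) is proportional to how much of the image actually moves.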


Multimedia formats are relatively new and are designed to allow the storage of data of different types in the same file. Multimedia formats usually allow the inclusion of graphics, audio, and video information. Microsoft's RIFF and Apple's QuickTime are well-known examples, and others are likely to emerge in the near future. (Chapter 10, Multimedia, describes various issues for multimedia formats.)

Hybrid Formats and Hypertext

There are three other classes of formats which we should also mention. While

not covered in detail in this book, these may become important in the future. These are the various hybrid text, hybrid database, and hypertext formats.

Currently, there is a good deal of research being conducted on the integration of unstructured text and bitmap data ("hybrid text") and the integration of record-based information and bitmap data ("hybrid database"). As this work bears fruit, we expect that hybrid formats capable of efficiently storing graphics data will emerge and will become steadily more important.

Hypertext formats allow the arrangement of blocks of text so as to facilitate random access through links between the blocks. Hypertext formats that have been extended to allow the inclusion of other data (including graphics) are becoming known as hypermedia. Hypertext and hypermedia formats may also gain importance in the future. We don't cover them in this book chiefly because they're generally not used for data exchange between platforms.

Elements of a Graphics File

As mentioned in the Preface, different file format specifications use different terminology. In fact, it's possible that there is not a single term with a common meaning across all of the file formats mentioned in this book. This is certainly true for terms referring to the way data is stored in a file, such as field, tag, block, and packet. In fact, a specification will sometimes provide a definition for one of these terms and then abandon it in favor of a more descriptive one, such as a chunk, a sequence, or a record.

For purposes of discussion in this book, we will consider a graphics file to be composed of a sequence of data and data structures, called file elements, or data elements. These are divided into three categories: fields, tags, and streams.

Fields

A field is a data structure in a graphics file that is a fixed size. A fixed field has not only a fixed size but a fixed position within the file. The location of a field is communicated by specifying either an absolute offset from a landmark in a file, such as the file's beginning or end, or a relative offset from some other

data. The size of a field either is stated in the format specification or can be inferred from other information.
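To make the fixed-field idea concrete, here is a minimal sketch in C (our illustration; the format, and its 2-byte width field at a fixed offset of 4 bytes, are hypothetical):

#include <stdio.h>

/* Read the image width from a hypothetical format that stores it
   as a little-endian, 2-byte fixed field at offset 4 from the
   beginning of the file. Returns -1 on error. */
long ReadWidthField(FILE *fp)
{
    unsigned char buf[2];

    if (fseek(fp, 4L, SEEK_SET) != 0)   /* seek to the fixed offset */
        return -1;
    if (fread(buf, 1, 2, fp) != 2)      /* read the 2-byte field    */
        return -1;
    /* Assemble the value explicitly so the result does not depend
       on the byte order of the machine the code runs on. */
    return (long)buf[0] | ((long)buf[1] << 8);
}

Because the field's size and position are fixed, the program can seek directly to it without reading any intervening data.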

Tags

A tag is a data structure which can vary in both size and position from file to file. The position of a tag, like that of a field, is specified by either an absolute offset from a known landmark in the file, or through a relative offset from another file element. Tags themselves may contain other tags or a collection of related fields.

Streams

Fields and tags are an aid to random access; they're designed to help a program quickly access a data item known in advance. Once a position in a file is known, a program can access the position directly without having to read intervening data. A file that organizes data as a stream, on the other hand, lacks the structure of one organized into fields and tags, and must be read sequentially. For our purposes, we will consider a stream to be made up of packets, which can vary in size, are sub-elements of the stream, and are meaningful to the program reading the file. Although the beginning and end of the stream may be known and specified, the location of packets other than the first usually is not, at least prior to the time of reading.

Combinations of Data Elements

You can imagine, then, pure fixed-field files, pure tag files, and pure stream files, made up entirely of data organized into fixed fields, tags, and streams, respectively. Only rarely, however, does a file contain data elements of a single type; in most cases it is a combination of two or more. The TIFF and TGA formats, for example, use both tags and fixed fields. GIF format files, on the other hand, use both fixed fields and streams.

Fixed-field data is usually faster and easier to read than tag and stream data. Files composed primarily of fixed-field data, however, are less flexible in situations in which data needs to be added to or deleted from an existing file. Formats containing fixed fields are seldom easily upgraded. Stream data generally requires less memory to read and buffer than field and tag data. Files composed primarily of stream data, however, cannot be accessed randomly, and thus cannot be used to find or sub-sample data quickly. These considerations are discussed further in Chapters 3, 4, and 5.


Converting Formats

You often need to convert a graphics file from one format to another—for printing, for manipulation by a particular desktop publishing program, or for some other reason. Although conversion to and from certain file formats is straightforward, conversion of other formats may be quite hair-raising. You will find conversion particularly problematic if you need to convert between basic format types—for example, bitmap to vector. Fortunately, there are some excellent products that handle most of the complexities of conversion for you.

If you are using UNIX and are lucky enough to be converting between the formats supported by the pbmplus (Portable Bitmap Utilities) package (a freely available set of programs developed by Jef Poskanzer that we've included on the CD-ROM), your job will be an easy one. Rather than converting explicitly from one graphics file format to another (for example, from PCX to Microsoft Windows Bitmap [BMP]), pbmplus converts any source format to a common format and then converts that common format to the desired destination format.
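This hub-and-spoke scheme is easy to sketch in C (our illustration of the idea, not pbmplus source code; all of the type and function names here are hypothetical):

/* Every reader decodes to one common in-memory bitmap, and every
   writer encodes from it, so N formats need N readers and N
   writers rather than N*N direct converters. */
typedef struct
{
    int width, height;
    unsigned char *pixels;   /* common intermediate: 24-bit RGB */
} CommonBitmap;

typedef int (*ReadFn)(const char *path, CommonBitmap *bmp);
typedef int (*WriteFn)(const char *path, const CommonBitmap *bmp);

int Convert(const char *src, ReadFn read_src,
            const char *dst, WriteFn write_dst)
{
    CommonBitmap bmp;
    if (read_src(src, &bmp) != 0)    /* any format -> common format */
        return -1;
    return write_dst(dst, &bmp);     /* common format -> any format */
}

The payoff of this design is that supporting a new format requires writing one reader and one writer, rather than a separate converter for every existing format.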

We also provide a number of other conversion programs on the CD-ROM, including the MS-DOS pbmplus port, Conversion Assistant for Windows, GraphicsConverter for the Macintosh, and many more publicly available programs.

Try these programs out and see which best suits your formats and applications. You may also want to consider buying Hijaak, an excellent commercial product developed by Inset Systems that converts to and from most of the common file formats.

Chapter 7, Format Conversion, contains a discussion of converting between types of graphics file formats.

Compressing Data

Throughout the articles included in Part II of this book, you'll see references to methods of data compression or data encoding. Compression is the process used to reduce the physical size of a block of information. By compressing graphics data, we're able to fit more information in a physical storage space. Because graphics images usually require a very large amount of storage space, compression is an important consideration for graphics file formats. Almost every graphics file format uses some compression method.

There are several ways to look at compression. We can talk about differences between physical and logical compression, symmetric and asymmetric compression, and lossless and lossy compression. These terms are described in


detail in Chapter 9, Data Compression. That chapter also describes the five most common methods of, or algorithms for, compression, which we mention here briefly:

• Pixel packing—Not a method of data compression per se, but an efficient way to store data in contiguous bytes of memory. This method is used by the Macintosh PICT format and by other formats that are capable of storing multiple 1-, 2-, or 4-bit pixels per byte of memory or disk space.

• Run-length encoding (RLE)—A very common compression algorithm used by such bitmap formats as BMP, TIFF, and PCX to reduce the amount of redundant graphics data. A minimal encoder is sketched below, after this list.

• Lempel-Ziv-Welch (LZW)—Used by GIF and TIFF, this algorithm is also a part of the V.42bis modem compression standard and of PostScript Level 2.

• CCITT encoding—A form of data compression used for facsimile transmission and standardized by the CCITT (International Telegraph and Telephone Consultative Committee). One particular standard is based on the keyed compression scheme introduced by David Huffman and known widely as Huffman encoding.

• Joint Photographic Experts Group (JPEG)—A toolkit of compression methods used particularly for continuous-tone image data and multimedia. The baseline JPEG implementation uses an encoding scheme based on the Discrete Cosine Transform (DCT) algorithm.

Each of the articles in Part II lists the compression algorithms used for the particular graphics file format described.
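Here is that sketch, a minimal run-length encoder in C (ours alone; BMP, TIFF, and PCX each define their own packet layouts, so treat this as the flavor of RLE rather than any format's actual scheme):

/* Emit (count, value) byte pairs; runs longer than 255 bytes are
   split. The caller must supply an output buffer at least twice
   the size of the input (the worst case for this naive scheme).
   Returns the number of bytes written. */
long RunLengthEncode(const unsigned char *in, long inlen, unsigned char *out)
{
    long i = 0, o = 0;
    while (i < inlen)
    {
        unsigned char value = in[i];
        long run = 1;
        while (i + run < inlen && in[i + run] == value && run < 255)
            run++;
        out[o++] = (unsigned char)run;   /* count byte */
        out[o++] = value;                /* value byte */
        i += run;
    }
    return o;
}

On data with long runs of identical pixel values (flat-color screen images, for instance) this pays off handsomely; on noisy data it can actually expand the file, which is why many formats fall back to storing literal runs.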

Format Summaries

Table 1-1 lists each of the graphics file formats that we describe in this book, along with an indication of what type of format it is (bitmap, vector, metafile, scene description, animation, multimedia, or other [for "other" formats, refer to the appropriate article in Part II]). In some cases, a format may be known by a number of different names. To help you find the format you need, we've included Table 1-2, which lists the names by which graphics file formats may be known, or the file extensions they may have, with cross-references to the names under which they appear in Part II of this book.

Graphics files on most platforms use a fairly consistent file extension convention. Of the three platforms with the largest installed base (MS-DOS, Macintosh, and UNIX), all use a similar name.extension file-naming convention. The other platforms that are popular for computer graphics (Amiga, Atari, and VMS) use a roughly similar naming convention. VMS, for example, uses the convention name1.name2;version, where version is an integer indicating the number of the file revision. Most of the 3-character names in Table 1-2 appear as graphics file extensions.

Table 1-1: Graphics File Formats Described in This Book

Format                              Type
Adobe Photoshop                     Bitmap
Atari ST Graphics Formats           Bitmap and Animation
AutoCAD DXF                         Vector
BDF                                 Bitmap
BRL-CAD                             Other
BUFR                                Other
CALS Raster                         Bitmap
CGM                                 Metafile
CMU Formats                         Multimedia
DKB                                 Scene Description
Dore Raster File Format             Bitmap
Dr. Halo                            Bitmap
Encapsulated PostScript             Metafile (Page Description Language)
FaceSaver                           Bitmap
FAX Formats                         Bitmap
FITS                                Other
FLI                                 Animation
GEM Raster                          Bitmap
GEM VDI                             Metafile
GIF                                 Bitmap
GRASP                               Animation
GRIB                                Other
Harvard Graphics                    Metafile
Hierarchical Data Format            Metafile
IGES                                Other
Inset PIX                           Bitmap
Intel DVI                           Multimedia
Interchange File Format             Bitmap
JPEG File Interchange Format        Bitmap
Kodak Photo CD                      Bitmap
Kodak YCC                           Bitmap
Lotus DIF                           Vector
Lotus PIC                           Vector
Lumena Paint                        Bitmap
Macintosh Paint                     Bitmap
Macintosh PICT                      Metafile
Microsoft Paint                     Bitmap
Microsoft RIFF                      Multimedia
Microsoft RTF                       Metafile
Microsoft SYLK                      Vector
Microsoft Windows Bitmap            Bitmap
Microsoft Windows Metafile          Metafile
MIFF                                Bitmap
MPEG                                Other
MTV                                 Scene Description
NAPLPS                              Metafile
NFF                                 Scene Description
OFF                                 Scene Description
OS/2 Bitmap                         Bitmap
P3D                                 Scene Description
PBM, PGM, PNM, and PPM              Bitmap
PCX                                 Bitmap
PDS                                 Other
Pictor PC Paint                     Bitmap
Pixar RIB                           Scene Description
Plot-10                             Vector
POV                                 Vector
Presentation Manager Metafile       Metafile
PRT                                 Scene Description
QRT                                 Scene Description
QuickTime                           Other
Radiance                            Scene Description
Rayshade                            Scene Description
RIX                                 Bitmap
RTrace                              Scene Description
SGI Image File Format               Bitmap
SGI Inventor                        Scene Description
SGI YAODL                           Scene Description
SGO                                 Vector
Sun Icon                            Bitmap
Sun Raster                          Bitmap
TDDD                                Vector and Animation
TGA                                 Bitmap
TIFF                                Bitmap
TTDDD                               Vector and Animation
uRay                                Scene Description
Utah RLE                            Bitmap
VICAR2                              Bitmap
VIFF                                Bitmap
VIS-5D                              Vector
Vivid and Bob                       Scene Description
Wavefront OBJ                       Vector
Wavefront RLA                       Bitmap
WordPerfect Graphics Metafile       Metafile
XBM                                 Bitmap
XPM                                 Bitmap
XWD                                 Bitmap

Table 1-2: Format Names, Cross-Referenced to Articles in This Book

Name or Extension                                Format Name in Book
3D Interchange File Format                       SGI Inventor
ADI                                              AutoCAD DXF
Amiga Paint                                      Interchange File Format
Andrew Formats                                   CMU Formats
AM                                               Atari ST Graphics Formats
ANM                                              Atari ST Graphics Formats
AutoCAD Drawing Exchange Format                  AutoCAD DXF
AVI                                              Microsoft RIFF
AVS                                              Intel DVI
AVSS                                             Intel DVI
Ballistic Research Laboratory CAD Package        BRL-CAD
Binary Universal Form for the Representation
  of Meteorological Data                         BUFR
Bitmap Distribution Format                       BDF
BMP                                              Microsoft Windows Bitmap
BMP                                              OS/2 Bitmap
BND                                              Microsoft RIFF
Bob                                              Vivid and Bob
BPX                                              Lumena Paint
BSAVE                                            Pictor PC Paint
BW                                               SGI Image File Format
C16                                              Intel DVI
CAL                                              CALS Raster
CALS                                             CALS Raster
CE1                                              Atari ST Graphics Formats
CE2                                              Atari ST Graphics Formats
CE3                                              Atari ST Graphics Formats
CHT                                              Harvard Graphics
CLP                                              GRASP
CLP                                              Pictor PC Paint
CMI                                              Intel DVI
CMQ                                              Intel DVI
CMU Bitmap                                       CMU Formats
CMY                                              Intel DVI
ColoRIX VGA Paint                                RIX
Computer Aided Acquisition and Logistics
  Support                                        CALS Raster
Computer Graphics Metafile                       CGM
CUT                                              Dr. Halo
Data Interchange Format                          Lotus DIF
DBW_uRay                                         uRay
DCX                                              PCX
DEGAS                                            Atari ST Graphics Formats
DIB                                              Microsoft Windows Bitmap
DIB                                              OS/2 Bitmap
DIF                                              Lotus DIF
Digital Video Interface                          Intel DVI
Dore                                             Dore Raster File Format
DVI                                              Intel DVI
DXB                                              AutoCAD DXF
DXF                                              AutoCAD DXF
EPS                                              Encapsulated PostScript
EPSF                                             Encapsulated PostScript
EPSI                                             Encapsulated PostScript
Facsimile File Formats                           FAX Formats
FAX                                              FAX Formats
FLC                                              FLI
FLI                                              FLI
Flexible Image Transport System                  FITS
FLI Animation                                    FLI
Flic                                             FLI
FLM                                              Atari ST Graphics Formats
FM 92-VIII Ext. GRIB                             GRIB
FM 94-IX Ext. BUFR                               BUFR
FNT                                              GRASP
FTI                                              FITS
GDI                                              GEM VDI
GEM Vector                                       GEM VDI
GEM                                              GEM Raster
Graphics Interchange Format                      GIF
Grid File Format                                 VIS-5D
Gridded Binary                                   GRIB
GL                                               GRASP
Graphical System for Presentation                GRASP
Haeberli                                         SGI Image File Format
HDF                                              Hierarchical Data Format
I8                                               Intel DVI
I16                                              Intel DVI
IC1                                              Atari ST Graphics Formats
IC2                                              Atari ST Graphics Formats
IC3                                              Atari ST Graphics Formats
ICB                                              TGA
ICC                                              Kodak YCC
ICO                                              Sun Icon
IFF                                              Interchange File Format
ILBM                                             Interchange File Format
ILM                                              Interchange File Format
IMA                                              Intel DVI
IMB                                              Intel DVI
IMC                                              Intel DVI
IMG                                              GEM Raster
IMG                                              Intel DVI
IMI                                              Intel DVI
IMM                                              Intel DVI
IMQ                                              Intel DVI
IMR                                              Intel DVI
IMY                                              Intel DVI
Initial Graphics Exchange Specification          IGES
Intel Real-Time Video                            Intel DVI
Inventor                                         SGI Inventor
IRIS                                             SGI Inventor
JFI                                              JPEG File Interchange Format
JFIF                                             JPEG File Interchange Format
JPEG                                             JPEG File Interchange Format
JPG                                              JPEG File Interchange Format
Khoros Visualization/Image File Format           VIFF
Lotus Picture                                    Lotus PIC
MAC                                              Macintosh Paint
Machine Independent File Format                  MIFF
Macintosh Picture                                Macintosh PICT
MacPaint                                         Macintosh Paint
McIDAS                                           VIS-5D
MET                                              Presentation Manager Metafile
Microray                                         uRay
MPEG-1                                           MPEG
MPEG-2                                           MPEG
MPG                                              MPEG
MSP                                              Microsoft Paint
NEO                                              Atari ST Graphics Formats
Neutral File Format                              NFF
North American Presentation Layer
  Protocol Syntax                                NAPLPS
OBJ                                              Wavefront OBJ
Object File Format                               OFF
OVR                                              Pictor PC Paint
P10                                              Plot-10
PAC                                              Atari ST Graphics Formats
PAL                                              Dr. Halo
Parallel Ray Trace                               PRT
PBM                                              PBM, PGM, PNM, and PPM
PC Paint                                         Pictor PC Paint
PC Paintbrush File Format                        PCX
PC1                                              Atari ST Graphics Formats
PC2                                              Atari ST Graphics Formats
PC3                                              Atari ST Graphics Formats
PCC                                              PCX
PCT                                              Macintosh PICT
Persistence of Vision                            POV
pbmplus                                          PBM, PGM, PNM, and PPM
PGM                                              PBM, PGM, PNM, and PPM
Photo CD                                         Kodak Photo CD
Photoshop 2.5                                    Adobe Photoshop
PI1                                              Atari ST Graphics Formats
PI2                                              Atari ST Graphics Formats
PI3                                              Atari ST Graphics Formats
PIC                                              GRASP
PIC                                              Lotus PIC
PIC                                              Pictor PC Paint
PICT                                             Macintosh PICT
Pittsburgh Supercomputer Center 3D Metafile      P3D
PIX                                              Inset PIX
PIX                                              Lumena Paint
Planetary Data System Format                     PDS
Planetary File Format                            VICAR2
PMBMP                                            OS/2 Bitmap
PMDIB                                            OS/2 Bitmap
PNM                                              PBM, PGM, PNM, and PPM
PNTG                                             Macintosh Paint
Portable Bitmap Utilities                        PBM, PGM, PNM, and PPM
POV-Ray                                          POV
Powerflip Format                                 SGI YAODL
PPM                                              PBM, PGM, PNM, and PPM
Presentation Manager                             OS/2 Bitmap
Quick Ray Trace                                  QRT
QuickDraw Picture Format                         Macintosh PICT
QuickTime Movie Resource Format                  QuickTime
QTM                                              QuickTime
RAS                                              CALS Raster
RAS                                              Sun Raster
RDI                                              Microsoft RIFF
RenderMan Interface Bytestream                   Pixar RIB
Resource Interchange File Format                 Microsoft RIFF
RFF                                              Dore Raster File Format
RGB                                              Atari ST Graphics Formats
RGB                                              SGI Image File Format
RGBA                                             SGI Image File Format
RIB                                              Pixar RIB
Rich Text Format                                 Microsoft RTF
RIFF                                             Microsoft RIFF
RIFX                                             Microsoft RIFF
RIX Image File                                   RIX
RLA                                              Wavefront RLA
RLB                                              Wavefront RLA
RLE                                              SGI Image File Format
RLE                                              Utah RLE
RMI                                              Microsoft RIFF
RTF                                              Microsoft RTF
RTV                                              Intel DVI
Run-length Encoded Version A                     Wavefront RLA
Run-length Encoded Version B                     Wavefront RLA
SCN                                              RTrace
SEQ                                              Atari ST Graphics Formats
SET                                              GRASP
SFF                                              RTrace
SGI                                              SGI Image File Format
Showcase                                         SGO
Silicon Graphics Object                          SGO
SLD                                              AutoCAD DXF
SLK                                              Microsoft SYLK
SYLK                                             Microsoft SYLK
Symbolic Link Format                             Microsoft SYLK
T3D                                              TDDD
Targa Image File                                 TGA
Tag Image File Format                            TIFF
TekPlot-10                                       Plot-10
Textual 3D Data Description                      TTDDD
TN1                                              Atari ST Graphics Formats
TN2                                              Atari ST Graphics Formats
TN3                                              Atari ST Graphics Formats
TNY                                              Atari ST Graphics Formats
TPIC                                             TGA
Turbo Silver 3D Data Description                 TDDD
TXT                                              GRASP
UC1                                              Atari ST Graphics Formats
UC2                                              Atari ST Graphics Formats
UC3                                              Atari ST Graphics Formats
VDI                                              GEM VDI
VDA                                              TGA
Visualization-5D                                 VIS-5D
Vivid                                            Vivid and Bob
VMG                                              RIX
VST                                              TGA
WAV                                              Microsoft RIFF
Wavefront Object                                 Wavefront OBJ
Windows BMP                                      Microsoft Windows Bitmap
Windows DIB                                      Microsoft Windows Bitmap
Windows Metafile                                 Microsoft Windows Metafile
WMF                                              Microsoft Windows Metafile
WPG                                              WordPerfect Graphics Metafile
X BitMap                                         XBM
X PixMap                                         XPM
X Window Dump                                    XWD
YCC                                              Kodak YCC
Yet Another Object Description Language          SGI YAODL

Chapter 2

Computer Graphics Basics

To understand graphics file formats, you need some background in computer graphics. Of course, computer graphics is an enormous subject, and we can't hope to do it justice here. In general, we assume in this book that you are not a novice in this area. However, for those who do not have an extensive background in computer graphics, this chapter should be helpful in explaining the terminology you'll need to understand the discussions of the formats found later in this book.

If you're interested in exploring any of these topics further, far and away the best overall text is Computer Graphics: Principles and Practice by James D. Foley, Andries van Dam, S.K. Feiner, and J.F. Hughes. This is the second edition of the book formerly known throughout the industry as "Foley and van Dam." You'll find additional references in the "For Further Information" section at

the end of this chapter.

Pixels and Coordinates

Locations in computer graphics are stored as mathematical coordinates, but the display surface of an output device is an actual physical object. Thus, it's important to keep in mind the distinction between physical pixels and logical pixels.

Physical Pixels

Physical pixels are the actual dots displayed on an output device. Each one takes up a small amount of space on the surface of the device. Physical pixels are manipulated directly by the display hardware and form the smallest independently programmable physical elements on the display surface. That's the ideal, anyway. In practice, however, the display hardware may juxtapose or overlay several smaller dots to form an individual pixel. This is true in the case of

most analog color CRT devices, which use several differently colored dots to display what the eye, at a normal viewing distance, perceives as a single, uniformly colored pixel.

Because physical pixels cover a fixed area on the display surface, there are practical limits to how close together two adjacent pixels can be. Asking a piece of display hardware to provide too high a resolution—too many pixels on a given display surface—will create blurring and other deterioration of image quality if adjacent pixels overlap or collide.

Logical Pixels

In contrast to physical pixels, logical pixels are like mathematical points: they specify a location, but are assumed to occupy no area. Thus, the mapping between logical pixel values in the bitmap data and physical pixels on the screen must take into account the actual size and arrangement of the physical pixels. A dense and brightly colored bitmap, for example, may lose its vibrancy when displayed on too large a monitor, because the pixels must be spread out to cover the surface.

Figure 2-1 illustrates the difference between physical and logical pixels.

Figure 2-1: Physical and logical pixels (physical pixels on devices have extent; logical pixels have a location but no extent)

Pixel Depth and Displays

The number of bits in a value used to represent a pixel governs the number of colors the pixel can exhibit. The more bits per pixel, the greater the number of possible colors. More bits per pixel also means that more space is needed to store the pixel values representing a bitmap covering a given area on the surface of a display device. As technology has evolved, display devices handling more colors have become available at lower cost, which has fueled an increased demand for storage space. Most modern output devices can display between two and more than 16 million colors simultaneously, corresponding to one and 24 bits of storage per pixel, respectively.

Bilevel, or 1-bit, displays use one bit of pixel-value information to represent each pixel, which then can have two color states. The most common 1-bit displays are monochrome monitors and black-and-white printers, of course. Things that reproduce well in black and white—line drawings, text, and some types of clip art—are usually stored as 1-bit data.
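In C, the relationship between bit depth and the number of representable colors is just a power of two (a one-line illustration of ours):

/* Number of colors a pixel of the given bit depth can represent.
   Valid for depths smaller than the width of an unsigned long. */
unsigned long ColorsForDepth(unsigned int depth)
{
    return 1UL << depth;   /* 1 -> 2, 4 -> 16, 8 -> 256, 24 -> 16,777,216 */
}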

A Little Bit About Truecolor

People sometimes say that the human eye can discriminate between 2^24 (16,777,216) colors, although many fewer colors can be perceived simultaneously. Naturally enough, there is much disagreement about this figure, and the actual number certainly varies from person to person and under different conditions of illumination, health, genetics, and attention. In any case, we each can discriminate between a large number of colors, certainly more than a few thousand. A device capable of matching or exceeding the color-resolving power of the human eye under most conditions is said to display truecolor. In practice, this means 24 bits per pixel, but for historical reasons, output devices capable of displaying 2^15 (32,768) or 2^16 (65,536) colors have also incorrectly been called truecolor. More recently, the term hicolor has come to be used for displays capable of handling up to 2^15 or 2^16 colors. Fullcolor is a term used primarily in marketing; its meaning is much less clear. (If you find out what it means, exactly, please let us know!)


Issues When Displaying Colors

It is frequently the case that the number or actual set of colors defined by the pixel values stored in a file differs from those that can be displayed on the surface of an output device. It is then up to the rendering application to translate between the colors defined in the file and those expected by the output device.

There is generally no problem when the number of colors defined by the pixel values found in the file (source) is much less than the number that can be displayed on the output device (destination). The rendering application in this case is able to choose among the destination colors to provide a match for each source color. But a problem occurs when the number of colors defined by the pixel values exceeds the number that can be displayed on an output device. Consider the following examples.

In the first case, 4-bit-per-pixel data (corresponding to 16 colors) is being displayed on a device capable of supporting 24-bit data (corresponding to more than 16 million colors). The output device is capable of displaying substantially more colors than are needed to reproduce the image defined in the file. Thus, colors in the bitmap data will likely be represented by a close match on the output device, provided that the colors in the source bitmap and on the destination device are evenly distributed among all possible colors. Figure 2-2 illustrates this case.

In the next case, the output device can display fewer colors than are defined by the source data. A common example is the display of 8-bit-per-pixel data (corresponding to 256 colors) on a device capable of displaying 4-bit data (corresponding to 16 colors). In this case, there may be colors defined in the bitmap which cannot be represented on the 4-bit display. Thus, the rendering application must work to match the colors in the source and destination. At some point in the color conversion process, the number of colors defined in the source data must be reduced to match the number available on the destination device. This reduction step is called quantization, illustrated in Figure 2-3.

The quantization process results in a loss of data. For source images containing many colors, quantization can cause unacceptable changes in appearance, which are said to be the result of quantization artifacts. Examples of common quantization artifacts are banding, Moiré patterns, and the introduction of new colors into the destination image that were not present in the source data. Quantization artifacts have their uses, however; one type of quantization process, called convolution, can be used to remove spurious noise from an image and to actually improve its appearance. On the other hand, it can also change the color balance of a destination image from that defined by the source data.
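At its simplest, the reduction step maps each source color to the closest available destination color. Here is a minimal sketch of that step in C (ours alone; production quantizers are far more sophisticated):

/* Find the index of the destination color nearest to c, using the
   smallest squared distance in RGB space. */
typedef struct { unsigned char r, g, b; } RGB;

int NearestColor(RGB c, const RGB *dest, int ndest)
{
    long best = 0x7fffffffL;
    int i, best_i = 0;

    for (i = 0; i < ndest; i++)
    {
        long dr = (long)c.r - dest[i].r;
        long dg = (long)c.g - dest[i].g;
        long db = (long)c.b - dest[i].b;
        long d = dr * dr + dg * dg + db * db;
        if (d < best) { best = d; best_i = i; }
    }
    return best_i;
}

Applying this mapping naively to every pixel is exactly what produces banding; better quantizers distribute the per-pixel error to neighboring pixels or choose the destination colors to suit the image.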


Figure 2-2: Displaying data with few colors on a device with many colors

Figure 2-3: Displaying data with many colors on a device with few colors

Pixel Data and Palettes

Obviously, pixel values stored in a file correspond to colors. But how are the colors actually specified?


One-bit pixel data, capable of having the values 0 and 1, can only fully represent images containing two colors. Thus, there are only two ways of matching up pixel values in the file with colors on a screen. In most situations, you'll find that a convention already exists that establishes which value corresponds to which color, although a separate mechanism may be available in the file to change this. This definition can also be changed on the fly by the rendering application. Pixel data consisting of more than one bit per pixel usually represents a set of index values into a color palette, although in some cases there is a direct numerical representation of the color in a color definition scheme.

Specifying Color With Palettes

A palette, which is sometimes referred to as a color map, index map, color table, or look-up table (LUT), is a 1-dimensional array of color values. As the synonym look-up table suggests, it is the cornerstone of the method whereby colors can be referred to indirectly by specifying their positions in an array. Using this method, data in a file can be stored as a series of index values, usually small integer values, which can drastically reduce the size of the pixel data when only a small number of colors need to be represented. Bitmaps using this method of color representation are said to use indirect, or pseudo-color, storage.

Four-bit pixel data, for instance, can be used to represent images consisting of 16 colors. These 16 colors are usually defined in a palette that is almost always included somewhere in the file. Each of the pixel values making up the pixel data is an index into this palette and consists of one of the values 0 to 15. The job of a rendering application is to read and examine a pixel value from the file, use it as an index into the palette, and retrieve the value of the color from the palette, which it then uses to specify a colored pixel on an output device. Figure 2-4 illustrates how a palette may be used to specify a color.

The palette is an array of colors defined as accurately as possible. In practice, each palette element is usually 24 bits, or three bytes, long, although to accommodate future expansion and machine dependencies, each element is sometimes stored as 32 bits, or four bytes. Curiously, color models, many of which existed prior to the computer era, are often built around the equal partition of the possible colors into three variables, thus neatly fitting into three bytes of data storage. (We include a discussion of color models in the section called "Color" later in this chapter.)
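The lookup itself is a one-line array index. Here is a minimal sketch in C of a rendering application's inner loop (our illustration; the type and function names are hypothetical):

/* Translate each stored pixel value in one scan line into an
   actual color by using it as an index into the palette. */
typedef struct { unsigned char r, g, b; } PaletteEntry;

void RenderIndexedRow(const unsigned char *row, int width,
                      const PaletteEntry *palette, PaletteEntry *out)
{
    int x;
    for (x = 0; x < width; x++)
        out[x] = palette[row[x]];   /* pixel value -> palette -> color */
}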


Figure 2-4: Using a palette to specify a color (the program translates a pixel value from the file, such as 8, into the corresponding palette entry (0, 255, 0), the set of numerical values used by the device to produce a green pixel)

What this means is that palettes are three or four times as large as the maximum number of colors defined. For instance, a 4-bit color palette is:

3 bytes per color x 16 colors = 48 bytes in length

or:

4 bytes per color x 16 colors = 64 bytes in length

depending on whether three or four bytes are used to store each color definition.

In a similar way, 8-bit pixel data may be used to represent images consisting of 256 colors. Each of the pixel values, in the range 0 to 255, is an index into a 256-color palette. In this case, the palette is:

3 bytes per color x 256 colors = 768 bytes in length

or:

4 bytes per color x 256 colors = 1024 bytes in length

Issues When Using Palettes

Let's say that the value (255,0,0) represents the color red in the color model used by our image format. We'll let our example palette define 16 colors, arranged as an array of 16 elements:

( 0, 0, 0)
(255,255,255)
(255, 0, 0)
( 0,255, 0)
( 0, 0,255)
(255,255, 0)
( 0,255,255)
(255, 0,255)
(128, 0, 0)
( 0,128, 0)
( 0, 0,128)
(128,128, 0)
( 0,128,128)
(128, 0,128)
(128,128,128)
(255,128,128)

Because (255,0,0) happens to be the third element in the palette, we can store the value 2 (if the array is zero-based, as in the C language), with the implied convention that the values are to be interpreted as index values into the array. Thus, every time a specification for the color red occurs in the pixel data, we can store 2 instead, and we can do likewise for other colors found in the image.

Color information can take up a substantial amount of space. In some cases, the use of palettes makes color storage more efficient; in other cases, storing colors directly, rather than through palettes, is more efficient. In the larger, more complex image formats, indirect storage through the use of palettes saves space by reducing the amount of data stored in the file. If you are, for example, using a format that stores three bytes of color information per pixel (a commonly used method) and can use up to 256 colors, the pixel values making up the bitmap of a 320x200 pixel image would take up 192,000 (320 * 200 * 3) bytes of storage. If the same image instead used a palette with 256 3-byte elements, each pixel in the bitmap would only need to be one byte in size, just enough to hold a color map index value in the 0-to-255 range. This eliminates two of every three bytes in each pixel, reducing the needed storage to 64,000 (320 * 200 * 1) bytes. Actually, we have to add in the length of the palette itself, which is 768 (256 * 3) bytes in length, so the relevant data in the file would be 64,768 bytes long, for a savings of nearly a factor of three over the former storage method. (Note, however, that if the amount of bitmap data in the file is very small, the storage overhead created by the inclusion of the palette may negate any savings gained by changing the storage method.)

Indirect color storage through the use of palettes has several advantages beyond the obvious. First, if you need to know how many actual colors are stored in an image (i.e., a 256-color image does not always contain 256 colors), it is a simple task to read through the palette and determine how many of its


elements are being used or are duplicates of others. Unused elements in most formats are usually set to zero. Palettes are also handy when you want to change the colors in an image. If you want to change all of the red pixels in the rendered image to green, for instance, all you need do is change the appropriate value defining the color red in the palette to the appropriate value for green.

As we've mentioned, the use of palettes is not appropriate in every case. A palette itself uses a substantial amount of space. For example, a palette defining 32,768 colors would take up a minimum of 98,304 bytes of storage space. For this reason, images containing more than 256 colors are generally stored in literal, absolute, or truecolor format (rather than in palettes), where each pixel value corresponds directly to a single color.

Palettes were devised to address the problem of the limited number of colors available on some display devices. However, if an output device does not provide hardware assistance to the application software, use of a palette-based format adds an extra level of complication prior to the appearance of the image on the display device. If the display device can support truecolor, it may be better to use a format supporting truecolor, even though the image may have only a few colors. As a general rule, images containing thousands or millions of colors are better stored using a format which supports truecolor, as the number and size of the elements needed in a palette-based format may cause the size of the palette to approach the size of the bitmapped image data itself.

Before we continue the discussion of how colors are stored in a file, we have to digress briefly to talk about how colors are defined. Discussion of palettes resumes in the section below called "... And Back to Palettes."
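Before taking that digression, here is a minimal sketch in C of the palette-editing trick described above (ours alone; the type and function names are hypothetical):

/* Turn every pure-red entry in a palette into green, without
   touching the pixel data at all; the change appears the next
   time the image is rendered. */
typedef struct { unsigned char r, g, b; } Color;

void RedToGreen(Color *palette, int n)
{
    int i;
    for (i = 0; i < n; i++)
        if (palette[i].r == 255 && palette[i].g == 0 && palette[i].b == 0)
        {
            palette[i].r = 0;      /* (255,0,0) becomes (0,255,0) */
            palette[i].g = 255;
        }
}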

A Word About Color Spaces

Colors are defined by specifying several, usually three, values. These values specify the amount of each of a set of fundamental colors, sometimes called color channels, which are mixed to produce composite colors. A composite color is then specified as an ordered set of values. If "ordered set of values" rings a bell for you (in the same way as might "ordered pair"), rest assured that it also did for the people who create color definitions. A particular color is said to represent a point in a graphic plot of all the possible colors. Because of this, people sometimes refer to a color as a point in a color space.

RGB is a common color definition. In the RGB color model or system, the colors red, green, and blue are considered fundamental and undecomposable. A color can be specified by providing an RGB triplet in the form (R,G,B). People sometimes think of color triplets in terms of percentages, although


percentages are not, in fact, used to express actual color definitions. You might characterize colors in the RGB color model as follows:

(0%, 0%, 0%)          Black
(100%, 100%, 100%)    White
(100%, 0%, 0%)        Red
(50%, 50%, 50%)       Light gray

and so on.

There are many refinements of this, and you can always find somebody to argue about what numbers specify which color. This is the basic idea, though. Each of these RGB triplets is said to define a point in the RGB color space.

When storing color data in a file, it's more practical to specify the value of each color component, not as a percentage, but as a value in a predefined range. If the space allotted for each color component is a byte (eight bits), the natural range is 0 to 255. Because colors are commonly defined using 24 bits, or three bytes, the natural thing to do is to assign each of the three bytes for use as the value of the color component in the color model. In RGB color, for instance, using three bytes for each color, colors are usually stored as RGB triplets in the range 0 to 255, with 0 representing zero intensity and 255 representing maximum intensity:

RGB = ([0-255], [0-255], [0-255])

Thus, the pixel values in the previous example would be:

(0, 0, 0)          Black
(255, 255, 255)    White
(255, 0, 0)        Red
(127, 127, 127)    Light gray

This example assumes, of course, that 0 stands for the least amount, and 255 for the most amount, of a particular color component. Occasionally, you will find that a format creator or application architect has perversely chosen to invert the "natural" sense of the color definition, and has made RGB (0, 0, 0) white and RGB (255, 255, 255) black, but, fortunately, this is rare. The section later in this chapter called "How Colors Are Represented" describes RGB and other color systems.


Some More About Truecolor

The word truecolor comes up in discussions about images that contain a large number of colors. What do we mean by large in this context? Given current hard disk prices, as well as the fact that few people have personal control of more than one gigabyte of disk space, most people consider 100 to 200K to be significantly large. Recall from the discussion above that a palette containing 256 color definitions uses a maximum of 1024 bytes of storage, and that a palette with 32,768 or more colors uses nearly 100K, at a minimum. In light of this, 256 is not a "large" number of colors. Most people consider 32,768, 65,536, and 16.7 million colors to be "large," however. And this is only the space a palette takes up; we're not even talking about the image data!

Instead of including in a file a huge palette in which pixel values are indices into the palette, pixel values can be treated instead as literal color values. In practice, pixel values are composed of three parts, and each part represents a component color in the color model (e.g., RGB) in use. Pixel values from images containing 32,768 or 65,536 colors are typically stored in two successive bytes, or 16 bits, in the file, because almost all machines handle data a minimum of one byte at a time. A rendering application must read these 16-bit pixel values and decompose them into 5-bit color component values:

16 bits = 2 bytes = (8 bits, 8 bits) => (1, 5, 5, 5) = (1, R, G, B)

Each 5-bit component can have values in the range 0 to 31. In the case of 32,768-color RGB images, only 15 bits are significant, and one bit is wasted or used for some other purpose. 65,536-color RGB images decompose the 16-bit pixel value asymmetrically, as shown below, in order to get use out of the extra bit:

16 bits = 2 bytes = (8 bits, 8 bits) => (6, 5, 5) = (R, G, B)

Actually, a more common subdivision is:

16 bits = 2 bytes = (8 bits, 8 bits) => (5, 6, 5) = (R, G, B)

Here, the extra bit is given to the green component, because the human eye is more sensitive to green than it is to red and blue. The color component order is arbitrary, and the order and interpretation of the color components within a pixel value varies from format to format. Thus, components of a 16-bit pixel value may be interpreted as (G,B,R) just as readily as (R,G,B) and (B,R,G). Specifying RGB colors in the sequence (R,G,B) has some appeal, because the colors are arranged by electromagnetic frequency, establishing their order in the physical spectrum.
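Unpacking such a (5,6,5) pixel value is a matter of shifts and masks, as in this minimal C sketch of ours:

/* Decompose a 16-bit (5,6,5) pixel value into its red, green, and
   blue components. */
void Unpack565(unsigned int pixel,
               unsigned int *r, unsigned int *g, unsigned int *b)
{
    *r = (pixel >> 11) & 0x1f;   /* top 5 bits    */
    *g = (pixel >> 5)  & 0x3f;   /* middle 6 bits */
    *b = pixel         & 0x1f;   /* bottom 5 bits */
}

A format that orders the components differently, or gives the extra bit to another channel, simply uses different shift counts and masks.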


24-bit pixel values are stored in either three bytes:

24 bits = 3 bytes = (8 bits, 8 bits, 8 bits) = (R, G, B)

or four bytes:

24 bits = 4 bytes = (8 bits, 8 bits, 8 bits, 8 bits) = (R, G, B, unused)

Equal division among the color components of the model, one byte to each component, is the most common scheme, although other divisions are not unheard of.

... And Back to Palettes

Earlier in this chapter we introduced the use of palettes. Here, we continue with the discussion of different types of palettes and illustrate with some actual examples.

Types of palettes

There are several different ways to talk about palettes. A single-channel palette contains only one color value per element, and this color value maps directly to a single pixel color. Each element of a single-channel palette might have the following form, for example:

(G) = (223)

A multiple-channel palette (or multi-channel palette) contains two or more individual color values per color element. Each element of a 3-channel palette using red, green, and blue might have the following form, for example:

(R,G,B) = (255,128,78)

Here, R specifies the value of one channel, G specifies the value of the second channel, and B specifies the value of the third channel. If an image contains four color components, as with the CMYK color system described later in this chapter, then a 4-channel color map might be used, and so on. Pixel-oriented palettes store all of the pixel color data as contiguous bits within each element of the array. As we noted above, in an RGB palette, each element in the palette consists of a triplet of values. This corresponds to the way pixel values are stored in the file, which is usually in RGB or BGR order: (RGBRGBRGBRGBRGB . . . ) or (BGRBGRBGRBGRBGR . . . )


Thus the palette looks like this:

   (RGB) (RGB) (RGB) or (BGR) (BGR) (BGR)

In a plane-oriented palette, pixel color components are segregated; corresponding color-channel values are stored together, and the palette looks like it is made up of three single-channel palettes, one for each color channel. This corresponds to the way pixel values are arranged in the file (i.e., as multiple color planes):

   (RRRRR . . . GGGGG . . . BBBBB) or (BBBBB . . . GGGGG . . . RRRRR)

Thus, a small palette might look like this:

   (R) (R) (R) (G) (G) (G) (B) (B) (B)

or:

   (B) (B) (B) (G) (G) (G) (R) (R) (R)

Although this may look like a single palette containing three color planes, it is usually best to visualize it as three separate palettes, each containing a single color plane. This way you will have no trouble calling the first item in each color plane element zero. It should be clear from the above discussion that both single- and multi-channel palettes can be pixel- or plane-oriented. For instance:

• A single-channel pixel-oriented palette contains one pixel value per element.

• A multi-channel pixel-oriented palette also contains one pixel per element, but each pixel contains two or more color channels of data.

• A single-channel plane-oriented palette contains one pixel per index and one bit per plane.

• A multi-channel plane-oriented palette contains one color channel value per element.

Figure 2-5 illustrates these different types of palettes.

As noted above, the number of elements in a palette is usually a power of two and typically corresponds to the maximum number of colors contained in the image, which is in turn reflected in the size of the pixel value in the file. For example, an 8-bit pixel value can represent 256 different colors and is accompanied by a 256-element palette. If an image has fewer colors than the maximum size of the palette, any unused elements in the palette will ideally be set to zero. Several formats, most notably CGM and TGA, have the ability to vary the number of elements in the palette as needed. If a TGA image contains only 57 colors, for instance, it may have only a 57-element palette.
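In code, using a palette is just an array subscript; here is a minimal sketch, with a hypothetical 3-channel entry type (the names are ours, not from any format specification):

/* A hypothetical 3-channel palette entry. */
typedef struct { unsigned char red, green, blue; } PALETTE_ENTRY;

/* An indexed pixel value selects a color by subscripting the palette. */
PALETTE_ENTRY lookup_color(const PALETTE_ENTRY palette[], unsigned char pixel)
{
    return palette[pixel];
}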


[Figure 2-5: Types of palettes (four panels: single-channel pixel-oriented, multiple-channel pixel-oriented, multiple-channel plane-oriented, and single-channel plane-oriented layouts)]

It is also interesting to note that the usable elements in a palette are not always contiguously arranged, are not always ordered, and do not always start with the zero index value filled. A 2-color image with a 256-color palette (yes, it's been done) may have its colors indexed at locations 0 and 1, 0 and 255, 254 and 255, or even 47 and 156. The locations are determined by the software writing the image file and therefore ultimately by the programmer who created the software application. (We choose not to comment further.)

Examples of palettes

Let's look at a few examples of palettes. The simplest is the 2-color, or monochrome, palette:

/* A BYTE is an 8-bit character */
typedef struct _MonoPalette
{
    BYTE Color[2];


} MONO_PALETTE;

MONO_PALETTE Mono = { {0x00, 0x01} };

In this example, we see a 2-element array containing the color values 0x00 and 0x01 in elements 0 and 1, respectively. In the file, all pixel values are indices. A pixel with a value of 0 serves as an index to the color represented by the value 0x00. Likewise, a pixel with a value of 1 serves as an index to the color represented by the value 0x01.

Because this bitmap contains only two colors, and each pixel color may be represented by a single bit, it may seem easier to store these values directly in the bitmap as bit values rather than use a palette. It is easier, of course, but some palette-only formats require that this type of palette be present even for monochrome bitmaps.

Here is a more practical example, a 16-element palette used to map gray-scale values:

/* A BYTE is an 8-bit character */
typedef struct _GrayPalette
{
    BYTE Color[16];
} GRAY_PALETTE;

GRAY_PALETTE Gray =
{
    {0x00, 0x14, 0x20, 0x2c, 0x38, 0x45, 0x51, 0x61,
     0x71, 0x82, 0x92, 0xa2, 0xb6, 0xcb, 0xe3, 0xff}
};

Notice in these two examples that each color element is represented by a single value, so this is a single-channel palette. We could just as easily use a 3-channel palette, representing each gray color element by its RGB value.


typedef struct _RGB
{
    BYTE Red;    /* Red channel value   */
    BYTE Green;  /* Green channel value */
    BYTE Blue;   /* Blue channel value  */
} RGB;

RGB Gray[16] =
{
    {0x00, 0x00, 0x00}, {0x14, 0x14, 0x14}, {0x20, 0x20, 0x20}, {0x2c, 0x2c, 0x2c},
    {0x38, 0x38, 0x38}, {0x45, 0x45, 0x45}, {0x51, 0x51, 0x51}, {0x61, 0x61, 0x61},
    {0x71, 0x71, 0x71}, {0x82, 0x82, 0x82}, {0x92, 0x92, 0x92}, {0xa2, 0xa2, 0xa2},
    {0xb6, 0xb6, 0xb6}, {0xcb, 0xcb, 0xcb}, {0xe3, 0xe3, 0xe3}, {0xff, 0xff, 0xff}
};

This last example is a pixel-oriented multi-channel palette. We can alter it to store the color information in a plane-oriented fashion like this:

typedef struct _PlanePalette
{
    BYTE Red[16];    /* Red plane values   */
    BYTE Green[16];  /* Green plane values */
    BYTE Blue[16];   /* Blue plane values  */
} PLANE_PALETTE;

PLANE_PALETTE Planes =
{
    {0x00, 0x14, 0x20, 0x2c, 0x38, 0x45, 0x51, 0x61,   /* Red plane   */
     0x71, 0x82, 0x92, 0xa2, 0xb6, 0xcb, 0xe3, 0xff},
    {0x00, 0x14, 0x20, 0x2c, 0x38, 0x45, 0x51, 0x61,   /* Green plane */
     0x71, 0x82, 0x92, 0xa2, 0xb6, 0xcb, 0xe3, 0xff},
    {0x00, 0x14, 0x20, 0x2c, 0x38, 0x45, 0x51, 0x61,   /* Blue plane  */
     0x71, 0x82, 0x92, 0xa2, 0xb6, 0xcb, 0xe3, 0xff}
};


Finally, let's look at a real-world example, the IBM VGA palette in wide use. This 256-color palette contains a 16-color sub-palette (the "EGA palette"), a 16-element gray-scale palette, and a palette of 24 colors, each with nine different variations of saturation and intensity. Notice that the last eight elements of the palette are not used and are thus set to zero:

typedef struct _VgaPalette
{
    BYTE Red;
    BYTE Green;
    BYTE Blue;
} VGA_PALETTE;

VGA_PALETTE VgaColors[256] =
{
    /* EGA Color Table */
    {0x00, 0x00, 0x00}, {0x00, 0x00, 0xaa}, {0x00, 0xaa, 0x00}, {0x00, 0xaa, 0xaa},
    {0xaa, 0x00, 0x00}, {0xaa, 0x00, 0xaa}, {0xaa, 0x55, 0x00}, {0xaa, 0xaa, 0xaa},
    {0x55, 0x55, 0x55}, {0x55, 0x55, 0xff}, {0x55, 0xff, 0x55}, {0x55, 0xff, 0xff},
    {0xff, 0x55, 0x55}, {0xff, 0x55, 0xff}, {0xff, 0xff, 0x55}, {0xff, 0xff, 0xff},
    /* Gray Scale Table */
    {0x00, 0x00, 0x00}, {0x14, 0x14, 0x14}, {0x20, 0x20, 0x20}, {0x2c, 0x2c, 0x2c},
    {0x38, 0x38, 0x38}, {0x45, 0x45, 0x45}, {0x51, 0x51, 0x51}, {0x61, 0x61, 0x61},
    {0x71, 0x71, 0x71}, {0x82, 0x82, 0x82}, {0x92, 0x92, 0x92}, {0xa2, 0xa2, 0xa2},
    {0xb6, 0xb6, 0xb6}, {0xcb, 0xcb, 0xcb}, {0xe3, 0xe3, 0xe3}, {0xff, 0xff, 0xff},
    /* 24-color Table */
    {0x00, 0x00, 0xff}, {0x41, 0x00, 0xff}, {0x7d, 0x00, 0xff}, {0xbe, 0x00, 0xff},
    {0xff, 0x00, 0xff}, {0xff, 0x00, 0xbe}, {0xff, 0x00, 0x7d}, {0xff, 0x00, 0x41},
    {0xff, 0x00, 0x00}, {0xff, 0x41, 0x00}, {0xff, 0x7d, 0x00}, {0xff, 0xbe, 0x00},
    {0xff, 0xff, 0x00}, {0xbe, 0xff, 0x00}, {0x7d, 0xff, 0x00}, {0x41, 0xff, 0x00},
    {0x00, 0xff, 0x00}, {0x00, 0xff, 0x41}, {0x00, 0xff, 0x7d}, {0x00, 0xff, 0xbe},
    {0x00, 0xff, 0xff}, {0x00, 0xbe, 0xff}, {0x00, 0x7d, 0xff}, {0x00, 0x41, 0xff},
    {0x7d, 0x7d, 0xff}, {0x9e, 0x7d, 0xff}, {0xbe, 0x7d, 0xff}, {0xdf, 0x7d, 0xff},
    {0xff, 0x7d, 0xff}, {0xff, 0x7d, 0xdf}, {0xff, 0x7d, 0xbe}, {0xff, 0x7d, 0x9e},
    {0xff, 0x7d, 0x7d}, {0xff, 0x9e, 0x7d}, {0xff, 0xbe, 0x7d}, {0xff, 0xdf, 0x7d},
    {0xff, 0xff, 0x7d}, {0xdf, 0xff, 0x7d}, {0xbe, 0xff, 0x7d}, {0x9e, 0xff, 0x7d},
    {0x7d, 0xff, 0x7d}, {0x7d, 0xff, 0x9e}, {0x7d, 0xff, 0xbe}, {0x7d, 0xff, 0xdf},
    {0x7d, 0xff, 0xff}, {0x7d, 0xdf, 0xff}, {0x7d, 0xbe, 0xff}, {0x7d, 0x9e, 0xff},
    {0xb6, 0xb6, 0xff}, {0xc7, 0xb6, 0xff}, {0xdb, 0xb6, 0xff}, {0xeb, 0xb6, 0xff},
    {0xff, 0xb6, 0xff}, {0xff, 0xb6, 0xeb}, {0xff, 0xb6, 0xdb}, {0xff, 0xb6, 0xc7},
    {0xff, 0xb6, 0xb6}, {0xff, 0xc7, 0xb6}, {0xff, 0xdb, 0xb6}, {0xff, 0xeb, 0xb6},
    {0xff, 0xff, 0xb6}, {0xeb, 0xff, 0xb6}, {0xdb, 0xff, 0xb6}, {0xc7, 0xff, 0xb6},
    {0xb6, 0xff, 0xb6}, {0xb6, 0xff, 0xc7}, {0xb6, 0xff, 0xdb}, {0xb6, 0xff, 0xeb},
    {0xb6, 0xff, 0xff}, {0xb6, 0xeb, 0xff}, {0xb6, 0xdb, 0xff}, {0xb6, 0xc7, 0xff},
    {0x00, 0x00, 0x71}, {0x1c, 0x00, 0x71}, {0x38, 0x00, 0x71}, {0x55, 0x00, 0x71},
    {0x71, 0x00, 0x71}, {0x71, 0x00, 0x55}, {0x71, 0x00, 0x38}, {0x71, 0x00, 0x1c},
    {0x71, 0x00, 0x00}, {0x71, 0x1c, 0x00}, {0x71, 0x38, 0x00}, {0x71, 0x55, 0x00},
    {0x71, 0x71, 0x00}, {0x55, 0x71, 0x00}, {0x38, 0x71, 0x00}, {0x1c, 0x71, 0x00},
    {0x00, 0x71, 0x00}, {0x00, 0x71, 0x1c}, {0x00, 0x71, 0x38}, {0x00, 0x71, 0x55},
    {0x00, 0x71, 0x71}, {0x00, 0x55, 0x71}, {0x00, 0x38, 0x71}, {0x00, 0x1c, 0x71},
    {0x38, 0x38, 0x71}, {0x45, 0x38, 0x71}, {0x55, 0x38, 0x71}, {0x61, 0x38, 0x71},
    {0x71, 0x38, 0x71}, {0x71, 0x38, 0x61}, {0x71, 0x38, 0x55}, {0x71, 0x38, 0x45},
    {0x71, 0x38, 0x38}, {0x71, 0x45, 0x38}, {0x71, 0x55, 0x38}, {0x71, 0x61, 0x38},
    {0x71, 0x71, 0x38}, {0x61, 0x71, 0x38}, {0x55, 0x71, 0x38}, {0x45, 0x71, 0x38},
    {0x38, 0x71, 0x38}, {0x38, 0x71, 0x45}, {0x38, 0x71, 0x55}, {0x38, 0x71, 0x61},
    {0x38, 0x71, 0x71}, {0x38, 0x61, 0x71}, {0x38, 0x55, 0x71}, {0x38, 0x45, 0x71},
    {0x51, 0x51, 0x71}, {0x59, 0x51, 0x71}, {0x61, 0x51, 0x71}, {0x69, 0x51, 0x71},
    {0x71, 0x51, 0x71}, {0x71, 0x51, 0x69}, {0x71, 0x51, 0x61}, {0x71, 0x51, 0x59},
    {0x71, 0x51, 0x51}, {0x71, 0x59, 0x51}, {0x71, 0x61, 0x51}, {0x71, 0x69, 0x51},
    {0x71, 0x71, 0x51}, {0x69, 0x71, 0x51}, {0x61, 0x71, 0x51}, {0x59, 0x71, 0x51},
    {0x51, 0x71, 0x51}, {0x51, 0x71, 0x59}, {0x51, 0x71, 0x61}, {0x51, 0x71, 0x69},
    {0x51, 0x71, 0x71}, {0x51, 0x69, 0x71}, {0x51, 0x61, 0x71}, {0x51, 0x59, 0x71},
    {0x00, 0x00, 0x41}, {0x10, 0x00, 0x41}, {0x20, 0x00, 0x41}, {0x30, 0x00, 0x41},
    {0x41, 0x00, 0x41}, {0x41, 0x00, 0x30}, {0x41, 0x00, 0x20}, {0x41, 0x00, 0x10},
    {0x41, 0x00, 0x00}, {0x41, 0x10, 0x00}, {0x41, 0x20, 0x00}, {0x41, 0x30, 0x00},
    {0x41, 0x41, 0x00}, {0x30, 0x41, 0x00}, {0x20, 0x41, 0x00}, {0x10, 0x41, 0x00},
    {0x00, 0x41, 0x00}, {0x00, 0x41, 0x10}, {0x00, 0x41, 0x20}, {0x00, 0x41, 0x30},
    {0x00, 0x41, 0x41}, {0x00, 0x30, 0x41}, {0x00, 0x20, 0x41}, {0x00, 0x10, 0x41},
    {0x20, 0x20, 0x41}, {0x28, 0x20, 0x41}, {0x30, 0x20, 0x41}, {0x38, 0x20, 0x41},
    {0x41, 0x20, 0x41}, {0x41, 0x20, 0x38}, {0x41, 0x20, 0x30}, {0x41, 0x20, 0x28},
    {0x41, 0x20, 0x20}, {0x41, 0x28, 0x20}, {0x41, 0x30, 0x20}, {0x41, 0x38, 0x20},
    {0x41, 0x41, 0x20}, {0x38, 0x41, 0x20}, {0x30, 0x41, 0x20}, {0x28, 0x41, 0x20},
    {0x20, 0x41, 0x20}, {0x20, 0x41, 0x28}, {0x20, 0x41, 0x30}, {0x20, 0x41, 0x38},
    {0x20, 0x41, 0x41}, {0x20, 0x38, 0x41}, {0x20, 0x30, 0x41}, {0x20, 0x28, 0x41},
    {0x2c, 0x2c, 0x41}, {0x30, 0x2c, 0x41}, {0x34, 0x2c, 0x41}, {0x3c, 0x2c, 0x41},
    {0x41, 0x2c, 0x41}, {0x41, 0x2c, 0x3c}, {0x41, 0x2c, 0x34}, {0x41, 0x2c, 0x30},
    {0x41, 0x2c, 0x2c}, {0x41, 0x30, 0x2c}, {0x41, 0x34, 0x2c}, {0x41, 0x3c, 0x2c},
    {0x41, 0x41, 0x2c}, {0x3c, 0x41, 0x2c}, {0x34, 0x41, 0x2c}, {0x30, 0x41, 0x2c},
    {0x2c, 0x41, 0x2c}, {0x2c, 0x41, 0x30}, {0x2c, 0x41, 0x34}, {0x2c, 0x41, 0x3c},
    {0x2c, 0x41, 0x41}, {0x2c, 0x3c, 0x41}, {0x2c, 0x34, 0x41}, {0x2c, 0x30, 0x41},
    /* Unused */
    {0x00, 0x00, 0x00}, {0x00, 0x00, 0x00}, {0x00, 0x00, 0x00}, {0x00, 0x00, 0x00},
    {0x00, 0x00, 0x00}, {0x00, 0x00, 0x00}, {0x00, 0x00, 0x00}, {0x00, 0x00, 0x00}
};

Color

Understanding how colors are defined in graphics data is important to understanding graphics file formats. In this section, we touch on some of the many factors governing how colors are perceived. This is by no means a comprehensive discussion; we just want to make sure that you have an appreciation of some of the problems that come up when people start to deal with color.

How We See Color

The eye has a finite number of color receptors that respond to the full range of light frequencies (about 380 to 770 nanometers). As a result, the eye theoretically supports only the perception of about 10,000 different colors simultaneously (although, as we have mentioned, many more colors than this can be perceived, though not resolved simultaneously). The eye is also biased to the kind of light it detects. It's most sensitive to green light, followed by red, and then blue. It's also the case that the visual perception system can sense contrasts between adjacent colors more easily than it can sense absolute color differences, particularly if those colors are physically separated in the object being viewed. In addition, the ability to discern colors varies from person to person; it's been estimated that one out of every twelve people has some form of color blindness.

Furthermore, the eye is limited in its ability to resolve the color of tiny objects. The size of a pixel on a typical CRT display screen, for example, is less than a third of a millimeter in diameter. When a large number of pixels are packed together, each one a different color, the eye is unable to resolve where one pixel ends and the next one begins from a normal viewing distance. The brain, however, must do something to bridge the gap between two adjacent differently colored pixels and will integrate, average, ignore the blur, or otherwise adapt to the situation. For these reasons and others, the eye typically perceives many fewer colors than are physically displayed on the output device.

How a color is created also plays an important role in how it is perceived. Normally, we think of colors as being associated with a single wavelength of light. We know, however, that two or more colors can be mixed together to produce a new color. An example of this is mixing green and red light to produce yellow light, or mixing yellow and blue pigments to produce green pigment. This mixing of colors can also occur when an object is illuminated by light. The color of the object will always mix with the color of the light to produce a third color. A blue object illuminated by white light appears blue, while the same object illuminated by red light will appear violet.

One implication of this is that images rendered to different devices will look different. Two different color monitors, for example, seldom produce identically perceived images, even if the monitors are the same make and model. Another implication is that images rendered to different types of devices will look different. An example is the difference between an image rendered to a monitor and one rendered to a color hardcopy device. Although there are numerous schemes designed to minimize color-matching problems, none is wholly satisfactory. For these and other reasons, the accurate rendition of color is full of difficulties, and much work continues to be done. Although a number of mechanical devices have recently appeared on the market, they are for the most part designed to work with one type of output device. The ultimate arbiter of color quality will always be the person who views the image on the output device.

How Colors are Represented

Several different mathematical systems exist which are used to describe colors. This section briefly describes the color systems most commonly used in the graphics file formats described in this book.

NOTE: Keep in mind that perception of color is affected by physiology, experience, and viewing conditions. For these reasons, no system of color representation has yet been defined, or indeed is likely to be, which is satisfactory under all circumstances.

For purposes of discussion here, colors are always represented by numerical values. The most appropriate color system to use depends upon the type of data contained in the file. For example, 1-bit, gray-scale, and color data might each best be stored using a different color model.


Color systems used in graphics files are typically of the trichromatic colorimetric variety, otherwise known as primary 3-color systems. With such systems, a color is defined by specifying an ordered set of three values. Composite colors are created by mixing varying amounts of three primary colors; primary colors are those which cannot be created by mixing other colors. The totality of colors that can be created by mixing primary colors makes up the color space, or color gamut.

Additive and subtractive color systems

Color systems can be separated into two categories: additive color systems and subtractive color systems. Colors in additive systems are created by adding colors to black. The more color that is added, the more the resulting color tends towards white. The presence of all the primary colors in sufficient amounts creates pure white, while the absence of all the primary colors creates pure black. Additive color environments are self-luminous; color on monitors, for instance, is additive.

Color subtraction works in the opposite way. Conceptually, primary colors are subtracted from white to create new colors. The more color that is added, the more the resulting color tends towards black. Thus, the presence of all the primary colors theoretically creates pure black, while the absence of all primary colors theoretically creates pure white. Another way of looking at this process is that black is the total absorption of all light by color pigments. Subtractive environments are reflective in nature, and color is conveyed to us by reflecting light from an external source. Any color image reproduced on paper is an example of the use of a subtractive color system.

No color system is perfect. As an example, in a subtractive color system the presence of all colors creates black, but in real-life printing the inks are not perfect. Mixing all ink colors usually produces a muddy brown rather than black. The blacks we see on paper are only approximations of the mathematical ideal, and likewise for other colors.

The next few sections describe some common color systems. Table 2-1 shows corresponding values for the primary and achromatic colors using the RGB, CMY, and HSV color systems.


Table 2-1: Equivalent RGB, CMY, and HSV values

                   RGB           CMY           HSV
   Red             255,0,0       0,255,255     0,240,120
   Yellow          255,255,0     0,0,255       40,240,120
   Green           0,255,0       255,0,255     80,240,120
   Cyan            0,255,255     255,0,0       120,240,120
   Blue            0,0,255       255,255,0     160,240,120
   Magenta         255,0,255     0,255,0       200,240,120
   Black           0,0,0         255,255,255   160,0,0
   Shades of Gray  63,63,63      191,191,191   160,0,59
                   127,127,127   127,127,127   160,0,120
                   191,191,191   63,63,63      160,0,180
   White           255,255,255   0,0,0         160,0,240

RGB (Red-Green-Blue)

RGB is perhaps the most widely used color system in image formats today. It is an additive system in which varying amounts of the colors red, green, and blue are added to black to produce new colors. Graphics files using the RGB color system represent each pixel as a color triplet—three numerical values in the form (R,G,B), each representing the amount of red, green, and blue in the pixel, respectively. For 24-bit color, the triplet (0,0,0) normally represents black, and the triplet (255,255,255) represents white. When the three RGB values are set to the same value—for example, (63,63,63), (127,127,127), or (191,191,191)—the resulting color is a shade of gray.

CMY (Cyan-Magenta-Yellow)

CMY is a subtractive color system used by printers and photographers for the rendering of colors with ink or emulsion, normally on a white surface. It is used by most hard-copy devices that deposit color pigments on white paper, such as laser and ink-jet printers. When illuminated, each of the three colors absorbs its complementary light color. Cyan absorbs red; magenta absorbs green; and yellow absorbs blue. By increasing the amount of yellow ink, for instance, the amount of blue in the image is decreased.

As in all subtractive systems, we say that in the CMY system colors are subtracted from white light by pigments to create new colors. The new colors are the wavelengths of light reflected, rather than absorbed, by the CMY pigments. For example, when cyan and magenta are absorbed, the resulting color is yellow. The yellow pigment is said to "subtract" the cyan and magenta components from the reflected light. When all of the CMY components are subtracted, or absorbed, the resulting color is black. Almost.

Whether it's possible to get a perfect black is debatable. Certainly, a good black color is not obtainable without expensive inks. In light of this, the CMY system has spawned a practical variant, CMYK, with K standing for the color black. To compensate for inexpensive and off-specification inks, the color black is tacked onto the color system and treated something like an independent primary color variable. For this reason, use of the CMYK scheme is often called 4-color printing, or process color. In many systems, a dot of composite color is actually a grouping of four dots, each one of the CMYK colors. This can readily be seen by examining a color photograph reproduced in a glossy magazine with a magnifying lens.

CMYK can be represented either as a color triple, like RGB, or as four values. If expressed as a color triple, the individual color values are just the opposite of RGB: for a 24-bit pixel value, the triplet (255,255,255) is black, and the triplet (0,0,0) is white. In most cases, however, CMYK is expressed as a series of four values.
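A minimal sketch of the complement relationship just described, as an idealized byte-wise inversion (as noted above, real inks do not subtract this neatly):

/* Convert an 8-bit-per-channel RGB triple to its idealized CMY complement. */
typedef struct { unsigned char c, m, y; } CMY_TRIPLE;

CMY_TRIPLE rgb_to_cmy(unsigned char r, unsigned char g, unsigned char b)
{
    CMY_TRIPLE out;
    out.c = (unsigned char)(255 - r);   /* cyan absorbs red      */
    out.m = (unsigned char)(255 - g);   /* magenta absorbs green */
    out.y = (unsigned char)(255 - b);   /* yellow absorbs blue   */
    return out;
}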

In many real-world color composition systems, the four CMYK color components are specified as percentages in the range of 0 to 100.

HSV (Hue, Saturation, and Value)

HSV is one of many color systems that vary the degree of properties of colors to create new colors, rather than using a mixture of the colors themselves. Hue specifies "color" in the common use of the term, such as red, orange, blue, and so on. Saturation (also called chroma) refers to the amount of white in a hue; a fully (100 percent) saturated hue contains no white and appears pure. By extension, a partly saturated hue appears lighter in color due to the admixture of white. Red hue with 50 percent saturation appears pink, for instance. Value (also called brightness) is the degree of self-luminescence of a color—that is, how much light it emits. A hue with high intensity is very bright, while a hue with low intensity is dark.

HSV (also called HSB for Hue, Saturation, and Brightness) most closely resembles the color system used by painters and other artists, who create colors by adding white, black, and gray to pure pigments to create tints, shades, and tones. A tint is a pure, fully saturated color combined with white, and a shade is a fully saturated color combined with black. A tone is a fully saturated color with both black and white (gray) added to it. If we relate HSV to this color mixing model, saturation is the amount of white, value is the amount of black, and hue is the color that the black and white are added to.

50

Overview

The HLS (Hue, Lightness, and Saturation) color model is closely related to HSV and behaves in the same way. There are several other color systems that are similar to HSV in that they create color by altering hue with two other values. These include: • HSI (Hue, Saturation, and Intensity) • HSL (Hue, Saturation, and Luminosity) • HBL (Hue, Brightness, and Luminosity) None of these is widely used in graphics files. YUV (Y-signal, U-signal, and V-signal) The YUV model is a bit different from the other colorimetric models. It is basi-

cally a linear transformation of RGB image data and is most widely used to encode color for use in television transmission. (Note, however, that this transformation is almost always accompanied by a separate quantization operation, which introduces nonlinearities into the conversion.) Y specifies gray scale or luminance. The U and V components correspond to the chrominance (color information). Other color models based on YUV include YCbCr and YPbPr. Black, White, and Gray Black, white, and gray are considered neutral (achromatic) colors that have no hue or saturation. Black and white establish the extremes of the range, with black having minimum intensity, gray having intermediate intensity, and white having maximum intensity. One can say that the gamut of gray is just a specific slice of a color space, each of whose points contains an equal amount of the three primary colors, has no saturation, and varies only in intensity. White, for convenience, is often treated in file format specifications as a primary color. Gray is usually treated the same as other composite colors. An 8-bit pixel value can represent 256 different composite colors or 256 different shades of gray. In 24-bit RGB color, (12,12,12), (128,128,128), and (199,199,199) are all shades of gray.
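As a sketch of what "a linear transformation of RGB" means, here is the commonly published CCIR 601 form of the conversion (the coefficients are standard, but individual formats may scale or quantize the results differently):

/* Classic RGB-to-YUV transformation (CCIR 601 coefficients). */
/* R, G, and B are expected in the range [0,1].                */
void rgb_to_yuv(double r, double g, double b,
                double *y, double *u, double *v)
{
    *y = 0.299 * r + 0.587 * g + 0.114 * b;  /* luminance             */
    *u = 0.492 * (b - *y);                   /* blue color difference */
    *v = 0.877 * (r - *y);                   /* red color difference  */
}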

Black, White, and Gray

Black, white, and gray are considered neutral (achromatic) colors that have no hue or saturation. Black and white establish the extremes of the range, with black having minimum intensity, gray having intermediate intensity, and white having maximum intensity. One can say that the gamut of gray is just a specific slice of a color space, each of whose points contains an equal amount of the three primary colors, has no saturation, and varies only in intensity. White, for convenience, is often treated in file format specifications as a primary color. Gray is usually treated the same as other composite colors. An 8-bit pixel value can represent 256 different composite colors or 256 different shades of gray. In 24-bit RGB color, (12,12,12), (128,128,128), and (199,199,199) are all shades of gray.

Overlays and Transparency

Certain file formats are designed to support the storage of still images captured from video sources. In practice, images of this sort are often overlaid on live video sources at render time. This is a familiar feature of conventional broadcast television, where still images are routinely shown next to live readers on the evening news.


Normal images are opaque, in the sense that no provision is made to allow the manipulation and display of multiple overlaid images. To allow image overlay, some mechanism must exist for the specification of transparency on a per-image, per-strip, per-tile, or per-pixel basis. In practice, transparency is usually controlled through the addition of information to each element of the pixel data.

The simplest way to allow image overlay is the addition of an overlay bit to each pixel value. Setting the overlay bit in an area of an image allows the rendering application or output device to selectively ignore those pixel values with the bit set. An example is the TGA format, which supports data in the format:

   (15 bits) = (R,G,B) = (5 bits, 5 bits, 5 bits)

Actually, this 15-bit pixel value is stored in 16 bits; an extra bit is left over which can be used to support the overlaying of images:

   (R,G,B,T) = (16 bits) = (5 bits, 5 bits, 5 bits, 1 bit overlay)

The image creator or rendering application can toggle the overlay bit, which is interpreted by the display hardware as a command to ignore that particular pixel. In this way, two images can be overlaid, and the top one adjusted to allow holes through which portions of the bottom image are visible.

This technique is in widespread, but not obvious, use. A rendering application can selectively toggle the overlay bit in pixel values of a particular color. More to the point, the application can turn off the display of any area of an image that is not a particular color. For example, if a rendering application encounters an image of a person standing in front of a contrasting, uniformly colored and lighted screen, the application can toggle the overlay bits on all the pixel values that are the color of the screen, leaving an image of the person cut out from the background. This cut-out image can then be overlaid on any other image, effectively adding the image of the person to the bottom image. This assumes, of course, that the color of the screen is different from any colors in the person portion of the image.

This is often how broadcast television weather reporters are overlaid on background maps and displays, for instance. Certain conventions inherited from traditional analog broadcasting technology are in widespread use in the broadcasting industry, including the use of a particular blue shade for background screens. When used in this way, the process is called chromakeying.

A more elaborate mechanism for specifying image overlays allows variations in transparency between bottom and overlaid images. Instead of having a single bit of overlay information, each pixel value has more (usually eight bits). An example is the TGA format, which supports data in the format:

   (24 bits) = (R,G,B) = (8 bits, 8 bits, 8 bits)

Because this 24-bit pixel value is stored in 32 bits, an extra eight bits are left over to support transparency:

   (R,G,B,T) = (8 bits, 8 bits, 8 bits, 8 bits transparency)

The eight transparency bits are sometimes called the alpha channel. Although there are some complications in the TGA format, an ideal 8-bit alpha channel can support 256 levels of transparency, from zero (indicating that the pixel is meant to be completely transparent) to 255 (indicating that the pixel is meant to be opaque).

Transparency data is usually stored as part of the pixel data, as in the example above, but it may also appear as a fourth plane, stored the same way as palette data in planar format files. It can, however, be stored as a separate block, independent of other image and palette information, and with the same dimensions as the actual image. This allows manipulation of the transparency data independent of the image pixel data.
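As a sketch of how a rendering application might apply such a value, here is a simple linear blend of a foreground component over a background component (one common interpretation of alpha, not something mandated by the TGA or any other specification):

/* Blend one 8-bit color component over another using an 8-bit alpha */
/* value, where 0 is fully transparent and 255 is fully opaque.      */
unsigned char alpha_blend(unsigned char fg, unsigned char bg,
                          unsigned char alpha)
{
    return (unsigned char)((fg * alpha + bg * (255 - alpha)) / 255);
}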

For Further Information

In this chapter, we have been able to touch upon only a small part of the science of computer graphics and imaging. For additional information, see the references cited below.

Books About Computer Graphics

The following books are some of the best texts available on the subject of computer graphics. Many of them contain detailed chapters on specific application areas, such as imaging, ray tracing, animation, art, computer-aided design, and 3D modelling.

Artwick, Bruce A., Applied Concepts in Microcomputer Graphics, Prentice-Hall, Englewood Cliffs, NJ, 1984.

Conrac Corporation, Raster Graphics Handbook, second edition, Van Nostrand Reinhold Company, New York, NY, 1985.

Foley, James D., van Dam, Andries, Feiner, S.K., and Hughes, J.F., Computer Graphics: Principles and Practice, second edition, Addison-Wesley, Reading, MA, 1990.

Hearn, Donald and Baker, M. Pauline, Computer Graphics, Prentice-Hall, Englewood Cliffs, NJ, 1986.

Netravali, Arun N. and Haskell, Barry G., Digital Pictures: Representation and Compression, Plenum Press, New York, NY, 1988.

Newman, William M. and Sproull, Robert F., Principles of Interactive Computer Graphics, second edition, McGraw Hill, New York, NY, 1973.

Rogers, David F. and Adams, J. Alan, Mathematical Elements for Computer Graphics, second edition, McGraw Hill, New York, NY, 1990.

Rogers, David F., Procedural Elements for Computer Graphics, McGraw Hill, New York, NY, 1985.

Rosenfeld, Azriel and Kak, Avinash C., Digital Picture Processing, second edition, Academic Press, San Diego, CA, 1982.

Sharpe, L., "Tiling: Turning Unwieldy Drawings into Neat Little Packets," Inform, Association for Image and Information Management, March 1989.

Watt, Alan, Fundamentals of Three-Dimensional Computer Graphics, Addison-Wesley, Reading, MA, 1989.

Books About Color and Colorimetry

The following books are excellent reference works on color, its measurement, and its effects on the human psycho-biological system.

Benson, K. Blair, Television Engineering Handbook, second edition, McGraw Hill, New York, NY, 1986.

Billmeyer, Fred W. and Saltzman, Max, Principles of Color Technology, second edition, John Wiley & Sons, New York, NY, 1981.

De Grandis, Luigina, Theory and Use of Color, translated by Gilbert, John, Harry N. Abrams, Inc., New York, NY, 1986.

Foley, James D., van Dam, Andries, Feiner, S.K., and Hughes, J.F., Computer Graphics: Principles and Practice, second edition, Addison-Wesley, Reading, MA, 1990.

Hunt, R.W.G., Measuring Color, second edition, Ellis Horwood, New York, NY, 1991.

Hunt, R.W.G., The Reproduction of Color, third edition, John Wiley & Sons, New York, NY, 1975.

Hunt, R.W.G., The Reproduction of Color in Photography, Printing and Television, Fountain Press, Tolworth, England, 1987.

Judd, Deane B., Color in Business, Science, and Industry, third edition, John Wiley & Sons, New York, NY, 1975.

Kueppers, Harald, Color: Origin, Systems, Uses, translated by Bradley, E., Van Nostrand Reinhold Ltd., London, England, 1973.

Optical Society of America, Committee on Colorimetry, The Science of Color, Washington, DC, 1963.

Wyszecki, Gunter and Stiles, W. S., Color Science: Concepts and Methods, Quantitative Data and Formulae, second edition, John Wiley & Sons, New York, NY, 1982.


Chapter 3

Bitmap Files

Bitmap files vary greatly in their details, but they all share the same general structure. This chapter looks at the components of a typical bitmap file. Later on in this chapter we'll get into explanations of the details, but for now let's just get a feel for the overall structure, explaining as necessary as we go along. Bitmap files consist of a header, bitmap data, and other information, which may include a color palette and other data.

A warning: inexplicably, people continue to design applications which use what are sometimes called raw formats. Raw format files consist solely of image data and omit any clues as to their structure. Both the creator of such files and the rendering applications must somehow know, ahead of time, how the files are structured. Because you usually can't tell one raw format file from another (except, perhaps, by examining their relative sizes), we'll confine our discussion in this chapter to bitmap files, which at least contain headers.

How Bitmap Files Are Organized

The basic components of a simple bitmap file are the following:

   Header
   Bitmap Data


If a file contains no image data, only a header will be present. If additional information is required that does not fit in the header, a footer will usually be present as well:

   Header
   Bitmap Data
   Footer

An image with a palette may store the palette in the header, but it will more likely appear immediately after the header:

   Header
   Palette
   Bitmap Data
   Footer

A palette can also appear immediately after the image data, like a footer, or be stored in the footer itself:

   Header
   Bitmap Data
   Palette

Scan-line tables and color correction tables may also appear after the header and before or after the image data:

   Header
   Palette
   Scan Line Table
   Color Correction Table (here)
   Bitmap Data
   Color Correction Table (or here)
   Footer


If an image file format is capable of holding multiple images, then an image file index may appear after the header, holding the offset values of the starting positions of the images in the file:

   Header
   Palette
   Bitmap Index
   Bitmap 1 Data
   Bitmap 2 Data
   ...
   Bitmap n Data
   Footer

If the format definition allows each image to have its own palette, the palette will most likely appear before the image data with which it is associated:

   Header
   Palette
   Bitmap Index
   Palette 1
   Bitmap 1 Data
   Palette 2
   Bitmap 2 Data
   ...
   Palette n
   Bitmap n Data
   Footer

We'll now look at the parts of a bitmap file piece by piece.

Header

The header is a section of binary- or ASCII-format data normally found at the beginning of the file, containing information about the bitmap data found elsewhere in the file. All bitmap files have some sort of header, although the format of the header and the information stored in it varies considerably from format to format. Typically, a bitmap header is composed of fixed fields. None of these fields is absolutely necessary, or found in all formats, but this list is typical of those formats in widespread use today. The following information is commonly found in a bitmap header:

   File Identifier
   File Version
   Number of Lines per Image
   Number of Pixels per Line
   Number of Bits per Pixel
   Number of Color Planes
   Compression Type
   X Origin of Image
   Y Origin of Image
   Text Description
   Unused Space

Later in this chapter we will present examples of headers from several actual formats, containing fields similar to those presented above.

File Identifier

A header usually starts with some sort of unique identification value called a file identifier, file ID, or ID value. Its purpose is to allow a software application to determine the format of the particular graphics file being accessed. ID values are often magic values in the sense that they are assigned arbitrarily by the creator of the file format. They can be a series of ASCII characters, such as BM or GIF, or a 2- or 4-byte word value, such as 4242h or 59a66a95h, or any other pattern that made sense to the format creator. The pattern is usually assumed to be unique, even across platforms, but this is not always the case, as we describe in the next few paragraphs. Usually, if a value in the right place in a file matches the expected identification value, the application reading the file header can assume that the format of the image file is known. Three circumstances arise, however, which make this less than a hard and fast rule.

First, some formats omit the image file identifier, starting off with data that can change from file to file. In this case, there is a small probability that the data will accidentally duplicate one of the magic values of another file format known to the application. Fortunately, the chance of this occurring is remote.

The second circumstance can come about when a new format is created and the format creator inadvertently duplicates, in whole or in part, the magic values of another format. In case this seems even more unlikely than accidental duplication, rest assured that it has already happened several times. Probably the chief cause is that, historically, programmers have borrowed ideas from other platforms, secure in the belief that their efforts would be isolated behind the "Chinese Wall" of binary incompatibility. In the past, confusion of formats with similar ID fields seldom came about and was often resolved by context when it did happen. Obviously this naive approach by format creators is no longer a survival skill. In the future, we can expect more problems of this sort as users, through local area networking and through advances in regional and global interconnectivity, gain access to data created on other platforms.

The third circumstance comes about when a vendor—either the format creator or format caretaker or a third party—changes, intentionally or unintentionally, the specification of the format, while keeping the ID value specified in the format documentation. In this case, an application can recognize the format, but be unable to read some or all of the data. If the idea of a vendor creating intentional, undocumented changes seems unlikely, rest assured that this, too, has already happened many times. Examples are the GIF, TIFF, and TGA file formats. In the case of the GIF and TGA formats, vendors (not necessarily the format creators) have extended or altered the formats to include new data types. In the case of TIFF, vendors have created and promulgated what can only be described as convenience revisions, apparently designed to accommodate coding errors or application program quirks.
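As a sketch of how a reader might test an ID value (the "GIF" signature is real; the function itself and its minimal error handling are our own illustration):

#include <stdio.h>
#include <string.h>

/* Return non-zero if the file begins with the three-character GIF ID. */
int has_gif_id(FILE *fp)
{
    char id[3];

    fseek(fp, 0L, SEEK_SET);
    if (fread(id, 1, 3, fp) != 3)
        return 0;
    return memcmp(id, "GIF", 3) == 0;
}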

File Version

Following the identification value in the header is usually a field containing the file version. Naturally enough, successive versions of bitmap formats may differ in characteristics such as header size, bitmap data supported, and color capability. Once having verified the file format through the ID value, an application will typically examine the version value to determine if it can handle the image data contained in the file.


Image Description Information

Next comes a series of fields that describe the image itself. As we will see, bitmaps are usually organized, either physically or logically, into lines of pixels. The field designated number of lines per image, also called the image length, image height, or number of scan lines, holds a value corresponding to the number of lines making up the actual bitmap data. The number of pixels per line, also called the image width or scan-line width, indicates the number of pixels stored in each line.

The number of bits per pixel indicates the size of the data needed to describe each pixel per color plane. This may also be stored as the number of bytes per pixel, and is more properly called pixel depth. Forgetting the exact interpretation of this field when coding format readers is a common source of error. If the bitmap data is stored in a series of planes, the number of color planes indicates the number of planes used. Often the value defaults to one. There is an increasing tendency to store bitmaps in single-plane format, but multi-plane formats continue to be used in support of special hardware and alternate color models.

The number of bits in a line of the image can be calculated by multiplying the values of number of bits per pixel, number of pixels per line, and number of color planes together. We can determine the number of bytes per scan line by then dividing the resulting product by eight. Note that there is nothing requiring the number of bits per pixel to be an integral number of 8-bit bytes.

Compression Type

If the format supports some sort of encoding designed to reduce the size of the bitmap data, then a compression type field will be found in the header. Some formats support multiple compression types, including raw or uncompressed data. Some format revisions consist mainly of additions or changes to the compression scheme used. Data compression is an active field, and new types of compression accommodating advances in technology appear with some regularity. TIFF is one of the common formats which has exhibited this pattern in the past. For more information about compression, see Chapter 9, Data Compression.

x and y Origins

x origin of image and y origin of image specify a coordinate pair that indicates where the image starts on the output device. The most common origin pair is 0,0, which puts one corner of the image at the origin point of the device. Changing these values normally causes the image to be displayed at a different location when it is rendered.


Most bitmap formats were designed with certain assumptions about the output device in mind, and thus can be said to model either an actual or virtual device having a feature called the drawing surface. The drawing surface has an implied origin, which defines the starting point of the image, and an implied orientation, which defines the direction in which successive lines are drawn as the output image is rendered. Various formats and display devices vary in the positioning of the origin point and orientation direction. Many place the origin in the upper-left corner of the display surface, although it can also appear in the center or in the lower-left corner. Others, although this is far less common, put it in the upper- or lower-right corner.

Orientation models with the origin in the upper-left corner are often said to have been created in support of hardware, and there may be some historical and real-world justification for this. People with backgrounds in mathematics and the natural sciences, however, are used to having the origin in the lower-left corner or in the center of the drawing surface. You might find yourself guessing at the background of the format creator based on the implied origin and orientation found in the format. Some formats include provisions for the specification of the origin and orientation. An image displayed by an application incorporating an incorrect assumption about the origin point or orientation may appear upside down or backwards, or may be shifted horizontally some fraction of the width of the drawing surface on the output device.

Text Description

Sometimes the header will contain a text description field, which is a comment section consisting of ASCII data describing the name of the image, the name of the image file, the name of the person who created the image, or the software application used to create it. This field may contain 7-bit ASCII data, for portability of the header information across platforms.

Unused Space

At the end of the header may be an unused field, sometimes referred to as padding, filler, reserved space, or reserved fields. Reserved fields contain no data, are undocumented and unstructured, and essentially act as placeholders; all we know about them are their sizes and positions in the header. Thus, if the format is altered at some future date to incorporate new data, the reserved space can be used to describe the format or location of this data while still maintaining backward compatibility with programs supporting older versions of the format. This is a common method used to minimize version problems—creating an initial version based on a fixed header substantially larger than necessary. New fields can then be added to reserved areas of the header in subsequent revisions of the format without altering the size of the header.


Often format headers are intentionally padded using this method to 128, 256, or 512 bytes. This has some implications for performance, particularly on older systems, and is designed to accommodate common read and write buffer sizes. Padding may appear after the documented fields at the end of the header, and this is sometimes an indication that the format creator had performance and caching issues in mind when the format was created.

Reserved fields are sometimes only artifacts left over from working versions of the format, unintentionally frozen into place when the format was released. A vendor will normally change or extend a file format only under duress, or as a rational response to market pressure typically caused by an unanticipated advance in technology. In any case, the upgrade is almost always unplanned. This usually means that a minimal amount of effort goes into shoehorning new data into old formats. Often the first element sacrificed in the process is complete backward compatibility with prior format versions.

Examples of Bitmap Headers

To give you some idea about what to expect when looking at bitmap headers, we'll take a look at three typical ones. We'll start with one of the least complex (Microsoft Windows Bitmap) and proceed to two that are more complex (Sun raster and Kofax raster). For each, we'll show a C data structure indicating the size and relative position of each field in the header.

Example 1: Microsoft Windows Bitmap Version 1.x Format Header

//
// Header structure for the MS Windows 1.x Bitmap Format
//
// a BYTE is an unsigned char
// a WORD is an unsigned short int (16 bits)
//
typedef struct _WinHeader1x
{
// Type  Name       Offset  Comment
   WORD  Type;      /* 00h  File Type Identifier (always 0) */
   WORD  Width;     /* 02h  Width of Bitmap in Pixels       */
   WORD  Height;    /* 04h  Height of Bitmap in Scanlines   */
   WORD  ByteWidth; /* 06h  Width of Bitmap in Bytes        */
   BYTE  Planes;    /* 08h  Number of Color Planes          */
   BYTE  BitsPixel; /* 09h  Number of Bits Per Pixel        */
} OLDWINHEAD;

As you can see from the comments, this particular header contains a file identification value, the width and height of the image, the width of a single line of the image (in bytes), the number of color planes, and the number of bits per pixel. This information is close to the bare minimum required to describe a bitmap image so it can be read and rendered in an arbitrary environment.

Note that the Windows 1.x header contains no information about color or image data compression. A more advanced image format will have provisions for both color information and at least a simple compression scheme. An example of a more elaborate header is that found in the Sun raster image file format, shown in the next example.

Example 2: Sun Raster Format Header

//
// Header structure for the Sun Raster Image File Format
//
// a WORD here is an unsigned long int (32 bits)
//
typedef struct _SunRasterHead
{
// Type  Name       Offset  Comment
   WORD  Magic;     /* 00h  Magic Number (59a66a95h)     */
   WORD  Width;     /* 04h  Width of Image in Pixels     */
   WORD  Height;    /* 08h  Height of Image in Pixels    */
   WORD  Depth;     /* 0Ch  Number of Bits Per Pixel     */
   WORD  Length;    /* 10h  Length of Image in Bytes     */
   WORD  Type;      /* 14h  File Format Encoding Type    */
   WORD  MapType;   /* 18h  Type of Color Map            */
   WORD  MapLength; /* 1Ch  Length of Color Map in Bytes */
} SUNRASHEAD;

The Sun raster header contains information similar to the Windows 1.x bitmap header illustrated above. But note that it also contains fields for the type of data encoding method and the type and size of the color map or palette associated with the bitmap data. Neither of the two headers mentioned above contains a text description field. One header that does is the one used by the Kofax Image File Format, shown in Example 3.


Example 3: Kofax Raster Format Header

//
// Header structure for the Kofax Raster Image File Format
//
// a LONG is a signed long int (32 bits)
// a SHORT is a signed short int (16 bits)
//
typedef struct _KofaxHeader
{
// Type   Name           Offset  Comment
   LONG   Magic;         /* 00h  Magic Number (68464B2Eh)     */
   SHORT  Headersize;    /* 04h  Header Size                  */
   SHORT  HeaderVersion; /* 06h  Header Version Number        */
   LONG   ImageId;       /* 08h  Image Identification Number  */
   SHORT  Width;         /* 0Ch  Image Width in Bytes         */
   SHORT  Length;        /* 0Eh  Image Length in Scanlines    */
   CHAR   Format;        /* 10h  Image Data Code (Encoding)   */
   CHAR   Bitsex;        /* 11h  Non-zero if Bitsex Reversed  */
   SHORT  Color;         /* 12h  Non-zero if Color Inverted   */
   SHORT  Xres;          /* 14h  Horizontal Dots Per Inch     */
   SHORT  Yres;          /* 16h  Vertical Dots Per Inch       */
   CHAR   Planes;        /* 18h  Number of Planes             */
   CHAR   BitsPerPixel;  /* 19h  Number of Bits Per Pixel     */
   SHORT  PaperSize;     /* 1Ah  Original Paper Size          */
   CHAR   Reserved1[20]; /* 1Ch  20-byte Reserved Field       */
   LONG   Dcreated;      /* 30h  Date Created                 */
   LONG   Dmodified;     /* 34h  Date Modified                */
   LONG   Daccessed;     /* 38h  Date Accessed                */
   CHAR   Reserved2[4];  /* 3Ch  4-byte Reserved Field        */
   LONG   Ioffset;       /* 40h  Index Text Info Offset       */
   LONG   Ilength;       /* 44h  Index Text Info Length       */
   LONG   Coffset;       /* 48h  Comment Text Offset          */
   LONG   Clength;       /* 4Ch  Comment Text Length in Bytes */
   LONG   Uoffset;       /* 50h  User Data Offset             */
   LONG   Ulength;       /* 54h  User Data Length in Bytes    */
   LONG   Doffset;       /* 58h  Image Data Offset            */
   LONG   Dlength;       /* 5Ch  Image Data Length in Bytes   */
   CHAR   Reserved3[32]; /* 60h  32-byte Reserved Field       */
} KFXHEAD;

Note that the Kofax header is considerably larger than either the Windows bitmap or Sun raster headers. Included are fields which describe the horizontal and vertical resolution, paper size of the image subject, offset values of different types of data stored in the file, and the time and date that the image was created, last modified, and accessed.

Also note the appearance of several fields marked reserved. The Kofax format is intentionally padded to accommodate common read and write buffer sizes. It uses only 72 bytes of the header in the revision presented here, but is padded to 128 bytes. The Kofax format specification promises that the first 128 bytes of every Kofax image file will be the header, regardless of future revisions. Applications are thus free to ignore the reserved data, and the format is presumably designed to allow this without dire penalty. See the general discussion of reserved fields in the section above called "Unused Space."

Optimizing Header Reading

Header reading speed can be optimized by looking at the ways in which your application uses the data, because reading the header data can usually be performed in several ways. If only selected values in the header are needed, the application can calculate the offset of the data from some key landmark, such as the start of the file. The application can then seek directly to the data value required and read it. The offset values appearing in the comments in the header examples above can be used as offset arguments for the seek function used.
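For instance, a reader interested only in the Sun raster Depth field could seek straight to it, using the offset from the header comments above; a minimal sketch (Sun raster files store values in big-endian order, so the bytes are assembled explicitly):

#include <stdio.h>

/* Read only the Depth field of a Sun raster header (offset 0Ch). */
long read_sun_depth(FILE *fp)
{
    unsigned char b[4];

    fseek(fp, 0x0C, SEEK_SET);         /* seek directly to the field */
    if (fread(b, 1, 4, fp) != 4)
        return -1;
    return ((long)b[0] << 24) | ((long)b[1] << 16) |
           ((long)b[2] << 8)  |  (long)b[3];
}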

If most of the data contained in the header is needed by the application, then it may be more convenient to read the entire header into a buffer or preallocated data structure. This can be performed quickly, taking advantage of any efficiencies provided by integral power-of-two block reads, as mentioned above. All of the header data will be available in memory and can be cached for use when needed. One problem, however, occurs when the byte order of the file is different from the native byte order of the system on which the file is being read. Most block read functions, for example, are not designed to supply automatic conversion of data. Another problem may arise when data structures are padded by the compiler or runtime environment for purposes of data member alignment. These problems and others are discussed in more detail in Chapter 6, Platform Dependencies.
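A whole-header read might look like the following sketch, using the SUNRASHEAD structure defined earlier (it assumes the compiler has not padded the structure, and it leaves byte-order correction to the caller, per the caveats just mentioned):

#include <stdio.h>

/* Read the entire header in one block read.                      */
/* Returns 1 on success; the caller must still fix up byte order. */
int read_header(FILE *fp, SUNRASHEAD *hdr)
{
    fseek(fp, 0L, SEEK_SET);
    return fread(hdr, sizeof(SUNRASHEAD), 1, fp) == 1;
}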

Bitmap Data

The actual bitmap data usually makes up the bulk of a bitmap format file. We'll assume you've read Chapter 2, Computer Graphics Basics, and that you understand about pixel data and related topics like color.

In many bitmap file formats the actual bitmap data is found immediately after the end of the file header. It may be found elsewhere in the file, however, to


accommodate a palette or other data structure which may be present. If this is the case, an offset value will appear in the header or in the documentation, indicating where to find the beginning of the image data in the file.

One thing you might notice while looking over the file format specifications described in Part II of this book is the relative absence of information explaining the arrangement of the actual bitmap data in the file. To find out how the data is arranged, you usually have to figure it out from related information pertaining to the structure of the file. Fortunately, the structuring of bitmap data within most files is straightforward and easily deduced. As mentioned above, bitmap data is composed of pixel values. Pixels on an output device are usually drawn in scan lines corresponding to rows spanning the width of the display surface. This fact is usually reflected in the arrangement of the data in the file. This exercise of deducing the exact arrangement of data in the file is sometimes helped by having some idea of the display devices the format creator had in mind.

One or more scan lines combined form a two-dimensional grid of pixel data; thus we can think of each pixel in the bitmap as located at a specific logical coordinate. A bitmap can also be thought of as a sequence of values that logically maps bitmap data in a file to an image on the display surface of an output device. Actual bitmap data is usually the largest single part of any bitmap format file.

How Bitmap Data Is Written to Files

Before an application writes an image to a file, the image data is usually first assembled in one or more blocks of memory. These blocks can be located in the computer's main memory space or in part of an auxiliary data collection device. Exactly how the data is arranged then depends on a number of factors, including the amount of memory installed, the amount available to the application, and the specifics of the data acquisition or file write operation in use. When bitmap data is finally written to a file, however, only one of two methods of organization is normally used: scan-line data or planar data.

Scan-line data

The first, and simplest, method is the organization of pixel values into rows or scan lines, briefly mentioned above. If we consider every image to be made up of one or more scan lines, the pixel data in the file describing that image will be a series of sets of values, each set corresponding to a row of the image. Multiple rows are represented by multiple sets written from start to end in the file. This is the most common method for storing image data organized into rows.


If we know the size of each pixel in the image, and the number of pixels per row, we can calculate the offset of the start of each row in the file. For example, in an 8-bit image every pixel value is one byte long. If the image is 21 pixels wide, rows in the file are represented by sets of pixel values 21 bytes wide. In this case, the rows in the file start at offsets of 0, 21, 42, 63, etc. bytes from the start of the bitmap data.

On some machines and in some formats, however, rows of image data must be certain even-byte multiples in length. An example is the common rule requiring bitmap row data to end on long-word boundaries, where a long word is four bytes long. In the example mentioned in the preceding paragraph, an image 21 pixels wide would then be stored in the file as sets of pixel values 24 bytes in length, and the rows would start at file offsets 0, 24, 48, 72. The extra three bytes per row are padding. In this particular case, three bytes of storage in the file are wasted for every row, and in fact, images that are 21 pixels wide take up the same amount of space as images 24 pixels wide. In practice, this storage inefficiency is usually (but not always) compensated for by an increase of speed gained by catering to the peculiarities of the host machine in regard to its ability to quickly manipulate two or four bytes at a time. The actual width of the image is always available to the rendering application, usually from information in the file header.
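A minimal sketch of this row-size arithmetic, assuming the long-word alignment rule just described:

/* Bytes needed to store one scan line, padded to a long-word boundary. */
long row_size(long pixels_per_line, long bits_per_pixel)
{
    long bytes = (pixels_per_line * bits_per_pixel + 7) / 8; /* whole bytes */
    return (bytes + 3) & ~3L;        /* round up to a multiple of four     */
}

/* row_size(21, 8) returns 24; row offsets are then 0, 24, 48, 72, ... */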

In a 24-bit image, each image pixel corresponds to a 3-byte long pixel value in the file. In the example we have been discussing, an image 21 pixels wide would require a minimum of 21 x 3 = 63 bytes of storage. If the format requires that the row starts be long-word aligned, 64 bytes would be required to hold the pixel values for each row. Occasionally, as mentioned above, 24-bit image data is stored as a series of 4-byte long pixel values, and each image row would then require 21 x 4 = 84 bytes. Storing 24-bit image data as 4-byte values has the advantage of always being long-word aligned, and again may make sense on certain machines.
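These stride calculations are easy to get wrong at the boundaries, so it is worth writing them down once. The following is a minimal sketch (the function name and parameters are ours, not taken from any particular format) that computes the padded length of one row in bytes; the offset of row n from the start of the bitmap data is then simply n times this value:

//
// Compute the number of bytes used to store one row of pixels in
// the file, padding each row out to a multiple of padBytes bytes.
// A sketch only; individual formats state their own padding rules.
//
long RowStride(long pixelsPerRow, long bitsPerPixel, long padBytes)
{
    long bytes = (pixelsPerRow * bitsPerPixel + 7) / 8;     // round up to whole bytes
    return ((bytes + padBytes - 1) / padBytes) * padBytes;  // round up to pad boundary
}

With the examples above, RowStride(21, 8, 4) yields 24 and RowStride(21, 24, 4) yields 64, matching the padded row lengths just described.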

In a 4-bit image, each pixel corresponds to one-half byte, and the data is usually stored two pixels per byte, although storing the data as 1-byte pixel values would make the data easier to read and, in fact, is not unheard of. Figure 3-1 illustrates the organization of pixel data into scan lines.

[Figure 3-1: Organization of pixel data into scan lines]

Planar data

The second method of pixel value organization involves the separation of image data into two or more planes. Files in which the bitmap data is organized in this way are called planar files. We will use the term composite image to refer to an image with many colors (i.e., not monochrome, not grayscale, and not one single color). Under this definition, most normal colored images that you are familiar with are composite images.

A composite image, then, can be represented by three blocks of bitmap data, each block containing just one of the component colors making up the image. Constructing each block is akin to the photographic process of making a separation—using filters to break up a color photograph into a set of component colors, usually three in number. The original photograph can be reconstructed by combining the three separations. Each block is composed of rows laid end to end, as in the simpler storage method explained above; in this case, more than one block is now needed to reconstruct the image. The blocks may be stored consecutively or may be physically separated from one another in the file.

Planar format data is usually a sign that the format designer had some particular display device in mind, one that constructed composite color pixels from components routed through hardware designed to handle one color at a time. For reasons of efficiency, planar format data is usually read one plane at a time in blocks, although an application may choose to laboriously assemble composite pixels by reading data from the appropriate spot in each plane sequentially.

As an example, a 24-bit image two rows by three columns wide might be represented in RGB format as six RGB pixel values:

(00, 01, 02) (03, 04, 05) (06, 07, 08)
(09, 10, 11) (12, 13, 14) (15, 16, 17)

but be written to the file in planar format as:

(00) (03) (06) (09) (12) (15)  (red plane)
(01) (04) (07) (10) (13) (16)  (green plane)
(02) (05) (08) (11) (14) (17)  (blue plane)

Notice that the exact same data is being written; it's just arranged differently. In the first case, an image consisting of six 24-bit pixels is stored as six 3-byte pixel values arranged in a single plane. In the second, planar, method, the same image is stored as 18 1-byte pixel values arranged in three planes, each plane corresponding to red, green, and blue information, respectively. Each method takes up exactly the same amount of space, 18 bytes, at least in this example.

It's pretty safe to say that most bitmap files are stored in non-planar format. Supporting planar hardware, then, usually means disassembling the pixel data and creating multiple color planes in memory, which are then presented to the planar rendering subroutine or the planar hardware. Planar files may need to be assembled in a third buffer or, as mentioned above, laboriously set (by the routine servicing the output device) one pixel at a time. Figure 3-2 illustrates the organization of pixel data into color planes.
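Disassembling interleaved pixel data into planes is mechanical. The following sketch is our own illustration (the names are hypothetical, and no particular format is implied); it separates a buffer of interleaved RGB pixel values into three plane buffers supplied by the caller:

//
// Separate interleaved RGB data (RGBRGBRGB...) into three color
// planes. The interleaved buffer holds pixelCount * 3 bytes; each
// plane buffer must hold pixelCount bytes.
//
void InterleavedToPlanar(const unsigned char *interleaved,
                         unsigned char *red, unsigned char *green,
                         unsigned char *blue, long pixelCount)
{
    long i;
    for (i = 0; i < pixelCount; i++)
    {
        red[i]   = interleaved[i * 3];      // first byte of each pixel
        green[i] = interleaved[i * 3 + 1];  // second byte
        blue[i]  = interleaved[i * 3 + 2];  // third byte
    }
}

Running the loop in the other direction assembles composite pixels from planar data.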

[Figure 3-2: Organization of pixel data into color planes]

Different Approaches to Bitmap Data Organization

Normally, we consider an image to be made up of a number of rows, each row a certain number of pixels wide. Pixel data representing the image can be stored in the file in three ways: as contiguous data, as strips, or as tiles. Figure 3-3 illustrates these three representations.

[Figure 3-3: Examples of bitmap data organization (contiguous scan lines, strips, and tiles)]

Contiguous data

The simplest method of row organization is where all of the image data is stored contiguously in the file, one row following the last. To retrieve the data you read the rows in file order, which delivers the rows in the order in which they were written. The data in this organizational scheme is the file equivalent of a two-dimensional array. You can index into the data in the file knowing the width of the row in pixels and the storage format and size of the pixel values. Data stored contiguously in this manner can be read quickly, in large chunks, and assembled in memory quite easily.

Strips

In the second method of file organization, images are stored in strips, which also consist of rows stored contiguously. The total image, however, is represented by more than one strip, and the individual strips may be widely separated in the file. Strips divide the image into a number of segments, which are always just as wide as the original image.

Strips make it easier to manage image data on machines with limited memory. An image 1024 rows long, for instance, can be stored in the file as eight strips, each strip containing 128 rows. Arranging a file into strips facilitates buffering. If this isn't obvious, consider an uncompressed 8-bit image 1024 rows long and 10,000 pixels wide, containing 10 megabytes of pixel data. Even dividing the data into eight strips of 128 rows leaves the reading application with the job of handling 1.25 megabytes of data per strip, a chore even for a machine with a lot of flat memory and a fast disk. Dividing the data into 313 strips, however, brings each strip down to a size which can be read and buffered quickly by machines capable of reading only 32K of data per file read pass.

Strips also come into play when pixel data is stored in a compressed or encoded format in the file. In this case, an application must first read the compressed data into a buffer and then decompress or decode the data into another buffer the same size as, or larger than, the first. Arranging the compression on a per-strip basis greatly eases the task of the file reader, which need handle only one strip at a time. You'll find that strips are often evidence that a file format creator has thought about the limitations of the target platforms and has wanted to avoid limiting the size of images the format can handle. Image file formats allowing or demanding that data be stored in strips usually provide for the storage of information in the file header such as the number of strips of data, the size of each strip, and the offset position of each strip within the file.
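The bookkeeping for strips is simple enough to sketch. The helper functions below are our own illustration (hypothetical names) and assume, as many formats require, that strips hold whole scan lines:

//
// Choose the number of whole scan lines per strip that fits in a
// buffer of maxStripSize bytes, then count the strips required.
// A sketch only; real formats record these values in the header.
//
long RowsPerStrip(long bytesPerRow, long maxStripSize)
{
    long rows = maxStripSize / bytesPerRow;
    return (rows > 0) ? rows : 1;           // never less than one row
}

long StripCount(long imageRows, long rowsPerStrip)
{
    return (imageRows + rowsPerStrip - 1) / rowsPerStrip;   // round up
}

For the 10,000-byte rows above and a 32K buffer, RowsPerStrip() returns 3 and StripCount() returns 342; the 313-strip figure just mentioned appears to come from dividing the 10-megabyte total directly by 32K, without keeping scan lines whole. Either way, each strip fits the small machine's read buffer.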

Tiles

A third method of bitmap data organization is tiling. Tiles are similar to strips in that each is a delineation of a rectangular area of an image. However, unlike strips, which are always the width of the image, tiles can have any width at all, from a single pixel to the entire image. Thus, in one sense, a contiguous image is actually one large tile. In practice, however, tiles are arranged so that the pixel data corresponding to each is between 4K and 64K in size and is usually of a height and width divisible by 16. These limits help increase the efficiency with which the data can be buffered and decoded. When an image is tiled, it is generally the case that all the tiles are the same size, that the entire image is tiled, that the tiles do not overlap, and that the tiles are all encoded using the same encoding scheme. One exception is the CALS Raster Type II format, which allows image data to be composed of both encoded and unencoded image tiles. Tiles are usually left unencoded when such encoding would cause the tile data to increase in size (negative compression) or when an unreasonable amount of time would be required to encode the tile.

Dividing an image into tiles also allows different compression schemes to be applied to different parts of an image to achieve an optimal compression ratio. For example, one portion of an image (a very busy portion) could be divided into tiles that are compressed using JPEG, while another portion of the same image (a portion containing only one or two colors) could be stored as tiles that are run-length encoded. In this case, the tiles in the image would not all be the same uniform size; the smallest would be only a few pixels, and the largest would be hundreds or thousands of pixels on a side.

Tiling sometimes allows faster decoding and decompression of larger images than would be possible if the pixel data were organized as lines or strips. Because tiles can be individually encoded, file formats allowing the use of tiles will contain tile quantity, size, and offset information in the header specifications. Using this information, a reader that needs to display only the bottom-right corner of a very large image would have to read only the tiles for that area of the image; it would not have to read all of the image data that was stored before it.
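Finding the tiles that cover a region is a matter of integer division. The sketch below is our own illustration (hypothetical names) and assumes uniform, non-overlapping tiles numbered in row-major order from the upper-left corner:

#include <stdio.h>

//
// Print the indices of the tiles covering a rectangular region whose
// corners are (x0, y0) and (x1, y1), inclusive, in pixel coordinates.
//
void TilesForRegion(long imageWidth, long tileWidth, long tileHeight,
                    long x0, long y0, long x1, long y1)
{
    long tilesAcross = (imageWidth + tileWidth - 1) / tileWidth;
    long row, col;

    for (row = y0 / tileHeight; row <= y1 / tileHeight; row++)
        for (col = x0 / tileWidth; col <= x1 / tileWidth; col++)
            printf("tile %ld\n", row * tilesAcross + col);
}

A reader would then seek to each listed tile's offset, taken from the table in the header, and decode only those tiles.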

Certain newer tile-oriented compression schemes, such as JPEG, naturally work better with file formats capable of supporting tiling. A good example of this is the incorporation of JPEG in later versions of the TIFF file format. For more information about the use of tiles, see the article on the TIFF file format in Part II of this book.

Palette

Many bitmap file formats contain a color palette. For a discussion of palettes of different kinds, see "Pixel Data and Palettes" in Chapter 2.

Footer

The footer, sometimes called the trailer, is a data structure similar to a header and is often an addition to the original header, but appended to the end of a file. A footer is usually added when the file format is upgraded to accommodate new types of data and it is no longer convenient to add or change information in the header. It is mainly a result of a desire to maintain backward compatibility with previous versions of the format. An example of this is the TGA format, later revisions of which contain a footer that enables applications to identify the different versions of its format and to access special features available only in the later version of the format.

Because by definition it appears after the image data, which is usually of variable length, a footer is never found at a fixed offset from the beginning of an image file unless the image data is always the same size. It is, however, usually located at a specified offset from the end of an image file. Like headers, footers are usually a fixed size. The offset value of the footer may also be present in the header information, provided there was reserved space or padding available in the header. Also like a header, a footer may contain an identification field or magic number which can be used by a rendering application to differentiate it from other data structures in the file.
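Because a fixed-size footer sits at a known distance from the end of the file, reading one takes a single seek. A minimal sketch follows; FOOTER_SIZE and the validation step are placeholders, since each format that uses a footer (TGA, for example) defines its own footer length and identification field:

#include <stdio.h>

#define FOOTER_SIZE 26   // hypothetical footer length; format-specific

//
// Read a fixed-size footer from the end of an open image file.
// Returns 1 on success, 0 on failure. The caller then checks the
// footer's identification field or magic number.
//
int ReadFooter(FILE *fp, unsigned char footer[FOOTER_SIZE])
{
    if (fseek(fp, -(long)FOOTER_SIZE, SEEK_END) != 0)
        return 0;
    if (fread(footer, 1, FOOTER_SIZE, fp) != FOOTER_SIZE)
        return 0;
    return 1;
}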



Other Bitmap File Data Structures

Besides headers, footers, and palettes, bitmap files may contain additional data structures, which are usually added to aid manipulation of the image data by the rendering application. A file format which allows more than one image in the file needs to provide some method of identifying the start of each image. Thus, an image offset table, sometimes called an image file index or page table, may be used to store the offset values of the start of each image from the beginning of the file.

A scan-line table may be provided for locating the start of each image scan line in the pixel data. This can be useful if the image data is compressed and pixel data corresponding to individual scan lines must be accessed randomly; the pixels in the image data cannot be counted until the image data is decoded. Scan-line tables contain one entry per image scan line. Variants of this idea include strip location tables (one entry per group of scan lines) and tile location tables (one entry for each rectangular subarea of the image).

Other Bitmap File Features

Several formats incorporate unique or unusual data structures in their design. These usually either accomplish the specific purpose of the format or create as much generality as possible.

A common file format that comes to mind under the heading of "unusual" is TIFF. TIFF contains a rudimentary header, but stores much of its data in a series of tagged structures called Image File Directories, which are fixed in neither size nor position. They are instead like an in-memory linked-list data structure in that they are chained together by a series of file offset values. Data can be found by seeking to the next offset from the current offset. While this arrangement can lead to confusion (and indeed TIFF has many times been called a "write-only" format), it allows a programmer to construct a header-like structure that can contain any information at all, thus adding to the format's versatility.

Unusual or unique features of other formats include the storing of image data and palette information in separate files (the Dr. Halo CUT and PAL files, for example) and the storing of monochrome bitmaps as blocks of ASCII-format 1's and 0's (as in the PBM format), designed perhaps with interplatform portability in mind.



Pros and Cons of Bitmap File Formats

Bitmap files are especially suited for the storage of real-world images; complex images can be rasterized in conjunction with video, scanning, and photographic equipment and stored in a bitmap format.

Advantages of bitmap files include the following:

• Bitmap files may be easily created from existing pixel data stored in an array in memory.

• Retrieving pixel data stored in a bitmap file may often be accomplished by using a set of coordinates that allows the data to be conceptualized as a grid.

• Pixel values may be modified individually or as large groups by altering a palette if present.

• Bitmap files may translate well to dot-format output devices such as CRTs and printers.

Bitmap files, however, do have drawbacks:

• They can be very large, particularly if the image contains a large number of colors. Data compression can shrink the size of pixel data, but the data must be expanded before it can be used, and this can slow down the reading and rendering process considerably. Also, the more complex a bitmap image (large number of colors and minute detail), the less efficient the compression process will be.

• They typically do not scale very well. Shrinking an image by decimation (throwing away pixels) can change the image in an unacceptable manner, as can expanding the image through pixel replication. Because of this, bitmap files must usually be printed at the resolution in which they were originally stored.



Chapter 4

Vector Files

In this chapter we'll be talking about vector files. Because we've already introduced bitmap files in Chapter 3, Bitmap Files, we'll be contrasting selected features of vector files with their bitmap counterparts.

Vector Versus Bitmap Files

A bitmap file in some sense contains an exact pixel-by-pixel mapping of an image, which can then be reconstructed by a rendering application on the display surface of an output device. Rendering applications seldom have to take into account any structural elements other than pixels, scan lines, strips, and tiles—subdivisions of the image which were made without reference to the content of the image. Vector files contain, instead, mathematical descriptions of one or more image elements, which are used by the rendering application to construct a final image. Vector files are thus said to be made up of descriptions of image elements or objects, rather than pixel values. Although the term object has a modern meaning, you will find vector format specifications adhering to the older usage.

What Is Vector Data?

Vectors are line segments minimally defined as a starting point, a direction, and a length. They can, however, be much more complex and can include various sorts of lines, curves, and splines. Straight and curved lines can be used to define geometrical shapes, such as circles, rectangles, and polygons, which then can be used to create more complex shapes, such as spheres, cubes, and polyhedrons.



Vector Files Came First

Vector file formats have been around since computers were first used to display lines on an output device. CRTs, for example, were first used as computer-driven output devices in the 1950s. The first CRT displays were random scan devices similar to oscilloscopes, capable of producing images of mathematical and geometrical shapes. Vector display devices provided output sufficient for the needs of computer users for many years after their introduction, due to the limited range of tasks computers were called upon to perform.

At some point the need to store vector data arose, and portable storage media such as punch cards or paper tape were pressed into use. Prior to rendering time, an image was logically subdivided into its simplest elements. At rendering time, the image was produced and maintained by drawing each of its elements repeatedly in a specified order. At storage time, data was readily exported as a list of drawing operations, and mathematical descriptions of the image elements—their size, shape, and position on the display screen—were written to the storage device in the order in which they were displayed.

Vector Files and Device Independence

As mentioned above, vector images are collections of device-independent mathematical descriptions of graphical shapes. More so than their bitmap counterparts, the various vector formats differ primarily because each was designed for a different purpose. While the conceptual differences between the designs of formats supporting 1-bit and 24-bit bitmap data may be small, the differences between vector formats used with CAD applications and those used for general data interchange can be formidable. Thus, it is difficult to generalize about vector formats in the same way we did when discussing bitmap formats.

On the other hand, most output devices are point-addressable, providing a grid of pixels which can be addressed individually, as if the surface of the device were graph paper made of discrete elements. This means that an application can always find a way to draw vector-format image elements on the device.



Sources of Vector Format Files

The simplest vector formats are those used by spreadsheet applications. These normally contain numerical data meant to be displayed on a two-dimensional grid on an output device. Some non-spreadsheet applications use spreadsheet file formats to store data that can alternately be interpreted as either bitmap or vector data.

Examples of common spreadsheet formats include those associated with the programs Lotus 1-2-3 (.WKS and .WK1), Excel (.XLS), and Quattro Pro. Although these originated on Intel-based PCs, the respective vendors now support multiple platforms. Several spreadsheet formats have been developed explicitly to support portable data interchange between different spreadsheet applications; as a result, these are now also found on multiple platforms. These include Lotus DIF (Data Interchange Format) and Microsoft SYLK (SYmbolic LinK Format).

The majority of vector formats, however, are designed for storing line drawings created by CAD applications. CAD packages are used to create mechanical, electrical, and architectural drawings, electronic layouts and schematics, maps and charts, and artistic drawings. The complexity of information needed to support the needs of a major CAD application is considerably greater than that needed to support a spreadsheet and generally requires a more complicated vector format.

CGM (Computer Graphics Metafile) is an example of a general format designed with data interchange in mind, a format that is defined in a published standard. All elements in a CGM-format file are constructed of simple objects such as lines and polygons, primitives assumed to be available in every rendering application. Very complex objects are broken down into the simplest possible shapes.

Autodesk's AutoCAD DXF (Data eXchange Format) was also designed with vector data interchange in mind but is vendor-controlled and originated as a format supporting a single application. In addition, DXF was specifically tailored for CAD information useful in the construction of mechanical, electrical, and architectural drawings. DXF therefore supports not only common vector elements such as circles and polygons, but also complex objects frequently used in CAD renderings, such as three-dimensional objects, labels, and hatching.



How Vector Files Are Organized

Although vector files, like bitmap files, vary considerably in design, most contain the same basic structure: a header, a data section, and an end-of-file marker. Some structure is needed in the file to contain information global to the file and to correctly interpret the vector data at render time. Although most vector files place this information in a header, some rely solely on a footer to perform the same task.

Vector files on the whole are structurally simpler than most bitmap files and tend to be organized as data streams. Most of the information content of the file is found in the image data. The basic components of a simple vector file are the following:

Header
Image Data

If a file contains no image data, only a header will be present. If additional information is required that does not fit in the header, you may find a footer appended to the file, and a palette may be included as well:

Header
Palette
Image Data
Footer

Header

The header contains information that is global to the vector file and must be read before the remaining information in the file can be interpreted. Such information can include a file format identification number, a version number, and color information.

Headers may also contain default attributes, which will apply to any vector data elements in the file lacking their own attributes. While this may afford some reduction in file size, it does so at the cost of introducing the need to cache the header information throughout the rendering operation.

Headers and footers found in vector-format files may not necessarily be a fixed size. For historical reasons mentioned above, it is not uncommon to find vector formats which use streams of variable-length records to store all data. If this is the case, then the file must be read sequentially and will normally fail to provide the offset information necessary to allow the rendering application to subsample the image.

The type of information stored in the header is governed by the types of data stored in the file. Basic header information contains the height and width of the image, the position of the image on the output device, and possibly the number of layers in the image. Thus, the size of the header may vary from file to file within the same format.

Vector Data

The bulk of all but the tiniest files consists of vector element data containing information on the individual objects making up the image. The size of the data used to represent each object will depend upon the complexity of the object and how much thought went into reducing the file size when the format was designed.

Following the header is usually the image data. The data is composed of elements, the smaller parts that make up the overall image. Each element either explicitly carries the information that specifies its size, shape, position relative to the overall image, color, and possibly other attributes, or inherits that information from defaults. An example of vector data in ASCII format containing three elements (a circle, a line, and a rectangle) might appear as:

;CIRCLE,40,100,100,BLUE;LINE,200,50,136,227,BLACK;RECT,80,65,25,78,RED;

Although this example is a simple one, it illustrates the basic problem of deciphering vector data, which is the existence of multiple levels of complexity. When deciphering a vector format, you not only must find the data, but you also must understand the formatting conventions and the definitions of the individual elements. This is hardly ever the case in bitmap formats; bitmap pixel data is all pretty much the same. In this example, elements are separated by semicolons, and each is named, followed by numerical parameters and color information. Note, however, that consistency of syntax across image elements is never guaranteed. We could have just as easily defined the format in such a way as to make blocks of unnamed numbers signify lines by default:

;CIRCLE,40,100,100,BLUE;200,50,136,227,BLACK;RECT,80,65,25,78,RED;

and the default color black if unspecified:

;CIRCLE,40,100,100,BLUE;200,50,136,227;RECT,80,65,25,78,RED;



Many formats allow abbreviations:

;C,40,100,100,BL;200,50,136,227;R,80,65,25,78,R;

Notice that the R for RECT and the R for RED are distinguished by context. You will find that many formats have opted to reduce data size at the expense of conceptual simplicity. You are free to consider this as evidence of flawed reasoning on the part of the format designer. The original reason for choosing ASCII was ease of reading and parsing. Unfortunately, using ASCII may make the data too bulky. The usual solution is to reduce the data size through implied rules and conventions and to allow abbreviation (in the process making the format unreadable). The format designer would have been better off using a binary format in the first place.

After the image data is usually an end-of-section or end-of-file marker. This can be as simple as the string EOF at the end of the file. For the same reasons discussed in Chapter 3, Bitmap Files, some vector formats also append a footer to the file. Information stored in a footer is typically not necessary for the correct interpretation of the rendering; it may be incidental information such as the time and date the file was created, the name of the application used to create the file, and the number of objects contained in the image data.

Palettes and Color Information

Like bitmap files, vector files can contain palettes. (For a full discussion of palettes, see the discussion in Chapter 2.) Because the smallest objects defined in vector format files are the data elements, these are the smallest features for which color can be specified. Naturally, then, a rendering application must look up color definitions in the file palette before rendering the image. Our example above, to be correct, would thus need to include the color definitions, which take the form of a palette with associated ASCII names:

RED,255,0,0, BLACK,0,0,0, BLUE,0,0,255
;C,40,100,100,BL;200,50,136,227;R,80,65,25,78,R;

Some vector files allow the definition of enclosed areas, which are considered outlines of the actual vector data elements. Outlines may be drawn with variations in thickness or by using what are known as different pen styles, which are typically combinations of dots and dashes and which may be familiar from technical and CAD drawings. Non-color items of information necessary for the reproduction of the image by the rendering application are called element attributes.
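Reading such a stream is a small exercise in tokenizing. The sketch below applies only to the hypothetical syntax used in this chapter's examples, not to any real format; it splits the stream into elements at semicolons and leaves the interpretation of each element's fields to the caller:

#include <stdio.h>
#include <string.h>

//
// Split a stream of semicolon-delimited elements and print each one.
// The first comma-delimited field of each element is its name (or a
// number, for formats that let unnamed blocks default to lines).
//
void ParseElements(char *data)      // note: modified in place by strtok()
{
    char *element = strtok(data, ";");

    while (element != NULL)
    {
        printf("element: %s\n", element);
        element = strtok(NULL, ";");
    }
}

Each element string would then be split again at commas, with context deciding whether a field such as R names a rectangle or the color red.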



Fills and color attributes

Enclosed elements may be designed to be filled with color by the rendering application. The filling is usually allowed to be colored independently from the element outline. Thus, each element may have two or more colors associated with it, one for the element outline and one for the filling. Fill colors may be transparent, for instance, and some formats define what are called color attributes. In addition to being filled with solid colors, enclosed vector elements may contain hatching or shading, which are in turn called fill attributes. In some cases, fill and color attributes are lumped together, either conceptually in the format design or physically in the file.

Formats that do not support fill patterns must simulate them by drawing parts of the pattern (lines, circles, dots, etc.) as separate elements. This not only introduces an uneven quality to the fill, but also dramatically increases the number of objects in the file and consequently the file size.

Gradient fills

An enclosed vector element may also be filled with more than one color. The easiest way is with what is called a gradient fill, which appears as a smooth transition between two colors located in different parts of the element fill area. Gradient fills are typically stored as a starting color, an ending color, and the direction and type of the fill. A rendering application is then expected to construct the filled object, usually at the highest resolution possible. CGM is an example of a format that supports horizontal, vertical, and circular gradient fills. Figure 4-1 illustrates a gradient fill.

[Figure 4-1: Gradient fill]
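Because only the two endpoint colors and the fill geometry are stored, the rendering application must interpolate everything in between. The following is a minimal sketch of the interpolation step (our own illustration, assuming 8-bit RGB components); the direction and type of the fill determine how the parameter t is derived from each pixel's position:

//
// Compute the color at fractional position t along a gradient,
// where t is 0.0 at the starting color and 1.0 at the ending color.
//
void GradientColor(const unsigned char start[3], const unsigned char end[3],
                   double t, unsigned char out[3])
{
    int i;

    for (i = 0; i < 3; i++)                  // red, green, blue
        out[i] = (unsigned char)(start[i] + t * (end[i] - start[i]) + 0.5);
}

For a horizontal fill, for instance, t would simply be a pixel's x position divided by the width of the fill area.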

A footer may contain information that can be written to the file only after all the object data is written, such as the number of objects in the image. The footer in most vector formats, however, is simply used to mark the end of the object data.

Vector File Size Issues

Not counting palette and attribute information, the size of a vector file is directly proportional to the number of objects it contains. Contrast this with a bitmap file, which stays the same size no matter how complex the image described within. The only impact complexity has on bitmap files is on the degree of compression available to the file creator.

Vector files thus can vary greatly in size. A format can store images efficiently by using some form of shorthand notation to allow the compact definition of complex elements. A vector format rich in objects might be able to represent a single complex element using a Bezier curve, for instance. Another format not supporting Bezier curves would need to represent the same curve inefficiently, perhaps using a series of lines. Each line, in this case, would be a separate element, producing a file much larger than one supporting Bezier curves directly.

A format creator was probably addressing the problem of file size when he or she decided to support the creation and naming of complex elements. Great size savings come about when elements are repeated in the image; all that needs to be stored after the original element definition is a pointer to that definition, as well as attribute and position information specific to each individual repeated element. Size savings may also come about from the way in which a format stores information. Different formats may support identical information in widely varying ways. For example, in the CGM format a hatch pattern is represented as a single object. In the PIC and Autodesk DXF formats, however, each line in the hatch pattern is stored as a separate element.

Because vector data is stored as numbers, it can be scaled, rotated, and otherwise manipulated easily and quickly, at least compared to bitmap data. Also, because scaling is so easy, vector files are not subject to image size limitations in the same way as bitmap files.

Vector formats normally do not support data compression as most bitmap formats do. Some formats, however, support an alternate encoding method that produces smaller files containing the same information. CGM, for instance, normally stores vector information in a clear-text ASCII format that is human-readable, as does the example we presented earlier in this chapter. It also allows the storage of information in a binary format, however, which results in smaller files at the cost of readability and cross-platform portability. The DXF format also has a binary analog called DXB (Data eXchange Binary) which is not only smaller, but faster to load into its parent application (AutoCAD). It is, however, not portable to other applications.

Scaling Vector Files

A vector element may be scaled to any size. Precision, overflow, and underflow problems may occur, however, if vector data is scaled too large or too small, "large" and "small" being relative to the intrinsic data size of the hardware and software platform supporting the rendering application. Although these problems are well known in numerical analysis, they are not generally recognized by the majority of programmers, and it pays to keep them in mind.

Another common problem occurs when apparently enclosed elements are enlarged and then rendered. Two lines meant to be joined may have endpoints slightly misaligned, and this misalignment may show up as a gap when the element is enlarged or rotated. When a rendering application attempts to display the element on an output device, fill colors or patterns may inexplicably "leak." Many applications that allow the creation of vector files have tools to prevent this, but they may not be applied automatically before the file is saved.

Text in Vector Files

Vector formats that allow the storage of text strings do so in one of two ways. The simplest approach is to store the text as a literal ASCII string along with font, position, color, and attribute information. Although the text is provided in a compact form, this scheme requires the rendering application to have knowledge of the font to be used, which is always problematic. Because font names are for the most part vendor-controlled, it is sometimes difficult even to specify the font to be drawn. The CGM format addresses this problem through the use of an international registry of font names and associated descriptive data. Any rendering application supporting CGM must have access to this data, or it must use the font metric data supplied in the CGM file's header. Text stored this way, because it is human-readable, can be edited.

The second approach, and by far the most flexible, is to store the characters making up the text string as outlines constructed from a series of primitive vector data elements. Under this scheme each creator application must have access to font outline data; because it is stored like any other vector data, font outline data can be scaled at will, rotated, and otherwise manipulated. Until recently, access to outline data has been a problem, but vendors have realized the importance of support for outline fonts and are now routinely supplying this capability at the operating system level.

Because the graphics industry at large and the font industry have grown up in parallel, there are naturally some incompatibilities between data storage models. Most fonts, for instance, are stored as a series of splines joined end-to-end, and a particular spline type may not be supported by the file format in use. In this case, the creator application may choose to convert the splines to arcs or lines and store these instead. This may or may not have an effect on the appearance of the text.

Creator applications may even incorporate vector or stroke fonts, which are usually primitive sets of character outlines with an angular or mechanical look, designed to be drawn with a minimum of fuss. Although many vendors have chosen to make their own, one widely available source is the Hershey fonts. Hershey data is available commercially, but is no longer considered adequate for general use. The use of vector, stroke, or outline fonts usually increases the size of a file dramatically, but this may be offset by an increase in visual quality in the case of spline-based outline fonts. Text stored in outline format cannot easily be edited.

Pros and Cons of Vector Files

Advantages of vector files include the following:

• Vector files are useful for storing images composed of line-based elements such as lines and polygons, or those that can be decomposed into simple geometrical objects, such as text. More sophisticated formats can also store three-dimensional objects such as polyhedrons and wire-frame models.

• Vector data can be easily scaled and otherwise manipulated to accommodate the resolution of a spectrum of output devices.

• Many vector files containing only ASCII-format data can be modified with simple text editing tools. Individual elements may be added, removed, or changed without affecting other objects in the image.

• It is usually easy to render vector data and save it to a bitmap format file, or, alternately, to convert the data to another vector format, with good results.

Some drawbacks of vector files include the following:

• Vector files cannot easily be used to store extremely complex images, such as some photographs, where color information is paramount and may vary on a pixel-by-pixel basis.



• The appearance of vector images can vary considerably depending upon the application interpreting the image. Factors include the rendering application's compatibility with the creator application and the sophistication of its toolkit of geometric primitives and drawing operations.

• Vector data displays best on vectored output devices such as plotters and random scan displays. High-resolution raster displays are needed to display vector graphics as effectively.

• Reconstruction of an image from vector data may take considerably longer than reconstruction from a bitmap file of equivalent complexity, because each image element must be drawn individually and in sequence.



Chapter 5

Metafiles

Metafiles can contain both bitmap and vector data. When the term metafile first appeared, it was used in discussions of device- and machine-independent interchange formats. In the mid-1970s, the National Center for Atmospheric Research (NCAR), along with several other research institutions, reportedly used a format called metacode, which was device- and platform-independent to a certain degree. What is known for certain is that in 1979, the SIGGRAPH Graphics Standards and Planning Committee used the term, referring to a part of their published standards recommendations. These early attempts at defining device- and platform-independent formats mainly concerned themselves with vector data. Although work has continued along this line, we will refer to formats that can accommodate both bitmap and vector data as metafiles, because for all practical purposes the interchange formats in widespread use in the marketplace handle both types of data.

Although metafile formats may be used to store only bitmap or only vector information, it is more likely that they will contain both types of data. From a programmer's point of view, bitmap and vector data are two very different problems. Because of this, supporting both bitmap and vector data types adds to the complexity of a format. Thus, programmers find themselves avoiding the use of metafile formats unless the added complexity is warranted—either because they need to support multiple data types or for external reasons.

The simplest metafiles closely resemble vector format files. Historically, limitations of vector formats were exceeded when the data that needed to be stored became complex and diverse. Vector formats were extended conceptually, allowing the definition of vector data elements in terms of a language or grammar, and also by allowing the storage of bitmap data. In a certain sense, the resulting formats went beyond the capabilities of both bitmap and vector formats—hence the term metafile.



Platform Independence?

Metafiles are widely used to transport bitmap or vector data between hardware platforms. The character-oriented nature of ASCII metafiles, in particular, eliminates problems due to byte ordering. It also eliminates problems encountered when transferring binary files across networks where the eighth bit of each byte is stripped off, which can leave files damaged beyond repair. Also, because a metafile supports both bitmap and vector data, an application designer can kill two birds with one stone by providing support for one metafile format rather than two separate bitmap and vector formats.

Metafiles are also used to transfer image data between software platforms. A creator application, for instance, can save an image in both bitmap and vector form in a metafile. This file may then be read by any bitmap-capable or vector-capable application supporting the particular metafile format. Many desktop publishing programs, for instance, can manipulate and print vector data, but are unable to display that same data on the screen for various reasons. To accommodate this limitation, a bitmap representation of the image is often included along with the vector data in a metafile. The application can read the bitmap representation of the image from the metafile, which serves as a reduced-quality visual representation of the image that will eventually appear on the printed page. When the page is actually printed, however, the vector data from the metafile is used to produce the image on the printer. Display PostScript files are an example of this type of arrangement.

How Metafiles Are Organized

Metafiles vary so widely in format that it is pointless to attempt to construct a hierarchical explanation of their general construction. Most metafiles contain some sort of header, followed by one or more sections of image data. Some metafiles contain nothing but bitmap data, and still others contain no data at all, opting instead for cryptic drawing instructions, or numerical data similar to that found in vector files.

Pros and Cons of Metafiles

Because metafiles are in a sense a combination of bitmap and vector formats, many of the pros and cons associated with these formats also apply to metafiles. Your decision to choose one particular metafile format over another will thus depend on what kind of data (bitmap or vector) makes up the bulk of the file, and on the strengths and weaknesses of that particular type of data. With that said, we can safely generalize as follows:



• Although many metafile formats are binary, some are character-oriented (ASCII), and these are usually very portable between computer systems.

• Metafiles containing mixtures of vector and bitmap data can in some cases be smaller than fully-rendered bitmap versions of the same image.

• Because they can contain high-redundancy ASCII-encoded data, metafiles generally compress well for file transfer.

• Most metafiles are very complex, because they are usually written by one application for another application. Although their ASCII nature means that theoretically they may be modified with a text editor, modification of a metafile by hand generally requires a skilled eye and special knowledge.



Chapter 6

Platform Dependencies

One of our criteria for choosing the formats discussed in this book was whether they are used for data exchange (both between applications and across platforms). This analysis necessarily ruled out formats incorporating hardware-specific instructions (for example, printer files). Although the formats we discuss here do not raise many hardware issues, several machine dependency issues do come up with some regularity. Two of these issues have some practical implications beyond being simply sources of annoyance. This chapter describes those issues. It also touches on differences between filenames among different platforms. These are significant only because filenames may offer clues about the origins of files you may receive and need to convert.

Byte Order

We generally think of information in memory or on disk as being organized into a series of individual bytes of data. The data is read sequentially in the order in which the bytes are stored. This type of data is called byte-oriented data and is typically used to store character strings and data created by 8-bit CPUs.

Not all computers look at the universe through an 8-bit window, however. For reasons of efficiency, 16- and 32-bit CPUs prefer to work with bytes organized into 16- and 32-bit cells, which are called words and doublewords, respectively. The order of the bytes within word-oriented and doubleword-oriented data is not always the same; it varies depending upon the CPU that created it. (Note, however, that CPUs do exist in which byte ordering can be changed; Digital Equipment Corporation's Alpha chip is a recent example.)

Byte-oriented data has no particular order and is therefore read the same on all systems. But word-oriented data does present a potential problem—probably the most common portability problem you will encounter when moving files between platforms. The problem arises when binary data is written to a file on a machine with one byte order and is then read on a machine assuming a different byte order. Obviously, the data will be read incorrectly.

It is the order of the bytes within each word and doubleword of data that determines the "endianness" of the data. The two main categories of byte-ordering schemes are called big-endian and little-endian.*

Big-endian machines store the most significant byte (MSB) at the lowest address in a word, usually referred to as byte 0. Big-endian machines include those based on the Motorola MC68000 series of CPUs (the 68000, 68020, 68030, 68040, and so on), including the Commodore Amiga, the Apple Macintosh, and some UNIX machines.

Little-endian machines store the least significant byte (LSB) at the lowest address in a word. The two-byte word value 1234h, written to a file in little-endian format, would be read as 3412h on a big-endian system. This occurs because the big-endian system assumes that the MSB, in this case the value 12h, is at the lowest address within the word. The little-endian system, however, places the MSB at the highest address in the word, so the positions of the bytes are effectively flipped in the file-reading process. Little-endian machines include those based on the Intel iAPX86 series of CPUs (the 8088, 80286, 80386, 80486, and so forth), including the IBM PC and clones.

A third term, middle-endian, has been coined to refer to all byte-ordering schemes that are neither big-endian nor little-endian. Such middle-endian ordering schemes include the 3-4-1-2, 2-1-4-3, 2-3-0-1, and 1-0-3-2 packed-decimal formats. The Digital Equipment Corporation PDP-11 is an example of a middle-endian machine; it has a DWORD byte-ordering scheme of 2-3-0-1.

The I/O routines in the C standard library always read word data in the native byte order of the machine hosting the application. This means that functions such as fread() and fwrite() have no knowledge of byte order and cannot provide the needed conversions. Most C libraries, however, contain a function named swab(), which is used to swap the bytes in an array of bytes. While swab() can be used to convert words of data from one byte order to another, doing so can be inefficient, due to the necessity of making multiple calls for words greater than two bytes in size.
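Writing the swaps out by hand is often simpler. The following is a minimal sketch (the helper names are ours); a file reader determines the file's byte order from the format specification or from a header flag, then swaps only when that order differs from the host's:

//
// Swap the byte order of a 16-bit word and a 32-bit doubleword.
//
unsigned short Swap16(unsigned short w)
{
    return (unsigned short)((w >> 8) | (w << 8));
}

unsigned long Swap32(unsigned long dw)
{
    return ((dw >> 24) & 0x000000ffUL) |
           ((dw >>  8) & 0x0000ff00UL) |
           ((dw <<  8) & 0x00ff0000UL) |
           ((dw << 24) & 0xff000000UL);
}

//
// Determine the byte order of the host at run time.
// Returns 1 on a little-endian machine, 0 on a big-endian machine.
//
int HostIsLittleEndian(void)
{
    unsigned short one = 1;
    return *(unsigned char *)&one == 1;
}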

* The terms big-endian and little-endian were originally found in Jonathan Swift's book, Gulliver's Travels, as satirical descriptions of politicians who disputed whether eggs should be broken at their big end or their little end. This term was first applied to computer architecture by Danny Cohen. (See "For Further Information" below.)



Programmers working with bitmap files need to be concerned about byte order, because many popular formats such as Macintosh Paint (MacPaint), Interchange File Format (IFF or AmigaPaint), and Sun Raster image files are always read and written in big-endian byte order. The TIFF file format is unique, however, in that any TIFF file can be written in either byte order, and any TIFF reader must be able to read either byte order correctly regardless of the system on which the code is executing.

File Size and Memory Limitations

The second most common problem, after byte-ordering differences, is the handling of large files. Many systems have limited memory, as is the case with MS-DOS-based, early Macintosh, and other desktop machines. As a consequence of this limitation, two problems can arise.

The first is that buffer memory available on a small machine may not be adequate to handle chunks of data deemed reasonable on a larger machine. Fortunately, this is a rare occurrence, but it is something to be aware of. Some formats are designed with the limitations of small machines in mind. A prudent thing to do is to avoid forcing the rendering application to buffer more than 32K of data at a time.

The second problem is absolute file size. As suggested above, an uncompressed 24-bit bitmap file 1024 pixels square will be a minimum of 3,145,728 bytes in size. While this much memory may be available on a workstation, it may not be on a smaller machine. In this case, the rendering application will not be able to assemble the data in memory. If any alteration to the image must be done, extraordinary measures may need to be taken by the application prior to rendering. Thus, it is prudent to take advantage of the file "chunking" features available in many formats. Although it may take more programming effort to accommodate smaller machines, the effort also helps guarantee portability to future platforms.

Floating-point Formats

Vector file formats occasionally store key points in floating-point format, and a number of different floating-point formats are in common use. Most floating-point data, however, is stored in a portable manner. The least common denominator approach is to store floating-point numbers as ASCII data, as a series of point pairs:

1234.56 2345.678
987.65 8765.43
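Data stored this way can be parsed with nothing more than the standard library. The following is a minimal sketch (the function name and conventions are ours) that assumes the values are delimited by whitespace:

#include <stdlib.h>

//
// Parse whitespace-delimited ASCII point pairs into two arrays.
// Returns the number of complete pairs read, up to maxPoints.
//
long ReadPoints(const char *text, double *x, double *y, long maxPoints)
{
    char *next;
    double px, py;
    long count = 0;

    while (count < maxPoints)
    {
        px = strtod(text, &next);
        if (next == text)                // no more numbers in the string
            break;
        text = next;
        py = strtod(text, &next);
        if (next == text)                // odd number of values; give up
            break;
        text = next;
        x[count] = px;
        y[count] = py;
        count++;
    }
    return count;
}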

The main problems you will encounter with floating-point data stored in ASCII format are with formatting conventions—how the numbers are delimited (comma, whitespace, etc.), and how many digits of precision need to be handled by your parsing routines. Library routines are readily available to handle conversion from ASCII to native binary floating-point formats.

Floating-point numbers stored in binary format present different problems. There are a number of floating-point binary formats in common use, including IEEE, Digital Equipment Corporation, and Microsoft Basic. Library routines are available for these conversions, but it may take some searching to find the correct one for your application. Sometimes, however, the hardest part of the job is identifying the formats you are trying to convert from and to.

Bit Order

Bit order refers to the direction in which bits are represented in a byte of memory. Just as words of data may be written with the most significant byte or the least significant byte first (in the lowest address of the word), so too can bits within a byte be written with the most significant bit or the least significant bit first (in the lowest position of the byte).

The bit order we see most commonly is the one in which the zeroth, or least significant, bit is the first bit read from the byte. This is referred to as up bit ordering or normal bit direction. When the seventh, or most significant, bit is the first one stored in a byte, we call this down bit ordering, or reverse bit direction.

The terms big-endian and little-endian are sometimes erroneously applied to bit order. These terms were specifically adopted as descriptions of differing byte orders only and are not used to differentiate bit orders (see the section called "Byte Order" earlier in this chapter).

Normal bit direction, least significant bit to most significant bit, is often used in transmitting data between devices, such as FAX machines and printers, and for storing unencoded bitmapped data. Reverse bit direction, most significant bit to least significant bit, is used to communicate data to display devices and in many data compression encoding methods. It is therefore possible for a bitmap image file to contain data stored in either or both bit directions if both encoded and unencoded data is stored in the file (as can occur in the TIFF and CALS Raster image file formats).

Problems that occur in reading or decoding data stored with a bit order that is the reverse of the expected bit order are called bit sex problems. When the bit order of a byte must be changed, we commonly refer to this as reversing the bit sex.

Color sex problems result when the values of the bits in a byte are the inverse of what we expect them to be. Inverting a bit (also called flipping or toggling a bit) is to change a bit to its opposite state. A 1 bit becomes a 0, and a 0 bit becomes a 1. If you have a black-on-white image and you invert all the bits in the image data, you will have a white-on-black image. In this regard, inverting the bits in a byte of image data is normally referred to as inverting the color sex.

It is important to realize that inverting and reversing the bits in a byte are not the same operation and rarely produce the same results. Note the different resulting values when the bits in a byte are inverted and reversed:

Original Bit Pattern: 10100001
Inverted Bit Pattern: 01011110

Original Bit Pattern: 10100001
Reversed Bit Pattern: 10000101

There are, however, cases when the inverting or reversing of a bit pattern will yield the same value:

Original Bit Pattern: 01010101
Inverted Bit Pattern: 10101010

Original Bit Pattern: 01010101
Reversed Bit Pattern: 10101010

Occasionally, it is necessary to reverse the order of bits within a byte of data. This most often occurs when a particular hardware device, such as a printer, requires that the bits in a byte be sent in the reverse order in which they are stored in the computer's memory. Because it is not possible for most computers to directly read a byte in a reversed bit order, the byte value must be read and its bits rewritten to memory in reverse order. Reversing the order of bits within a byte, or changing the bit sex, may be accomplished by calculation or by table look-up. A function to reverse the bits within a byte is shown below:

//
// Reverse the order of bits within a byte.
// Returns: The reversed byte value.
//
BYTE ReverseBits(BYTE b)
{
    BYTE c;

    c = ((b >> 1) & 0x55) | ((b << 1) & 0xaa);  // swap adjacent bits
    c = ((c >> 2) & 0x33) | ((c << 2) & 0xcc);  // swap adjacent bit pairs
    c = ((c >> 4) & 0x0f) | ((c << 4) & 0xf0);  // swap nibbles
    return(c);
}



If an application requires more speed in the bit-reversal process, the above function can be replaced with the REVERSEBITS macro and look-up table below. Although the macro and look-up table are faster at performing bit reversal than the function, the macro lacks the prototype checking that ensures that every value passed to the function ReverseBits() is an 8-bit unsigned value. An INVERTBITS macro is also included for color sex inversion.

#define INVERTBITS(b)   (~(b))
#define REVERSEBITS(b)  (BitReverseTable[b])

static BYTE BitReverseTable[256] =
{
    0x00, 0x80, 0x40, 0xc0, 0x20, 0xa0, 0x60, 0xe0,
    0x10, 0x90, 0x50, 0xd0, 0x30, 0xb0, 0x70, 0xf0,
    0x08, 0x88, 0x48, 0xc8, 0x28, 0xa8, 0x68, 0xe8,
    0x18, 0x98, 0x58, 0xd8, 0x38, 0xb8, 0x78, 0xf8,
    0x04, 0x84, 0x44, 0xc4, 0x24, 0xa4, 0x64, 0xe4,
    0x14, 0x94, 0x54, 0xd4, 0x34, 0xb4, 0x74, 0xf4,
    0x0c, 0x8c, 0x4c, 0xcc, 0x2c, 0xac, 0x6c, 0xec,
    0x1c, 0x9c, 0x5c, 0xdc, 0x3c, 0xbc, 0x7c, 0xfc,
    0x02, 0x82, 0x42, 0xc2, 0x22, 0xa2, 0x62, 0xe2,
    0x12, 0x92, 0x52, 0xd2, 0x32, 0xb2, 0x72, 0xf2,
    0x0a, 0x8a, 0x4a, 0xca, 0x2a, 0xaa, 0x6a, 0xea,
    0x1a, 0x9a, 0x5a, 0xda, 0x3a, 0xba, 0x7a, 0xfa,
    0x06, 0x86, 0x46, 0xc6, 0x26, 0xa6, 0x66, 0xe6,
    0x16, 0x96, 0x56, 0xd6, 0x36, 0xb6, 0x76, 0xf6,
    0x0e, 0x8e, 0x4e, 0xce, 0x2e, 0xae, 0x6e, 0xee,
    0x1e, 0x9e, 0x5e, 0xde, 0x3e, 0xbe, 0x7e, 0xfe,
    0x01, 0x81, 0x41, 0xc1, 0x21, 0xa1, 0x61, 0xe1,
    0x11, 0x91, 0x51, 0xd1, 0x31, 0xb1, 0x71, 0xf1,
    0x09, 0x89, 0x49, 0xc9, 0x29, 0xa9, 0x69, 0xe9,
    0x19, 0x99, 0x59, 0xd9, 0x39, 0xb9, 0x79, 0xf9,
    0x05, 0x85, 0x45, 0xc5, 0x25, 0xa5, 0x65, 0xe5,
    0x15, 0x95, 0x55, 0xd5, 0x35, 0xb5, 0x75, 0xf5,
    0x0d, 0x8d, 0x4d, 0xcd, 0x2d, 0xad, 0x6d, 0xed,
    0x1d, 0x9d, 0x5d, 0xdd, 0x3d, 0xbd, 0x7d, 0xfd,
    0x03, 0x83, 0x43, 0xc3, 0x23, 0xa3, 0x63, 0xe3,
    0x13, 0x93, 0x53, 0xd3, 0x33, 0xb3, 0x73, 0xf3,
    0x0b, 0x8b, 0x4b, 0xcb, 0x2b, 0xab, 0x6b, 0xeb,
    0x1b, 0x9b, 0x5b, 0xdb, 0x3b, 0xbb, 0x7b, 0xfb,
    0x07, 0x87, 0x47, 0xc7, 0x27, 0xa7, 0x67, 0xe7,
    0x17, 0x97, 0x57, 0xd7, 0x37, 0xb7, 0x77, 0xf7,
    0x0f, 0x8f, 0x4f, 0xcf, 0x2f, 0xaf, 0x6f, 0xef,
    0x1f, 0x9f, 0x5f, 0xdf, 0x3f, 0xbf, 0x7f, 0xff
};


Filenames

Whether you are writing a file or reading one written by another user, you need to be aware of the differences among filenames on various platforms.

Filename Structure

By number of installed machines, the three most popular platforms at the time of this writing are MS-DOS, Macintosh, and UNIX, roughly in the ratio of 100:10:5. All three support the name.extension filenaming convention (although the convention is most firmly established on the MS-DOS and UNIX systems). Applications occasionally use the extension portion of the filename for file type identification. Other systems with a large installed user base (such as OS/2, Amiga, Atari, and VMS) have roughly similar naming conventions. VMS, for instance, uses as a default the format:

    name1.name2;version

where version is an integer denoting the revision number of the file. In any case, files are likely to come from anywhere, and examination of the extension portion of a filename, if present, may help you to identify the format.

Filename Length

UNIX and Macintosh users are accustomed to long filenames:

    ThisIsAMacFilename
    This is also a Mac Filename
    This.Is.A.Unix.Filename

The MS-DOS, Windows NT, and OS/2 FAT filesystems, on the other hand, limit filenames to the 8.3 format (eight characters for the name, three for the extension):

    msdosnam.ext

For interplatform portability, we suggest that you consider using the 8.3 convention. Be aware, if you are using MS-DOS, that you may get filename duplication errors when you convert multiple files from other platforms. Depending on the application doing the filename conversion, the following files from a typical UNIX installation:

    thisis.file.number.1
    thisis.file.number.2

are both converted to the following filename under MS-DOS, and the second file will overwrite the first:

    thisis.fil

Case Sensitivity

UNIX filenames are case-sensitive and can contain mixed uppercase and lowercase; Macintosh filenames also preserve mixed case as typed (although the filesystem ignores case when comparing names). Filenames on MS-DOS systems, however, are case-insensitive, and the filesystem effectively converts all names to uppercase before manipulating them. Thus:

    AMacFile.Ext
    AUnixFile.Ext

become:

    AMACFILE.EXT
    AUNIXFIL.EXT

under MS-DOS and other similar filesystems. Similarly:

    Afile.Ext
    AFile.Ext

are both converted to:

    AFILE.EXT
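A minimal sketch of one such filename conversion scheme follows. The helper name MakeDosName() is hypothetical, and a real converter would also need to handle duplicate results and characters that are illegal under MS-DOS:

#include <ctype.h>
#include <string.h>

/*
** Squeeze an arbitrary filename into the 8.3 convention: up to
** eight uppercase characters, a dot, and up to three extension
** characters. Embedded dots and excess characters are discarded.
*/
void MakeDosName(const char *src, char dst[13])
{
    const char *ext = strrchr(src, '.');   /* last dot marks the extension */
    int i = 0;

    while (*src && src != ext && i < 8)    /* copy up to 8 name characters */
    {
        if (*src != '.')                   /* drop embedded dots           */
            dst[i++] = (char)toupper((unsigned char)*src);
        src++;
    }
    if (ext && ext[1])                     /* copy up to 3 extension chars */
    {
        int j = 0;
        dst[i++] = '.';
        for (ext++; *ext && j < 3; ext++)
            if (*ext != '.')
            {
                dst[i++] = (char)toupper((unsigned char)*ext);
                j++;
            }
    }
    dst[i] = '\0';
}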

For Further Information

For an excellent description of the war between the byte orders, see Danny Cohen's paper, cited below. A good description of the origin of the "endian" terms may be found in Eric Raymond's monumental work, also cited below. Both publications are also widely available via the Internet.

Cohen, Danny. "On Holy Wars and a Plea for Peace," IEEE Computer, Volume 14, October 1981, pp. 48-54.

Raymond, Eric. The New Hacker's Dictionary, MIT Press, Cambridge, MA, 1991.


Chapter 7

Format Conversion

Is It Really Possible?

Programmers of all kinds always ask for information on how to convert between file formats, and graphics programmers are no exception. You must realize, however, that not every format can be converted to every other format, and this is doubly so if you wish to preserve image quality. The biggest problems occur when you attempt to convert between files of different basic format types—bitmap to vector, for instance. Successful conversion between basic format types is not always possible due to the great differences in the ways data is stored.

File conversion is a thankless task for any number of reasons. No vendor, for instance, feels obligated to disclose revisions to proprietary formats, or even to publish accurate and timely specifications for the ones already in common use. Many formats also have obscure variants which may be difficult to track down. At any moment, too, a revision to a major application may appear, containing a bug which makes it produce incorrect files. These files will ever after need to be supported. For all these reasons, think long and hard about any decision you make about whether to include file conversion features in your application. Remember, too, that a format that is reasonably well designed for a particular intended application cannot necessarily be converted for use with another application. Interchange between devices and applications is only as good as the software components (readers and writers) which generate and interpret the format, whether it is CGM, PostScript, or any other format.


Don't Do It If You Don't Need To...

If you do need to convert image files between formats, consider using one or more of the software packages written especially for file format conversion. UNIX systems, in particular, have a number of extraordinarily good tools for format conversion, which are freely available and distributed in source form; pbmplus is a particularly good tool, and we've included it on the CD-ROM that accompanies this book. We have also included conversion tools for other platforms, including a pbmplus port (MS-DOS), Conversion Assistant for Windows (Windows), GBM (OS/2), and Graphics Converter (Macintosh). Inset Systems' Hijaak, a commercial product that we are not able to include on the CD-ROM, is an excellent product as well.

You should consider writing your own conversion program only if you have very specific conversion needs not satisfied by the many publicly available and commercial applications—for example, to accommodate a proprietary format. If you do decide to write your own converter, you will need to know a bit about what to expect when converting from one format type to another.

...But If You Do

In what follows we will call the file to be converted (which already exists in one format) the original, and the file produced in the second format (after the conversion operation) the converted file. A conversion application acts upon an original file to produce a converted file. If you remember our terminology, an application renders a graphics file to an output device or file. By extension, then, we will say that a conversion application renders an original file to a converted file. We will also use the term translation as a synonym for conversion.

Bitmap to Bitmap

Converting one bitmap format to another normally yields the best results of all the possible conversions between format types. All bitmap images consist of pixels, and ultimately all bitmap data is converted one pixel value at a time. Bitmap headers and the data contained in them can vary considerably, but that data can be added or discarded at the discretion of the conversion software to make the best conversion possible. Usually, some rearrangement of the color data is necessary. This might take the form of separation of pixel data into color plane data, or the addition or removal of a palette. It might even entail conversion from one color model to another.


Unsuccessful bitmap-to-bitmap conversions occur most often when translating a file written in a format supporting deep pixel data to one written in a lesser-color format—for example, one supporting only palette color. This can occur when you are converting between a format supporting 24-bit data and one supporting only 8-bit data. The palette color format may not support the storage of enough data to produce an accurate rendering of the image. Usually, some image-processing operations (quantization or dithering, most likely) are needed to increase the apparent quality of the converted image. Operations of this sort will necessarily make the converted image appear different from the original, and thus technically the image will not have been preserved in the conversion process.

The other main problem comes about when the converted image must be made smaller or larger than the original. If the converted image is smaller than the original, information in the original image must be thrown away. Although various image-processing strategies can be used to improve image quality, some of the information is nonetheless removed when the original file is shrunk. If the converted image is larger than the original, however, information must be created to fill in the spaces that appear between formerly adjacent pixels when the original image is enlarged. There is no completely satisfactory way to do this, and the processes currently used typically give the enlarged image a block-pixel look.

An example of a bitmap-to-bitmap conversion that is almost always successful is PCX to Microsoft Windows Bitmap.
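To make the quantization step concrete, the fragment below shows the simplest possible approach to mapping 24-bit pixels onto an existing 256-entry palette: a brute-force nearest-color search using squared distance in RGB space. This is a minimal sketch, not a production quantizer; real converters typically build an optimized palette first (median-cut, for example) and often dither as well:

typedef struct _RGB
{
    unsigned char r, g, b;
} RGB;

/*
** Return the index of the palette entry closest to the
** 24-bit color (r,g,b), by squared distance in RGB space.
*/
int NearestPaletteIndex(const RGB palette[256], int r, int g, int b)
{
    long best  = 0x7FFFFFFFL;   /* smallest distance found so far */
    int  index = 0;
    int  i;

    for (i = 0; i < 256; i++)
    {
        long dr = r - palette[i].r;
        long dg = g - palette[i].g;
        long db = b - palette[i].b;
        long dist = dr * dr + dg * dg + db * db;

        if (dist < best)
        {
            best  = dist;
            index = i;
        }
    }
    return(index);
}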

Vector to Vector

Conversion between vector formats—for example, from AutoCAD DXF to Lotus DIF—is possible and sometimes quite easy. Two serious problems can occur, though. The first comes about due to differences in the number and type of objects available in different vector formats. Some formats, for instance, provide support for only a few simple image elements, such as circles and rectangles. Richer formats may also provide support for many additional complex elements, such as pattern fills, drop shadowing, text fonts, b-splines, and Bezier curves. Attempting to convert a file written in a complex format rich in elements to a simpler format will result in an approximation of the original image.

The second problem comes from the fact that each vector format has its own interpretation of measurements and the appearance of image elements and primitives. Rarely do two formats agree exactly on the placement and appearance of even simple image elements. Common problems are those related to line joint styles and end styles, and to centerline and centerpoint


locations. For example, the Macintosh PICT format assumes that lines are drawn with a pen point that is below and to the right of the specified coordinates, while most other formats center their pens directly on the coordinates. Another example is the GEM VDI format, which assumes that a line should be drawn so that it extends one-half of the width of the line past the end coordinate. Lines in other formats often stop exactly at the end of the coordinate pixel. It is quite difficult to generalize further than this. If you write a vector-to-vector format converter, you must be aware of the peculiarities of each vector format and the problems of conversion between one specific format and the other.

Metafile to Metafile

Because metafiles can contain both bitmap and vector image data in the same file, problems inherent in bitmap and vector conversion apply to metafiles as well. Generally, the bitmap part of the metafile will convert with success, but the accuracy of the conversion of the vector part will depend on how well it matches the format to which you are converting. An example of a metafile-to-metafile conversion is Microsoft Windows Metafile (WMF) to CGM.

Vector and Metafile to Bitmap

Converting vector and metafile format files to bitmap format files is generally quite easy. A vector image can be turned into a bitmap simply by dividing it up into component pixels and then writing those pixels to an array in memory in the memory equivalent of a contrasting color. The array can then be saved in a bitmap format file. This process is familiar to users of paint programs, where a mouse or stylus is used to draw geometrical shapes which appear as series of lines on the screen. When the data is written out to disk, however, it is stored in a bitmap file as a series of colored pixels rather than as mathematical data describing the position of the lines making up the image.

The ultimate quality of the resulting bitmap image will depend on the resolution of the bitmap being rendered to and the complexity (color, pixel depth, and image features) of the original vector image. The most common problem occurring with this type of conversion is aliasing, sometimes known as the jaggies, in which arcs and diagonal lines take on a staircase appearance. This is partly due to the relatively low resolution of the output bitmap compared to that necessary to adequately support rendering of the output image.

The conversion of ASCII metafile data to binary bitmap data is usually the most complicated and time-consuming part of metafile conversion. As mentioned above in the section discussing the three basic formats, many metafile formats also contain a bitmap image. If conversion from vector to bitmap data achieves poor results, then converting the bitmap data to the desired format may not only result in a better job, but may also be a much quicker process. A metafile-to-bitmap conversion that is almost always successful is Microsoft Windows Metafile to Microsoft Windows Bitmap.

Bitmap and Metafile to Vector

Converting bitmap and metafile format files to vector format files is usually the hardest of all to perform, and rarely does it achieve any kind of usable results. This is due to the fact that complex image-processing algorithms and heuristics are necessary to find all the lines and edges in bitmap images. Once the outlines are found, they must be recognized and converted to their vector element equivalents, and each step is prone to error. Simple bitmap images may be approximated as vector images, usually as black-and-white line drawings, but more complex photographic-type images are nearly impossible to reproduce accurately. Nevertheless, commercial applications exist to provide various types of edge detection and vectorization. Edge detection remains an active area of research.

Another problem inherent in the conversion of bitmap format files to vector is that of color. Although most bitmap files incorporate many colors, vector formats seldom provide support for more than a few. The conversion of an original bitmap file to a vector file can result in a loss of color in the converted image. Metafiles also have the same problems associated with the conversion of their bitmap components, although many metafile formats are capable of handling the colors found in the original raster image data. Close vector reproductions of bitmap images are not usually possible unless the bitmap data is very simple.

Bitmap and Vector to Metafile

Converting bitmap format files to metafiles can be quite accurate because most metafile format files can contain a bitmap image as well. Vector format source files have a more limited range of metafile target formats to which they can successfully convert. Problems encountered in this type of conversion are the same as those occurring in bitmap-to-vector conversions. A common conversion of this type is the conversion of binary bitmap or vector image files to an ASCII metafile format such as PostScript. Although PostScript is covered only briefly in this book, it is widely used for file interchange, particularly on the Macintosh platform. Such conversions lend portability to image data designed to be moved between machines or which may be directed to a number of different output devices.

Other Format Conversion Considerations

There are other problems that occur when converting from one file format to another. One of the most vexing comes up when converting to a destination format that supports fewer colors than are contained in the original image. Even when the number of colors in an image is not a problem, the specific colors contained in the original image may be. For example, consider the conversion of a 256-color image from one format to another. Both formats support images with up to 256 different colors in the bitmap, so the number of colors is not a problem. What is a problem, however, is that the source format chooses the colors that can go in the palette from a field of 16 million (24-bit palette), while the target format can store colors only from a range of 65,536 (16-bit palette). It is quite likely that the source image will contain colors not defined in the palette of the target image. The application doing the conversion will have to rely on some color-aliasing scheme, which usually fails to provide satisfactory results.
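As an illustration of the precision lost in such a conversion, the following fragment reduces a 24-bit palette entry to a hypothetical 16-bit 5-6-5 layout (five bits of red, six of green, five of blue) and expands it back. The exact bit layout varies by format, so treat this as a sketch rather than any particular format's rule:

typedef unsigned short WORD16;

/* Pack 8-bit R, G, B components into a 5-6-5 16-bit value. */
WORD16 PackRgb565(int r, int g, int b)
{
    return (WORD16)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* Expand a 5-6-5 value back to 8-bit components. The low bits
** discarded by PackRgb565() are gone for good; the expansion
** merely replicates high bits to fill the 8-bit range.
*/
void UnpackRgb565(WORD16 w, int *r, int *g, int *b)
{
    *r = ((w >> 11) & 0x1F) << 3;  *r |= *r >> 5;
    *g = ((w >>  5) & 0x3F) << 2;  *g |= *g >> 6;
    *b = ( w        & 0x1F) << 3;  *b |= *b >> 5;
}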


Chapter 8

Working With Graphics Files

This chapter provides some general guidance on reading and writing graphics files stored in the various formats described in this book, as well as examples of code you can use in your own programs. (For many additional code examples for specific file formats, see the CD-ROM that accompanies this book.) In this chapter we also discuss the use of test files, and we touch upon what it means to develop your own file format (if you dare!).

Reading Graphics Data

A graphics format file reader is responsible for opening and reading a file, determining its validity, and interpreting the information contained within it. A reader may take as its input source image data either from a file or from the data stream of an input device, such as a scanner or frame buffer.

There are two types of reader designs in common use. The first is the filter. A filter reads a data source one character at a time and collects that data for as long as it is available from the input device or file. The second type is the scanner (also called a parser). Scanners are able to access data randomly across the entire data source. Unlike filters, which cannot back up or skip ahead, scanners can read and reread data anywhere within the file. The main difference between filters and scanners is the amount of memory they use. Although filters are limited in the manner in which they read data, they require only a small amount of memory in which to perform their function. Scanners, on the other hand, are not limited in how they read data, but as a tradeoff may require a large amount of memory or disk space in which to store data.
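The difference is easy to see in code. Below is a minimal sketch of the two designs; ProcessByte() is an imaginary stand-in for whatever the reader does with each byte:

#include <stdio.h>

extern void ProcessByte(int c);   /* hypothetical per-byte handler */

/* Filter: consume the stream sequentially; no seeking, little memory. */
void FilterRead(FILE *fp)
{
    int c;

    while ((c = fgetc(fp)) != EOF)
        ProcessByte(c);
}

/* Scanner: jump around the file at will; here, read a byte at a
** known offset, then back up to the beginning and start over.
*/
void ScannerRead(FILE *fp)
{
    int c;

    fseek(fp, 128L, SEEK_SET);    /* skip directly to offset 128 */
    c = fgetc(fp);
    ProcessByte(c);
    fseek(fp, 0L, SEEK_SET);      /* return to the start of the file */
}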

Because most image files are quite large, make sure that your readers are highly optimized to read file information as quickly as possible. Graphics and


imaging applications are often harshly judged by users based on the amount of time it takes them to read and display an image file. One curiosity of user interface lore states that an application that renders an image on an output device one scan line at a time will be perceived as slower than an application that waits to paint the screen until the entire image has been assembled, even though in reality both applications may take the same amount of time.

Binary Versus ASCII Readers

Binary image readers must read binary data written in 1-, 2-, and 4-byte word sizes and in one of several different byte-ordering schemes. Bitmap data should be read in large chunks and buffered in memory for faster performance, as opposed to being read one pixel or scan line at a time. ASCII-format image readers require highly optimized string reader and parser functions capable of quickly finding and extracting pertinent information from a string of characters and converting ASCII strings into numerical values.
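As a trivial example of the ASCII side, the fragment below pulls a keyword and a numeric value out of a line such as "Width 512". Real ASCII formats (DXF, for instance) are more involved, but the string-to-number conversion step is the same; the function and keyword here are purely illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parse one "keyword value" line; return the value if the
** keyword matches, or -1 if it does not.
*/
long GetKeywordValue(const char *line, const char *keyword)
{
    char name[32];
    char value[32];

    if (sscanf(line, "%31s %31s", name, value) == 2 &&
        strcmp(name, keyword) == 0)
        return(atol(value));    /* convert ASCII digits to a number */
    return(-1L);
}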

Reading a Graphics File Header

The type of code you use to implement a reader will vary greatly, depending upon how data is stored in a graphics file. For example, PCX files contain only little-endian binary data; Encapsulated PostScript files contain both binary and ASCII data; TIFF files contain binary data that may be stored in either the big- or little-endian byte order; and AutoCAD DXF files contain only ASCII data.

Many graphics files that contain only ASCII data may be parsed one character at a time. Usually, a loop and a series of rather elaborate nested case statements are used to read through the file and identify the various tokens of keywords and values. The design and implementation of such a text parser is not fraught with too many perils. Where you can find some real gotchas is in working with graphics files containing binary data, such as the contents of most bitmap files. A few words of advice are in order, so that when you begin to write your own graphics file readers and writers you don't run into too many problems (and inadvertently damage your fists with your keyboard!).

When you read most bitmap files, you'll find that the header is the first chunk of data stored in the file. Headers store attributes of the graphics data that may change from file to file, such as the height and width of the image and the number of colors it contains. If a format always stored images of the same size,


type, and number of colors, a header wouldn't be necessary. The values for that format would simply be hard-coded into the reader. As it is, most bitmap file formats have headers, and your reader must know the internal structure of the header of each format it is to read. A program that reads a single bitmap format may be able to get away with seeking to a known offset location and reading only a few fields of data. However, more sophisticated formats containing many fields of information require that you read the entire header.

Using the C language, you might be tempted to read in a header from a file using the following code:

typedef struct _Header
{
    DWORD FileId;
    BYTE  Type;
    WORD  Height;
    WORD  Width;
    WORD  Depth;
    CHAR  FileName[81];
    DWORD Flags;
    BYTE  Filler[32];
} HEADER;

HEADER header;
FILE *fp = fopen("MYFORMAT.FOR", "rb");

if (fp)
    fread(&header, sizeof(HEADER), 1, fp);

Here we see a typical bitmap file format header defined as a C language structure. The fields of the header contain information on the size, color, type of image, attribute flags, and name of the image file itself. The fields in the header range from one to 81 bytes in size, and the entire structure is padded out to a total length of 128 bytes.

The first potential gotcha may occur even before you read the file. It lies waiting for you in the fopen() function. If you don't indicate that you are opening the graphics file for reading as a binary file (by specifying the "rb" in the second argument of the fopen() parameter list), you may find that extra carriage returns and/or line feeds appear in your data in memory that are not in the graphics file. This is because fopen() opens files in text mode by default.


In C++, you need to OR the ios::binary value into the mode argument of the fstream or ifstream constructor:

fstream *fs = new fstream("MYFORMAT.FOR", ios::in | ios::binary);
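A note on types: BYTE, WORD, DWORD, SHORT, LONG, and CHAR are not standard C types. The examples in this chapter assume definitions along the following lines, for a compiler with 16-bit shorts and 32-bit longs (typical of the machines discussed here):

typedef unsigned char  BYTE;    /*  8-bit unsigned  */
typedef unsigned short WORD;    /* 16-bit unsigned  */
typedef unsigned long  DWORD;   /* 32-bit unsigned  */
typedef short          SHORT;   /* 16-bit signed    */
typedef long           LONG;    /* 32-bit signed    */
typedef char           CHAR;    /*  8-bit character */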

After you have opened the graphics file successfully, the next step is to read the header. The code we chose to read the header in this example is the fread() function, which is most commonly used for reading chunks of data from a file stream. Using fread(), you can read the entire header with a single function call. A good idea, except that in using fread() you are likely to encounter problems. You guessed it, the second gotcha!

A common problem you may encounter when reading data into a structure is that of the boundary alignment of elements within the structure. On most machines, it is usually more efficient to align each structure element to begin on a 2-, 4-, 8-, or 16-byte boundary. Because aligning structure elements is the job of the compiler, and not the programmer, the effects of such alignment are not always obvious. Just as we added padding at the end of our header to bring it to a length of 128 bytes, compilers add invisible padding to structures to do the following:

• Start the structure on a word boundary (an even memory address).

• Align each element on the desired word or doubleword boundary.

• Ensure that the size of the structure is an even number of bytes (16-bit machines) or is divisible by four (32-bit machines).

The padding takes the form of invisible elements that are inserted between the visible elements the programmer defines in the structure. Although this invisible padding is not directly accessible, it is as much a part of the structure as any visible element. For example, the following structure will be five, six, or eight bytes in size if it is compiled using a 1-, 2-, or 4-byte word alignment:

typedef struct _test
{
    BYTE  A;    /* One byte   */
    DWORD B;    /* Four bytes */
} TEST;

With 1-byte alignment, there is no padding, and the structure is five bytes in size, with element B beginning on an odd-byte boundary. With 2-byte alignment, one byte of padding is inserted between elements A and B to allow element B to begin on the next even-byte boundary. With 4-byte alignment, three bytes of padding are inserted between A and B, allowing element B to begin on the next even-word boundary.

Determining the Size of a Structure

At runtime, you can use the sizeof() operator to determine the size of a structure:

typedef struct _test
{
    BYTE  A;
    DWORD B;
} TEST;

printf("TEST is %u bytes in length\n", sizeof(TEST));

Because most ANSI C compilers don't allow the use of sizeof() in a preprocessor statement, you can check the length of the structure at compile time by using a slightly more clever piece of code:

/*
** Test if the size of TEST is five bytes or not. If not, the array
** type CHECKSIZEOFTEST[] will be declared to have zero elements, and
** a compile-time error will be generated on all ANSI C compilers.
** Note that the use of a typedef causes no memory to be allocated
** if the sizeof() test is true. And please, document all such
** tests in your code so other programmers will know what the heck
** you are attempting to do.
*/
typedef char CHECKSIZEOFTEST[sizeof(TEST) == 5];

The gotcha here is that the fread() function will write data into the padding when you expected it to be written to an element. If you used fread() to read five bytes from a file into our 4-byte-aligned TEST structure, you would find that the first byte ended up correctly in element A, but that bytes 2, 3, and 4 were stored in the padding and not in element B as you had expected. Element B will instead store only byte 5, and its last three bytes will contain garbage.

There are several steps involved in solving this problem. First, attempt to design a structure so each field naturally begins on a 2-byte (for 16-bit machines) or 4-byte (for 32-bit machines) boundary. Then, whether the compiler's byte-alignment flag is turned on or off, no changes will occur in the structure.


When defining elements within a structure, you also want to avoid using the INT data type. An INT is two bytes on some machines and four bytes on others. If you use INTs, you'll find that the size of a structure will change between 16- and 32-bit machines even though the compiler is not performing any word alignment on the structure. Always use SHORT to define a 2-byte integer element and LONG to specify a 4-byte integer element, or use WORD and DWORD to specify their unsigned equivalents.

When you read an image file header, you typically don't have the luxury of being the designer of the file's header structure. Your structure must exactly match the format of the header in the graphics file. If the designer of the graphics file format didn't think of aligning the fields of the header, then you're out of luck.

Second, compile the source code module that contains the structure with a flag indicating that structure elements should not be aligned (/Zp1 for Microsoft C++ and -a1 for Borland C++). Optionally, you can put the #pragma directive for this compiler option around the structure; the result is that only the structure is affected by the alignment restriction and not the rest of the module.

This, however, is not a terribly good solution. As we have noted, by aligning all structure fields on a 1-byte boundary, the CPU will access the structure data in memory less efficiently. If you are reading and writing the structure only once or twice, as is the case with many file format readers, you may not care how quickly the header data is read. You must also make sure that, whenever anybody compiles your source code, they use the 1-byte structure alignment compiler flag. Depending on which machine is executing your code, failure to use this flag may cause problems reading image files. Naming conventions may also differ for #pragma directives between compiler vendors; on some compilers, the byte-alignment #pragma directives might not be supported at all.

Finally, we must face the third insidious gotcha—the native byte order of the CPU. If you attempt to fread() a graphics file header containing data written in the little-endian byte order on a big-endian machine (or big-endian data in a file on a little-endian machine), you will get nothing but byte-twiddled garbage. The fread() function cannot perform the byte-conversion operations necessary to read the data correctly, because it can only read data using the native byte order of the CPU. At this point, if you are thinking "But I'm not going to read in each header field separately!" you are in for a rather rude change of your implementation paradigm!


Reading each field of a graphics file header into the elements of a structure, and performing the necessary byte-order conversions, is how it's done. If you are worried about efficiency, just remember that a header is usually read from a file into memory only once, and you are typically reading less than 512 bytes of data—in fact, typically much less than that. We doubt if the performance meter in your source code profiler will show much of a drop. So, how do we read in the header fields one element at a time? We could go back to our old friend fread():

HEADER header;

fread(&header.FileId,   sizeof(header.FileId),   1, fp);
fread(&header.Type,     sizeof(header.Type),     1, fp);
fread(&header.Height,   sizeof(header.Height),   1, fp);
fread(&header.Width,    sizeof(header.Width),    1, fp);
fread(&header.Depth,    sizeof(header.Depth),    1, fp);
fread(&header.FileName, sizeof(header.FileName), 1, fp);
fread(&header.Flags,    sizeof(header.Flags),    1, fp);
fread(&header.Filler,   sizeof(header.Filler),   1, fp);

While this code reads in the header data and stores it in the structure correctly (regardless of any alignment padding), fread() still reads the data in the native byte order of the machine on which it is executing. This is fine if you are reading big-endian data on a big-endian machine, or little-endian data on a little-endian machine, but not if the byte order of the machine is different from the byte order of the data being read. It seems that what we need is a filter that can convert data to a different byte order.

If you have ever written code that diddled the byte order of data, then you have probably written a set of SwapBytes functions to exchange the position of bytes within a word of data. Your functions probably looked something like this:

/*
** Swap the bytes within a 16-bit WORD.
*/
WORD SwapTwoBytes(WORD w)
{
    register WORD tmp;

    tmp = (w & 0x00FF);
    tmp = ((w & 0xFF00) >> 0x08) | (tmp << 0x08);
    return(tmp);
}

/*
** Swap the bytes within a 32-bit DWORD.
*/
DWORD SwapFourBytes(DWORD w)
{
    register DWORD tmp;

    tmp = (w & 0x000000FF);
    tmp = ((w & 0x0000FF00) >> 0x08) | (tmp << 0x08);
    tmp = ((w & 0x00FF0000) >> 0x10) | (tmp << 0x08);
    tmp = ((w & 0xFF000000) >> 0x18) | (tmp << 0x08);
    return(tmp);
}

Because words come in two sizes, you need two functions: SwapTwoBytes() and SwapFourBytes()—for those of you in the C++ world, you'll just write two overloaded functions, or a function template, called SwapBytes(). Of course, you can swap signed values just as easily by writing two more functions which substitute the data types SHORT and LONG for WORD and DWORD. Using our SwapBytes functions, we can now read in the header as follows:

HEADER header;

fread(&header.FileId, sizeof(header.FileId), 1, fp);
header.FileId = SwapFourBytes(header.FileId);
fread(&header.Type, sizeof(header.Type), 1, fp);
fread(&header.Height, sizeof(header.Height), 1, fp);
header.Height = SwapTwoBytes(header.Height);
fread(&header.Width, sizeof(header.Width), 1, fp);
header.Width = SwapTwoBytes(header.Width);
fread(&header.Depth, sizeof(header.Depth), 1, fp);
header.Depth = SwapTwoBytes(header.Depth);
fread(&header.FileName, sizeof(header.FileName), 1, fp);
fread(&header.Flags, sizeof(header.Flags), 1, fp);
header.Flags = SwapFourBytes(header.Flags);
fread(&header.Filler, sizeof(header.Filler), 1, fp);

We can read in the data using fread() and can swap the bytes of the WORD- and DWORD-sized fields using our SwapBytes functions. This is great if the byte order of the data doesn't match the byte order of the CPU, but what if it does? Do we need two separate header-reading functions, one with the SwapBytes functions and one without, to ensure that our code will work on most machines? And how do we tell at runtime what the byte order of a machine is? Take a look at this example:

#define LSB_FIRST 0
#define MSB_FIRST 1

/*
** Check the byte-order of the CPU.
*/
int CheckByteOrder(void)
{
    SHORT w = 0x0001;
    CHAR *b = (CHAR *) &w;

    return(b[0] ? LSB_FIRST : MSB_FIRST);
}

The function CheckByteOrder() returns the value LSB_FIRST if the machine is little-endian (the little end comes first) and MSB_FIRST if the machine is big-endian (the big end comes first). This function will work correctly on all big- and little-endian machines. Its return value is undefined for middle-endian machines (like the PDP-11).

Let's assume that the data format of our graphics file is little-endian. We can check the byte order of the machine executing our code and can call the appropriate reader function, as follows:

int byteorder = CheckByteOrder();

if (byteorder == LSB_FIRST)
    ReadHeaderAsLittleEndian();
else
    ReadHeaderAsBigEndian();

The function ReadHeaderAsLittleEndian() would contain only the fread() calls, and ReadHeaderAsBigEndian() would contain both the fread() calls and the SwapBytes functions.

But this is not very elegant. What we really need is a replacement for both the fread() and SwapBytes functions that can read WORDs and DWORDs from a data file, making sure that the returned data is in the byte order we specify. Consider the following functions:

/*
** Get a 16-bit word in either big- or little-endian byte order.
*/
WORD GetWord(char byteorder, FILE *fp)
{
    register WORD w;

    if (byteorder == MSB_FIRST)
    {
        w = (WORD)(fgetc(fp) & 0xFF);
        w = ((WORD)(fgetc(fp) & 0xFF)) | (w << 0x08);
    }
    else   /* LSB_FIRST */
    {
        w  = (WORD)(fgetc(fp) & 0xFF);
        w |= ((WORD)(fgetc(fp) & 0xFF) << 0x08);
    }
    return(w);
}

/*
** Get a 32-bit word in either big- or little-endian byte order.
*/
DWORD GetDword(char byteorder, FILE *fp)
{
    register DWORD w;

    if (byteorder == MSB_FIRST)
    {
        w = (DWORD)(fgetc(fp) & 0xFF);
        w = ((DWORD)(fgetc(fp) & 0xFF)) | (w << 0x08);
        w = ((DWORD)(fgetc(fp) & 0xFF)) | (w << 0x08);
        w = ((DWORD)(fgetc(fp) & 0xFF)) | (w << 0x08);
    }
    else   /* LSB_FIRST */
    {
        w  =   (DWORD)(fgetc(fp) & 0xFF);
        w |= (((DWORD)(fgetc(fp) & 0xFF)) << 0x08);
        w |= (((DWORD)(fgetc(fp) & 0xFF)) << 0x10);
        w |= (((DWORD)(fgetc(fp) & 0xFF)) << 0x18);
    }
    return(w);
}

The GetWord() and GetDword() functions will read a word of data from a file

stream in either byte order (specified in their first argument). Valid values are LSB_FIRST and MSB_FIRST. Now, let's look at what reading a header is like using the GetWord() and GetDword() functions. Notice that we now read in the single-byte field Type using fgetc() and that fread() is still the best way to read in blocks of byte-aligned data:

HEADER header;
int byteorder = CheckByteOrder();

header.FileId = GetDword(byteorder, fp);
header.Type   = fgetc(fp);
header.Height = GetWord(byteorder, fp);
header.Width  = GetWord(byteorder, fp);
header.Depth  = GetWord(byteorder, fp);
fread(&header.FileName, sizeof(header.FileName), 1, fp);
header.Flags = GetDword(byteorder, fp);
fread(&header.Filler, sizeof(header.Filler), 1, fp);

All we need to do now is to pass the byte order of the data being read to the GetWord() and GetDword() functions. The data is then read correctly from the file stream regardless of the native byte order of the machine on which the functions are executing. The techniques we've explored for reading a graphics file header can also be used for reading other data structures in a graphics file, such as color maps, page tables, scan-line tables, tags, footers, and even pixel values themselves.

Reading Graphics File Data

In most cases, you will not find any surprises when you read image data from a graphics file. Compressed image data is normally byte-aligned and is simply read one byte at a time from the file and into memory before it is decompressed. Uncompressed image data is often stored only as bytes, even when the pixels are two, three, or four bytes in size. You will also usually use fread() to read a block of compressed data into a memory buffer that is typically 8K to 32K in size. The compressed data is read from memory a single byte at a time and decompressed, and the raw data is written either to video memory for display, or to a bitmap array for processing and analysis.

Many bitmap file formats specify that scan lines (or tiles) of 1-bit image data should be padded out to the next byte boundary. This means that, if the width of an image is not a multiple of eight, then you probably have a few extra zeroed bits tacked onto the end of each scan line (or the end and/or bottom of each tile). For example, a 1-bit image with a width of 28 pixels will contain 28 bits of scan-line data followed by four bits of padding, creating a scan line 32 bits in length. The padding allows the next scan line to begin on a byte boundary, rather than in the middle of a byte.

You must determine whether the uncompressed image data contains scan-line padding. The file format specification will tell you if padding exists. Usually, the padding is loaded into display memory with the image data, but the size of the display window (the part of display memory actually visible on the screen) must be adjusted so that the padding data is not displayed.
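The padded scan-line width is easy to compute by rounding the bit width up to the next byte boundary. A small sketch (the same arithmetic applies when you must re-add the padding while writing):

/*
** Number of bytes per scan line, with each line padded out
** to the next byte boundary.
*/
long BytesPerScanLine(long widthInPixels, int bitsPerPixel)
{
    return (widthInPixels * bitsPerPixel + 7) / 8;
}

For the 28-pixel example above, BytesPerScanLine(28, 1) returns 4 bytes—the 32-bit padded scan line described in the text.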


Writing Graphics Data

As you might guess, writing a graphics file is basically the inverse of reading it. Writers may send data directly to an output device, such as a printer, or they may create image files and store data in them.

Writing a Graphics File Header

When you write a graphics file header, you must be careful to initialize all of the fields in the header with the correct data. Initialize reserved fields used for

fill and padding with the value 00h, unless the file format specification states otherwise. You must also write the header data to the graphics file using the correct byte order for the file format. Because the GetWord() and GetDword() functions were so handy for correctly reading a header, their siblings PutWord() and PutDword() must be just as handy for writing one:

/*
** Put a 16-bit word in either big- or little-endian byte order.
*/
void PutWord(char byteorder, FILE *fp, WORD w)
{
    if (byteorder == MSB_FIRST)
    {
        fputc((w >> 0x08) & 0xFF, fp);
        fputc( w          & 0xFF, fp);
    }
    else   /* LSB_FIRST */
    {
        fputc( w          & 0xFF, fp);
        fputc((w >> 0x08) & 0xFF, fp);
    }
}

/*
** Put a 32-bit word in either big- or little-endian byte order.
*/
void PutDword(char byteorder, FILE *fp, DWORD w)
{
    if (byteorder == MSB_FIRST)
    {
        fputc((w >> 0x18) & 0xFF, fp);
        fputc((w >> 0x10) & 0xFF, fp);
        fputc((w >> 0x08) & 0xFF, fp);
        fputc( w          & 0xFF, fp);
    }
    else   /* LSB_FIRST */
    {
        fputc( w          & 0xFF, fp);
        fputc((w >> 0x08) & 0xFF, fp);
        fputc((w >> 0x10) & 0xFF, fp);
        fputc((w >> 0x18) & 0xFF, fp);
    }
}

In the following example, we use fwrite(), PutWord(), and PutDword() to write out our header structure. Note that the byteorder argument in PutWord() and PutDword() indicates the byte order of the file we are writing (in this case, little-endian), and not the byte order of the machine on which the functions are being executed. Also, we indicate in fopen() that the output file is being opened for writing in binary mode (wb):

typedef struct _Header
{
    DWORD FileId;
    BYTE  Type;
    WORD  Height;
    WORD  Width;
    WORD  Depth;
    CHAR  FileName[81];
    DWORD Flags;
    BYTE  Filler[32];
} HEADER;

int WriteHeader(void)
{
    HEADER header;
    FILE *fp = fopen("MYFORMAT.FOR", "wb");

    if (fp)
    {
        header.FileId = 0x91827364;
        header.Type   = 3;
        header.Depth  = 8;
        header.Height = 512;
        header.Width  = 512;
        strncpy((char *)header.FileName, "MYFORMAT.FOR",
                sizeof(header.FileName));
        header.Flags  = 0x04008001;
        memset(&header.Filler, 0, sizeof(header.Filler));

        PutDword(LSB_FIRST, fp, header.FileId);
        fputc(header.Type, fp);
        PutWord(LSB_FIRST, fp, header.Height);
        PutWord(LSB_FIRST, fp, header.Width);
        PutWord(LSB_FIRST, fp, header.Depth);
        fwrite(&header.FileName, sizeof(header.FileName), 1, fp);
        PutDword(LSB_FIRST, fp, header.Flags);
        fwrite(&header.Filler, sizeof(header.Filler), 1, fp);
        fclose(fp);
        return(0);
    }
    return(1);
}

Writing Graphics File Data

Writing binary graphics data can be a little more complex than just making sure you are writing data in the correct byte order. Many formats specify that each scan line is to be padded out to end on a byte or word boundary if it does not naturally do so. When the scan-line data is read from the file, this padding (usually a series of zero bit values) is thrown away, but it must be added again later if the data is written to a file.

Image data that is written uncompressed to a file may require a conversion before it is written. If the data is being written directly from video memory, it may be necessary to convert the orientation of the data from pixels to planes, or from planes to pixels, before writing the data. If the data is to be stored in a compressed format, the quickest approach is to compress the image data in memory and use fwrite() to write the image data to the graphics file. If you don't have a lot of memory to play with, then write out the compressed image data as it is encoded, usually one scan line at a time.

Test Files

How can you be sure that your application supports a particular file format? Test, test, test... Files adhering to the written format specification, and software applications that work with them, are called fully conforming, or just conforming. Fully conforming software should always make the conservative choice if there is any ambiguity in the format specification. Of course it's not always clear what conservative means in any given context, but the point is to not extend the format if given the chance, no matter how tempting the opportunity. In any case,


if you write software to manipulate graphics files, you'll need some fully conforming files to test the code. If you happen to be working with TIFF, GIF, or PCX, files in these formats are often just a phone call away, but beware. Files are not necessarily fully conforming, and there is no central registry of conforming files. In some cases, effort has gone into making sets of canonical test files; for example, some are included in the TIFF distribution.

The trick is to get a wide spectrum of files from a number of sources and to do extensive testing on everything you find—every size, resolution, and variant that can be found. Another strategy is to acquire files created by major applications and test for compatibility. For example, Aldus PageMaker files are often used to test TIFF files. Unfortunately, PageMaker has historically been the source of undocumented de facto TIFF revisions, and testing compatibility with PageMaker-produced TIFF files does not produce wholly accurate results. Any number of graphics file conversion programs exist which can do screen captures, convert files, and display graphics and images in many different formats. Once again, you are at the mercy of the diligence of the application author.

Remember, too, that it's your responsibility to locate the latest format revisions. We still see new GIF readers, for instance, on all platforms, which fail to support the latest specification. If you simply cannot locate an example of the format you need, your last resort will always be the format creator or caretaker. While this approach is to be used with discretion, and is by no means guaranteed, it sometimes is your only recourse, particularly if the format is very new. In any case, don't stop coding simply because one variation of an image file format can be read and converted successfully by your application. Test your application using as many variations of the format, and from as many different sources as possible. Have your application do its best to support the image format fully.

Designing Your Own Format

We find it hard to imagine why anyone would think that the world needs another graphics file format. And, in fact, we don't want to give the impression that we're encouraging such behavior. But given the fact that people can and will create new formats, we'd like to leave you with some pointers.


Why Even Consider It?

The truth is that this book does not even begin to include all of the hundreds of more obscure formats, some of which are used only privately and remain inside company walls. Companies wishing the output of their products to remain proprietary will always find a way to do so and thus will continue to develop new formats. Designing your own format also will help you avoid trouble should use of someone else's format one day be restricted through legal action. Kodak has taken steps to ensure that its Photo CD format remains proprietary, for instance. It has also been active in enforcing that protection through the threat of legal action. Something to remember is that even though many formats may appear to be publicly available, very few actually are.

Of course there are functional reasons for designing your own format. You may decide that an appropriate format doesn't yet exist, for instance, and thus feel compelled to create a new one. Reasoning leading to this decision is always suspect, however, and sending yet another format out into the world might even decrease your market share in this era of increasing interoperability and file sharing. The unfortunate reality is that file formats are usually created to support applications after the fact. In the modern world, marketing decisions and speculation about the future evolution of the supporting operating system and hardware platform play large parts in the development of program specifications from the very start. So we urge you to consider designing your application around a set of existing formats, or at least a format model.

But If You Must...

With that said, consider the following guidelines if you persist in designing your own:

• Study everybody else's mistakes. No matter what you think, you're not smart enough to avoid them all, unless you see them first. • Plan for future revisions. Give plenty of room for future expansion of, and changes to, data. Avoid building limitations into your format that will one day force you to make kludges to support further revisions. • Keep it simple. The last thing the world needs is another "write-only" format. The object is to make it easy to read, not easy to write. • Document everything! Use consistent terminology that everyone understands and will continue to understand until the end of time. Number your documentation revisions with the format revisions; that way, it will be obvious if you "forget" to document a new feature.


• Write the format before, not after, your application. Build the application around it. Don't make convenience revisions no matter what the provocation.

• Avoid machine dependencies; at the same time, don't add complications designed to support portability.

• Find some unambiguous means by which a reading application can identify the format (see the sketch following this list).

• Make the specification available. Do not discourage people interested in your format by refusing to supply them with information simply because they are not a registered user of your product. Charge a nominal fee, if you must, but make it available to all. Make identical electronic and printed versions of the specification in some common format like ASCII or PostScript. Place electronic versions on BBSs and FTP sites. • If your format is truly superior, market, market, market! Write software applications that use your format. Formats gain currency almost entirely through the marketing power of companies and their software. If your format is unique in the way it stores information, and you feel that it fills a niche that other formats don't, then advertise that fact. • Explicitly place the format in the public domain. If that is too threatening in the context of your company model, allow use with attribution. Do not discourage the spread of your format by including threats about copyright infringement and proprietary information in the specification. This only prevents your format from becoming widely accepted, and it alienates programmers who would otherwise be happy to further your company's plan of world domination for free.

• Develop canonical test files and make them freely available. Mark them as such in the image, with your company name, the format type, color model, resolution, and pixel depth at the very least. They will be a good form of advertising for you and your company, and are sure to be widely distributed with no effort on your part.
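On the subject of unambiguous identification: most formats reserve the first few bytes of the file for a fixed "magic number" that a reader can verify before doing anything else. A minimal sketch, assuming a hypothetical format whose files begin with the 4-byte identifier 0x91827364 (the FileId used in this chapter's example header) stored in little-endian order:

#include <stdio.h>

#define MYFORMAT_MAGIC 0x91827364UL

/* Return 1 if the file starts with our magic number, 0 if not. */
int IsMyFormat(FILE *fp)
{
    unsigned long magic = 0;
    int i;

    /* Assemble the first four bytes in little-endian order. */
    for (i = 0; i < 4; i++)
        magic |= ((unsigned long)(fgetc(fp) & 0xFF)) << (i * 8);

    return(magic == MYFORMAT_MAGIC);
}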

One Last Word

Remember that there is a lot of code already out there and that there are plenty of libraries available in source form that may be able to supply your needs. Consider this statement from the FAQ (Frequently Asked Questions list) of the comp.graphics newsgroup on the Internet:

    Format documents for TIFF, IFF, BIFF, NFF, OFF, FITS, etc. You almost
    certainly don't need these. Read the [section] on free image manipulation
    software. Get one or more of these packages and look through them. Chances
    are excellent that the image converter you were going to write is already
    there.

For Further Information

An excellent article concerning the solution to byte-order conversion problems appeared in the now-extinct magazine, C Gazette. It was written by a fellow named James D. Murray!

Murray, James D. "Which Endian is Up?," C Gazette, Summer 1990.


Chapter 9

Data Compression

Introduction

Compression is the process used to reduce the physical size of a block of information. In computer graphics, we're interested in reducing the size of a block of graphics data so we can fit more information in a given physical storage space. We also might use compression to fit larger images in a block of memory of a given size.

You may find when you examine a particular file format specification that the term data encoding is used to refer to algorithms that perform compression. Data encoding is actually a broader term than data compression. Data compression is a type of data encoding, and one that is used to reduce the size of data. Other types of data encoding include encryption (cryptography) and data transmission (e.g., Morse code).

A compressor, naturally enough, performs compression, and a decompressor reconstructs the original data. Although this may seem obvious, a decompressor can operate only by using knowledge of the compression algorithm used to convert the original data into its compressed form. What this means in practice is that there is no way for you to avoid understanding the compression algorithms in use today if you are interested in manipulating data files. At a minimum, you need a good general knowledge of the conceptual basis of the algorithms.

If you read through a number of specification documents, you'll find that almost every format incorporates some kind of compression method, no matter how rudimentary. You'll also find that only a few different compression schemes are in common use throughout the industry. The most common of these schemes are variants of the following methods, which we discuss in the sections below.


• Run-length Encoding (RLE)

• Lempel-Ziv-Welch (LZW)

• CCITT (one type of CCITT compression is a variant of Huffman encoding)

• Discrete Cosine Transform (DCT) (used in the JPEG compression we discuss in this chapter)

In addition, we discuss pixel packing, which is not a compression method per se, but is an efficient way to store data in contiguous bytes of memory.

Data compression works somewhat differently for the three common types of graphics data: bitmap, vector, and metafile. In bitmap files, only the image data is compressed; the header and any other data (color map, footer, and so on) are always left uncompressed. This uncompressed data makes up only a small portion of a typical bitmap file.

Vector files normally do not incorporate a native form of data compression. Vector files store a mathematical description of an image rather than the image data itself. There are reasons why vector files are rarely compressed:

• The expression of the image data is already compact in design, so data compression would have little effect.

• Vector images are typically slow to read in the first place; adding the overhead of decompression would make reading them still slower.

If a vector file is compressed at all, you will usually find the entire file compressed, header and all. In metafiles, data compression schemes often resemble those used to compress bitmap files, depending upon the type of data the metafiles contain.

It's important to realize that compression algorithms do not describe a specific disk file format. Many programmers ask vendors or newsgroups for specifications for the CCITT or JPEG file formats. There are no such specifications. Compression algorithms define only how data is encoded, not how it is stored on disk. For detailed information on how data is stored on disk, look to an actual image file format specification, such as BMP or GIF, which will define file headers, byte order, and other issues not covered by discussions of compression algorithms. More complex formats, such as TIFF, may incorporate several different compression algorithms.

The following sections introduce the terms used in discussions of data compression and each of the main types of data compression algorithms used for graphics file formats today.


Data Compression Terminology

This section describes the terms you'll encounter when you read about data compression schemes in this chapter and in graphics file format specifications. The terms unencoded data and raw data describe data before it has been compressed, and the terms encoded data and compressed data describe the same information after it has been compressed.

The term compression ratio refers to the ratio of uncompressed data to compressed data. Thus, a 10:1 compression ratio is considered five times more efficient than 2:1, and data compressed using an algorithm yielding 10:1 compression is five times smaller than the same data compressed using an algorithm yielding 2:1 compression. In practice, because only image data is normally compressed, analysis of the compression ratios provided by various algorithms must take into account the absolute sizes of the files tested.

Physical and Logical Compression

Compression algorithms are often described as squeezing, squashing, crunching, or imploding data, but these are not very good descriptions of what is actually happening. Although the major use of compression is to make data use less disk space, compression does not actually physically cram the data into a smaller package in any meaningful sense. Instead, compression algorithms are used to re-encode data into a different, more compact representation conveying the same information. In other words, fewer words are used to convey the same meaning, without actually saying the same thing.

The distinction between physical and logical compression methods is made on the basis of how the data is compressed—or, more precisely, how the data is rearranged into a more compact form. Physical compression is performed on data exclusive of the information it contains; it only translates a series of bits from one pattern to another, more compact one. While the resulting compressed data may be related to the original data in a mechanical way, that relationship will not be obvious to us humans.

Physical compression methods typically produce strings of gibberish, at least relative to the information content of the original data. The resulting block of compressed data is smaller than the original because the physical compression algorithm has removed the redundancy that existed in the data itself. All the compression methods discussed in this chapter are physical methods.


Logical compression is accomplished through the process of logical substitution—that is, replacing one alphabetic, numeric, or binary symbol with another. Changing "United States of America" to "USA" is a good example of logical substitution, because "USA" is derived directly from the information contained in the string "United States of America" and retains some of its meaning. In a similar fashion, "can't" can be logically substituted for "cannot." Logical compression works only on data at the character level or higher and is based exclusively on information contained within the data. Logical compression itself is not used in image data compression.

Symmetrical and Asymmetrical Compression

Compression algorithms can also be divided into two categories: symmetric and asymmetric. A symmetric compression method uses roughly the same algorithms, and performs the same amount of work, for compression as it does for decompression. For example, a data transmission application where compression and decompression are both being done on the fly will usually require a symmetric algorithm for the greatest efficiency.

Asymmetric methods require substantially more work to go in one direction than they require in the other. Usually, the compression step takes far more time and system resources than the decompression step. In the real world this makes sense. For example, if we are making an image database in which an image will be compressed once for storage, but decompressed many times for viewing, then we can probably tolerate a much longer time for compression than for decompression. An asymmetric algorithm that uses much CPU time for compression, but is quick to decode, would work well in this case.

Algorithms that are asymmetric in the other direction are less common but have some applications. In making routine backup files, for example, we fully expect that many of the backup files will never be read. A fast compression algorithm that is expensive to decompress might be useful in this case.

Adaptive, Semi-adaptive, and Non-adaptive Encoding

Certain dictionary-based encoders, such as the CCITT compression algorithms (see the section later in this chapter called "CCITT (Huffman) Encoding"), are designed to compress only specific types of data. These non-adaptive encoders contain a static dictionary of predefined substrings that are known to occur with high frequency in the data to be encoded. A non-adaptive encoder designed specifically to compress English-language text would contain a dictionary with predefined substrings such as "and", "but", "of", and "the", because these substrings appear very frequently in English text.


An adaptive encoder, on the other hand, carries no preconceived heuristics about the data it is to compress. Adaptive compressors, such as LZW, achieve data independence by building their dictionaries completely from scratch. They do not have a predefined list of static substrings and instead build phrases dynamically as they encode. Adaptive compression is capable of adjusting to any type of data input and of returning output using the best possible compression ratio. This is in contrast to non-adaptive compressors, which are capable of efficiently encoding only the very select type of input data for which they were designed.

A mixture of these two dictionary encoding methods is the semi-adaptive encoding method. A semi-adaptive encoder makes an initial pass over the data to build the dictionary and a second pass to perform the actual encoding. Using this method, an optimal dictionary is constructed before any encoding is actually performed.

Lossy and Lossless Compression

The majority of compression schemes we deal with in this chapter are called lossless. What does this mean? When a chunk of data is compressed and then decompressed, the original information contained in the data is preserved. No data has been lost or discarded; the data has not been changed in any way.

Lossy compression methods, however, throw away some of the data in an image in order to achieve compression ratios better than those of most lossless compression methods. Some methods contain elaborate heuristic algorithms that adjust themselves to give the maximum amount of compression while changing as little of the visible detail of the image as possible. Other, less elegant algorithms might simply discard the least significant portion of each pixel and, in terms of image quality, hope for the best.

The terms lossy and lossless are sometimes erroneously used to describe the quality of a compressed image. Some people assume that if any image data is lost, this could only degrade the image; the assumption is that we would never want to lose any data at all. This is certainly true if our data consists of text or numerical data that is associated with a file, such as a spreadsheet or a chapter of our great American novel. In graphics applications, however, there are circumstances where data loss may be acceptable, and even recommended.

In practice, a small change in the value of a pixel may well be invisible, especially in high-resolution images where a single pixel is barely visible anyway. Images containing 256 or more colors may have selected pixel values changed with no noticeable effect on the image.


In black-and-white images, however, there is obviously no such thing as a small change in a pixel's value: each pixel can only be black or white. Even in black-and-white images, though, if a change simply moves the boundary between a black and a white region by one pixel, the change may be difficult to see and therefore acceptable.

As mentioned in Chapter 2, Computer Graphics Basics, the human eye is limited in the number of colors it can distinguish simultaneously, particularly if those colors are not immediately adjacent in the image or are sharply contrasting. An intelligent compression algorithm can take advantage of these limitations, analyze an image on this basis, and achieve significant data size reductions based on the removal of color information not easily noticed by most people. In case this sounds too much like magic, rest assured that much effort has gone into the development of so-called lossy compression schemes in recent years, and many of these algorithms can achieve substantial compression ratios while retaining good-quality images. This is an active field of research, and we are likely to see further developments as our knowledge of the human visual system evolves, and as results from the commercial markets regarding the acceptance of lossy images make their way back to academia. For more information about lossy compression, see the JPEG section later in this chapter.

Pixel Packing

Pixel packing is not so much a method of data compression as it is an efficient way to store data in contiguous bytes of memory. Most bitmap formats use pixel packing to conserve the amount of memory or disk space required to store a bitmap. If you are working with image data that contains four bits per pixel, you might find it convenient to store each pixel in a byte of memory, because a byte is typically the smallest addressable unit of memory on most computer systems. You would quickly notice, however, that with this arrangement half of each byte is not being used by the pixel data (shown in Figure 9-1, a). Image data containing 4096 4-bit pixels will require 4096 bytes of memory for storage, half of which is wasted.

To save memory, you could resort to pixel packing; instead of storing one 4-bit pixel per byte, you could store two 4-bit pixels per byte (shown in Figure 9-1, b). The amount of memory required to hold the 4-bit, 4096-pixel image drops from 4096 bytes to 2048 bytes, only half the memory that was required before.

[Figure 9-1: Pixel packing: (a) 4-bit unpacked pixels, one per byte; (b) 4-bit packed pixels, two per byte]

Pixel packing may seem like common sense, but it is not without cost. Memory-based display hardware usually organizes image data as an array of bytes, each storing one pixel or less. If this is the case, it will actually be faster to store only one 4-bit pixel per byte and read this data directly into memory in the proper format, rather than to store two 4-bit pixels per byte, which requires masking and shifting each byte of data to extract and write the proper pixel values. The tradeoff is faster read and write times versus a smaller image file.

This is a good example of one of the costs of data compression. Compression always has a cost; in this case, the cost is the time it takes to unpack each byte into two 4-bit pixels. Other factors may come into play when decompressing image data: buffers need to be allocated and managed, CPU-intensive operations must be executed and serviced, and scan-line bookkeeping must be kept current. If you are writing a file reader, you usually have no choice; you must support all compression schemes defined in the format specification. If you are writing a file writer, however, you always need to identify the costs and tradeoffs involved in writing compressed files.

At one time in the history of computing, there was no decision to be made; disk space was scarce and expensive, so image files needed to be compressed. Now, however, things are different. Hard disks are relatively inexpensive, and alternate distribution and storage media (CD-ROM, for instance) are even more so. More and more applications now write image files uncompressed by default. You need to examine carefully the target market of your application before deciding whether to compress or not.
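To make the masking and shifting concrete, here is a minimal C sketch of 4-bit pixel packing and unpacking. The function names and the high-nibble-first ordering are our own assumptions for illustration; actual formats define their own nibble order.

/* Pack pairs of 4-bit pixels (one per byte) into single bytes.
   pixelCount is assumed to be even; the first pixel of each
   pair goes into the high nibble. */
void Pack4BitPixels(const unsigned char *in, unsigned char *out, int pixelCount)
{
    int i;
    for (i = 0; i < pixelCount; i += 2)
        out[i / 2] = (unsigned char)((in[i] << 4) | (in[i + 1] & 0x0F));
}

/* Unpack each byte back into two 4-bit pixels, one per byte. */
void Unpack4BitPixels(const unsigned char *in, unsigned char *out, int pixelCount)
{
    int i;
    for (i = 0; i < pixelCount; i += 2)
    {
        out[i]     = (unsigned char)(in[i / 2] >> 4);   /* high nibble */
        out[i + 1] = (unsigned char)(in[i / 2] & 0x0F); /* low nibble  */
    }
}

The shifting and masking in Unpack4BitPixels is exactly the per-byte work that makes packed data slower to move into byte-per-pixel display memory.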

Run-length Encoding (RLE)

Run-length encoding is a data compression algorithm that is supported by most bitmap file formats, such as TIFF, BMP, and PCX. RLE is suited for compressing any type of data regardless of its information content, but the content of the data will affect the compression ratio achieved by RLE. Although most RLE algorithms cannot achieve the high compression ratios of the more advanced compression methods, RLE is both easy to implement and quick to execute, making it a good alternative to either using a complex compression algorithm or leaving your image data uncompressed.

RLE works by reducing the physical size of a repeating string of characters. This repeating string, called a run, is typically encoded into two bytes. The first byte represents the number of characters in the run and is called the run count. In practice, an encoded run may contain 1 to 128 or 256 characters; the run count is often stored as the number of characters minus one (a value in the range of 0 to 127 or 255). The second byte is the value of the character in the run, which is in the range of 0 to 255, and is called the run value.

Uncompressed, a character run of 15 A characters would normally require 15 bytes to store:

AAAAAAAAAAAAAAA

The same string after RLE encoding would require only two bytes:

15A

The 15A code generated to represent the character string is called an RLE packet. Here, the first byte, 15, is the run count and contains the number of repetitions. The second byte, A, is the run value and contains the actual repeated value in the run. A new packet is generated each time the run character changes, or each time the number of characters in the run exceeds the maximum count. Assume that

our 15-character string now contains four different character runs:

AAAAAAbbbXXXXXt

Using run-length encoding, this could be compressed into four 2-byte packets:

6A3b5X1t

Thus, after run-length encoding, the 15-byte string would require only eight bytes of data to represent the string, as opposed to the original 15 bytes. In this case, run-length encoding yielded a compression ratio of almost 2 to 1.

Long runs are rare in certain types of data. For example, ASCII plaintext (such as the text on the pages of this book) seldom contains long runs. In the previous example, the last run (containing the character t) was only a single character in length; a 1-character run is still a run. Both a run count and a run value must be written for every 2-character run. To encode a run in RLE requires a minimum of two characters' worth of information; therefore, a run of single characters actually takes more space. For the same reasons, data consisting entirely of 2-character runs remains the same size after RLE encoding. In our example, encoding the single character at the end as two bytes did not noticeably hurt our compression ratio because there were so many long character runs in the rest of the data. But observe how RLE encoding doubles the size of the following 14-character string:

Xtmprsqzntwlfb

After RLE encoding, this string becomes:

1X1t1m1p1r1s1q1z1n1t1w1l1f1b
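The basic encoding loop is only a few lines of C. The following sketch (the names are our own; real formats differ in maximum run length and in whether the count is stored minus one) emits 2-byte run-count/run-value packets, and exhibits both the near 2:1 gain on long runs and the doubling on runs of one:

/* Encode inLen bytes as (count, value) pairs; the count is
   stored directly and limited to 255. Returns the encoded
   length, which may exceed inLen for run-free data. */
int RleEncode(const unsigned char *in, int inLen, unsigned char *out)
{
    int i = 0, outLen = 0;
    while (i < inLen)
    {
        unsigned char value = in[i];
        int count = 1;
        while (i + count < inLen && in[i + count] == value && count < 255)
            count++;
        out[outLen++] = (unsigned char)count; /* run count */
        out[outLen++] = value;                /* run value */
        i += count;
    }
    return outLen;
}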

RLE schemes are simple and fast, but their compression efficiency depends on the type of image data being encoded. A black-and-white image that is mostly white, such as the page of a book, will encode very well, due to the large amount of contiguous data that is all the same color. An image with many colors that is very busy in appearance, however, such as a photograph, will not encode very well. This is because the complexity of the image is expressed as a large number of different colors, and because of this complexity there will be relatively few runs of the same color.

Variants on Run-length Encoding

There are a number of variants of run-length encoding. Image data is normally run-length encoded in a sequential process that treats the image data as a 1-dimensional stream, rather than as a 2-dimensional map of data. In sequential processing, a bitmap is encoded starting at the upper left corner and proceeding from left to right across each scan line (the X axis) to the bottom right corner of the bitmap (shown in Figure 9-2, a). But alternative RLE schemes can also be written to encode data down the length of a bitmap (the Y axis) along the columns (shown in Figure 9-2, b), to encode a bitmap into 2-dimensional tiles (shown in Figure 9-2, c), or even to encode pixels on a diagonal in a zig-zag fashion (shown in Figure 9-2, d). Odd RLE variants such as this last one might be used in highly specialized applications but are usually quite rare.

[Figure 9-2: Run-length encoding variants: (a) encoding along the X axis; (b) encoding along the Y axis; (c) encoding 4x4 pixel tiles; (d) zig-zag encoding]

Another seldom-encountered RLE variant is a lossy run-length encoding algorithm. RLE algorithms are normally lossless in their operation. However, discarding data during the encoding process, usually by zeroing out one or two least significant bits in each pixel, can increase compression ratios without adversely affecting the appearance of very complex images. This RLE variant works well only with real-world images that contain many subtle variations in pixel values.

Make sure that your RLE encoder always stops at the end of each scan line of bitmap data that is being encoded. There are several benefits to doing so. Encoding only a single scan line at a time means that only a minimal buffer size is required. Encoding only a single line at a time also prevents a problem known as cross-coding.

Cross-coding is the merging of scan lines that occurs when the encoding process loses the distinction between the original scan lines. If the data of the individual scan lines is merged by the RLE algorithm, the point where one scan line stopped and another began is lost, or at least is very hard to detect quickly.

Cross-coding is sometimes done, although we advise against it. It may buy a few extra bytes of data compression, but it complicates the decoding process, adding time cost. For bitmap file formats, this technique defeats the purpose of organizing a bitmap image by scan lines in the first place. Although many file format specifications explicitly state that scan lines should be individually encoded, many applications encode image data as a continuous stream, ignoring scan-line boundaries. If you have ever encountered an RLE-encoded image file that could be displayed using one application but not using another, cross-coding is typically the problem. To be safe, decoding and display applications must take cross-coding into account and not assume that an encoded run will always stop at the end of a scan line.

When an encoder is encoding an image, an end-of-scan-line marker is placed in the encoded data to inform the decoding software that the end of the scan line has been reached. This marker is usually a unique packet, explicitly defined in the RLE specification, which cannot be confused with any other data packets. End-of-scan-line markers are usually only one byte in length, so they don't adversely contribute to the size of the encoded data.

Encoding scan lines individually has advantages when an application needs to use only part of an image. Let's say that an image contains 512 scan lines, and we need to display only lines 100 to 110. If we did not know where the scan lines started and ended in the encoded image data, our application would have to decode lines 1 through 99 of the image before finding the ten lines it needed. Of course, if the transitions between scan lines were marked with some sort of easily recognizable delimiting marker, the application could simply read through the encoded data, counting markers until it came to the lines it needed. But this approach would be a rather inefficient one.

Another option for locating the starting point of any particular scan line in a block of encoded data is to construct a scan-line table. A scan-line table usually contains one element for every scan line in the image, and each element holds the offset value of its corresponding scan line. To find the first RLE packet of scan line 10, all a decoder needs to do is seek to the offset position value stored in the tenth element of the scan-line lookup table. A scan-line table could also hold the number of bytes used to encode each scan line. Using this method, to find the first RLE packet of scan line 10, your decoder would add together the values of the first nine elements of the scan-line table; the first packet for scan line 10 would start at this byte offset from the beginning of the RLE-encoded image data.
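Either flavor of scan-line table makes random access trivial. The C sketch below is illustrative only; the table layouts and names are our own assumptions, not taken from any particular format.

#include <stdio.h>

/* Table of absolute file offsets: seek straight to the line. */
int SeekToScanLine(FILE *fp, const long *offsetTable, int line)
{
    return fseek(fp, offsetTable[line], SEEK_SET);
}

/* Table of per-line encoded byte counts: sum the sizes of all
   preceding lines to compute the offset from the data start. */
int SeekToScanLineBySize(FILE *fp, long dataStart,
                         const long *sizeTable, int line)
{
    long offset = dataStart;
    int i;
    for (i = 0; i < line; i++)
        offset += sizeTable[i];
    return fseek(fp, offset, SEEK_SET);
}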


Bit-, Byte-, and Pixel-level RLE Schemes

The basic flow of all RLE algorithms is the same, as illustrated in Figure 9-3. The parts of run-length encoding algorithms that differ are the decisions that are made based on the type of data being decoded (such as the length of data runs). RLE schemes used to encode bitmap graphics are usually divided into classes by the type of atomic (that is, most fundamental) elements that they encode. The three classes used by most graphics file formats are bit-, byte-, and pixel-level RLE.

Bit-level RLE schemes encode runs of multiple bits in a scan line and ignore byte and word boundaries. Only monochrome (black and white), 1-bit images contain a sufficient number of bit runs to make this class of RLE encoding efficient. A typical bit-level RLE scheme encodes runs of one to 128 bits in length in a single-byte packet. The seven least significant bits contain the run count minus one, and the most significant bit contains the value of the bit run, either 0 or 1 (shown in Figure 9-4, a). A run longer than 128 pixels is split across several RLE-encoded packets.

Byte-level RLE schemes encode runs of identical byte values, ignoring individual bits and word boundaries within a scan line. The most common byte-level RLE scheme encodes runs of bytes into 2-byte packets. The first byte contains the run count of 0 to 255, and the second byte contains the value of the byte run. It is also common to supplement the 2-byte encoding scheme with the ability to store literal, unencoded runs of bytes within the encoded data stream as well.

In such a scheme, the seven least significant bits of the first byte hold the run count minus one, and the most significant bit of the first byte is the indicator of the type of run that follows the run count byte (shown in Figure 9-4, b). If the most significant bit is set to 1, it denotes an encoded run (shown in Figure 9-4, c). Encoded runs are decoded by reading the run value and repeating it the number of times indicated by the run count. If the most significant bit is set to 0, a literal run is indicated, meaning that the next run count bytes are read literally from the encoded image data (shown in Figure 9-4, d). The run count byte then holds a value in the range of 0 to 127 (the run count minus one). Byte-level RLE schemes are good for image data that is stored as one byte per pixel.
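In C, decoding this kind of packet stream reduces to testing the most significant bit of each run-count byte. The following is a sketch of the scheme just described, not the decoder of any particular file format:

/* Decode byte-level RLE packets. The high bit of the first
   byte selects the packet type: 1 = encoded run, 0 = literal
   run. The low seven bits hold the run count minus one.
   Returns the number of decoded bytes written to out. */
int RleDecodePackets(const unsigned char *in, int inLen, unsigned char *out)
{
    int i = 0, outLen = 0, n;
    while (i < inLen)
    {
        int count = (in[i] & 0x7F) + 1;
        if (in[i] & 0x80)                     /* encoded run  */
        {
            for (n = 0; n < count; n++)
                out[outLen++] = in[i + 1];
            i += 2;
        }
        else                                  /* literal run  */
        {
            for (n = 0; n < count; n++)
                out[outLen++] = in[i + 1 + n];
            i += 1 + count;
        }
    }
    return outLen;
}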


[Figure 9-3: Basic run-length encoding flow]

Pixel-level RLE schemes are used when two or more consecutive bytes of image data are used to store single pixel values. At the pixel level, bits are ignored, and bytes are counted only to identify each pixel value. Encoded packet sizes vary depending upon the size of the pixel values being encoded. The number of bits or bytes per pixel is stored in the image file header. A run of image data stored as 3-byte pixel values encodes to a 4-byte packet, with one run-count byte followed by three run-value bytes (shown in Figure 9-4, e). The encoding method remains the same as with the byte-oriented RLE. It is also possible to employ literal pixel-run encoding by using the most significant bit of the run count, as in the byte-level RLE scheme. In pixel-level RLE schemes, remember that the run count is the number of pixels, not the number of bytes, in the run.

Earlier in this section, we examined a situation where the string "Xtmprsqzntwlfb" actually doubled in size when compressed using a conventional RLE method. Each 1-character run in the string became two characters in size. How can we avoid this negative compression and still use RLE? Normally, an RLE method must somehow analyze the uncompressed data stream to determine whether to use a literal pixel run. A stream of data would need to contain many 1- and 2-pixel runs to make using a literal run efficient, by encoding all of the runs into a single packet. However, there is another method that allows literal runs of pixels to be added to an encoded data stream without being encapsulated into packets.

Consider an RLE scheme that uses three bytes, rather than two, to represent a run (shown in Figure 9-5). The first byte is a flag value indicating that the following two bytes are part of an encoded packet. The second byte is the count value, and the third byte is the run value. When encoding, if a 1-, 2-, or 3-byte character run is encountered, the character values are written directly to the compressed data stream. Because no additional characters are written, no overhead is incurred.

When decoding, a character is read; if the character is a flag value, the run count and run value are read, the run is expanded, and the result is written to the uncompressed data stream. If the character read is not a flag value, it is written directly to the uncompressed data stream. There are two potential drawbacks to this method (a decoder sketch follows the list):

• The minimum useful run-length size is increased from three characters to four. This could affect compression efficiency with some types of data.

• If the unencoded data stream contains a character value equal to the flag value, it must be compressed into a 3-byte encoded packet as a run length of one. This prevents erroneous flag values from occurring in the compressed data stream. If many of these flag-value characters are present, poor compression will result. The RLE algorithm must therefore use a flag value that rarely occurs in the uncompressed data stream.
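Decoding the 3-byte scheme is straightforward. This C sketch uses 255 as the flag value, as in Figure 9-5; the flag value and the names are for illustration only:

#define RLE_FLAG 0xFF  /* assumed flag value, as in Figure 9-5 */

/* Decode a flag-based stream: a flag byte introduces a
   (flag, count, value) packet; any other byte is a literal. */
int FlagRleDecode(const unsigned char *in, int inLen, unsigned char *out)
{
    int i = 0, outLen = 0, n;
    while (i < inLen)
    {
        if (in[i] == RLE_FLAG)
        {
            int count = in[i + 1];            /* run count    */
            for (n = 0; n < count; n++)
                out[outLen++] = in[i + 2];    /* run value    */
            i += 3;
        }
        else
            out[outLen++] = in[i++];          /* literal byte */
    }
    return outLen;
}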


[Figure 9-4: Bit-, byte-, and pixel-level RLE schemes: (a) bit-level RLE packet; (b) byte-level RLE packet; (c) byte-level encoded RLE packet; (d) byte-level literal RLE packet; (e) pixel-level RLE packet]


[Figure 9-5: RLE scheme with three bytes: an encoded line with flag value 255, containing runs of 28 pixels of value 53, 13 pixels of value 212, single pixels of values 37, 53, 12, and 12, and 4 pixels of value 113]

Vertical Replication Packets

Some RLE schemes use other types of encoding packets to increase compression efficiency. One of the most useful of these packets is the repeat scan line packet, also known as the vertical replication packet. This packet does not store any real scan-line data; instead, it just indicates a repeat of the previous scan line.

Here's an example of how this works. Assume that you have an image containing a scan line 640 bytes wide and that all the pixels in the scan line are the same color. It will require 10 bytes to run-length encode it, assuming that up to 128 bytes can be encoded per packet and that each packet is two bytes in size. Let's also assume that the first 100 scan lines of this image are all the same color. At 10 bytes per scan line, that would produce 1000 bytes of run-length encoded data. If we instead used a vertical replication packet that was only one byte in size (possibly a run-length packet with a run count of 0), we would simply run-length encode the first scan line (10 bytes) and follow it with 99 vertical replication packets (99 bytes). The resulting run-length encoded data would then be only 109 bytes in size.

If the vertical replication packet contains a count byte of the number of scan lines to repeat, we would need only one packet with a count value of 99. The resulting 10 bytes of scan-line data packets and two bytes of vertical replication packets would encode the first 100 scan lines of the image, containing 64,000 bytes, as only 12 bytes: a considerable savings. Figure 9-6 illustrates 1- and 2-byte vertical replication packets.

[Figure 9-6: RLE schemes with 1- and 2-byte vertical replication packets]
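In a decoder, handling a vertical replication packet is simply a matter of copying the previously decoded scan line. A sketch follows, assuming the decoder keeps an array of row pointers; the function and parameter names are our own:

#include <string.h>

/* Repeat the decoded scan line at index prevLine into the
   next repeatCount rows of the output bitmap. */
void ReplicateScanLine(unsigned char **rows, int bytesPerLine,
                       int prevLine, int repeatCount)
{
    int i;
    for (i = 1; i <= repeatCount; i++)
        memcpy(rows[prevLine + i], rows[prevLine], bytesPerLine);
}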

Unfortunately, definitions of vertical replication packets are application dependent. At least two common formats, WordPerfect Graphics Metafile (WPG) and GEM Raster (IMG), use repeat scan line packets to enhance data compression performance. WPG uses a simple 2-byte packet scheme, as previously described: if the first byte of an RLE packet is zero, it is a vertical replication packet, and the byte that follows indicates the number of times to repeat the previous scan line. The GEM Raster format is more complicated: the byte sequence 00h 00h FFh must appear at the beginning of an encoded scan line to indicate a vertical replication packet, and the byte that follows this sequence is the number of times to repeat the previous scan line minus one.

NOTE

Many of the concepts we have covered in this section are not limited to RLE. All bitmap compression algorithms need to consider the concepts of cross-coding, sequential processing, efficient data encoding based on the data being encoded, and ways to detect and avoid negative compression.


For Further Information About RLE

Most books on data compression have information on run-length encoding algorithms. The following references all contain information on RLE:

Held, Gilbert. Data Compression: Techniques and Applications, Hardware and Software Considerations, second edition, John Wiley & Sons, New York, NY, 1987.

Lynch, Thomas D. Data Compression Techniques and Applications, Lifetime Learning Publications, Belmont, CA, 1985.

Nelson, Mark R. The Data Compression Book, M&T Books, Redwood City, CA, 1991.

Storer, James A. Data Compression: Methods and Theory, Computer Science Press, Rockville, MD, 1988.

Lempel-Ziv-Welch (LZW) Compression

One of the most common algorithms used in computer graphics is the Lempel-Ziv-Welch, or LZW, compression scheme. This lossless method of data compression is found in several image file formats, such as GIF and TIFF, and is also part of the V.42bis modem compression standard and PostScript Level 2.

In 1977, Abraham Lempel and Jakob Ziv created the first of what we now call the LZ family of substitutional compressors. The LZ77 compression algorithms are commonly found in text compression and archiving programs, such as compress, zoo, lha, pkzip, and arj. The LZ78 compression algorithms are more commonly used to compress binary data, such as bitmaps. In 1984, while working for Unisys, Terry Welch modified the LZ78 compressor for implementation in high-performance disk controllers. The result was the LZW algorithm that is commonly found today.

LZW is a general compression algorithm capable of working on almost any type of data. It is generally fast in both compressing and decompressing data and does not require the use of floating-point operations. Also, because LZW writes compressed data as bytes and not as words, LZW-encoded output can be identical on both big-endian and little-endian systems, although you may still encounter bit order and fill order problems. (See Chapter 6, Platform Dependencies, for a discussion of such systems.)

LZW is referred to as a substitutional or dictionary-based encoding algorithm. The algorithm builds a data dictionary (also called a translation table or string table) of data occurring in an uncompressed data stream. Patterns of data (substrings) are identified in the data stream and are matched to entries in the dictionary.


If the substring is not present in the dictionary, a code phrase is created based on the data content of the substring, and it is stored in the dictionary. The phrase is then written to the compressed output stream. When a reoccurrence of a substring is identified in the data, the phrase of the substring already stored in the dictionary is written to the output. Because the phrase value has a physical size that is smaller than the substring it represents, data compression is achieved.

Decoding LZW data is the reverse of encoding. The decompressor reads a code from the encoded data stream and adds the code to the data dictionary if it is not already there. The code is then translated into the string it represents and is written to the uncompressed output stream. LZW goes beyond most dictionary-based compressors in that it is not necessary to preserve the dictionary to decode the LZW data stream. This can save quite a bit of space when storing the LZW-encoded data.

When compressing text files, LZW initializes the first 256 entries of the dictionary with the 8-bit ASCII character set (values 00h through FFh) as phrases. These phrases represent all possible single-byte values that may occur in the data stream, and all substrings are in turn built from these phrases. Because both LZW encoders and decoders begin with dictionaries initialized to these values, a decoder need not have the original dictionary and instead will build a duplicate dictionary as it decodes.

TIFF, among other file formats, applies the same method for graphics files. In TIFF, the pixel data is packed into bytes before being presented to LZW, so an LZW source byte might be a pixel value, part of a pixel value, or several pixel values, depending on the image's bit depth and number of color channels. GIF requires each LZW input symbol to be a pixel value. Because GIF allows 1- to 8-bit deep images, there are between two and 256 LZW input symbols in GIF, and the LZW dictionary is initialized accordingly. It is irrelevant how the pixels might have been packed into storage originally; LZW will deal with them as a sequence of symbols.

The TIFF approach does not work very well for odd-size pixels, because packing the pixels into bytes creates byte sequences that do not match the original pixel sequences, so any patterns in the pixels are obscured. If pixel boundaries and byte boundaries agree (e.g., two 4-bit pixels per byte, or one 16-bit pixel every two bytes), then TIFF's method works well. The GIF approach works better for odd-size bit depths, but it is difficult to extend it to more than eight bits per pixel because the LZW dictionary must become very large to achieve useful compression on large input alphabets.
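The following C sketch shows the core LZW encoding loop in its simplest possible form: a fixed 12-bit code space, a linear dictionary search, and no Clear or EOF codes or variable-width output, all of which a real GIF or TIFF codec must add. EmitCode() is a placeholder for the output stage; the names are our own.

#define MAX_CODES 4096

static int prefix[MAX_CODES];           /* code of the prefix phrase   */
static unsigned char append[MAX_CODES]; /* byte appended to the prefix */
static int nextCode;

extern void EmitCode(int code);         /* assumed output routine */

/* Return the code for phrase (pfx + c), or -1 if not yet defined. */
static int FindString(int pfx, unsigned char c)
{
    int i;
    for (i = 256; i < nextCode; i++)
        if (prefix[i] == pfx && append[i] == c)
            return i;
    return -1;
}

void LzwEncode(const unsigned char *in, int inLen)
{
    int string, match, i;
    nextCode = 256;                 /* codes 0-255 are the single-byte roots */
    string = in[0];
    for (i = 1; i < inLen; i++)
    {
        match = FindString(string, in[i]);
        if (match >= 0)
            string = match;         /* phrase known: extend it           */
        else
        {
            EmitCode(string);       /* phrase new: output its prefix,    */
            if (nextCode < MAX_CODES)
            {                       /* then add the new phrase           */
                prefix[nextCode] = string;
                append[nextCode] = in[i];
                nextCode++;
            }
            string = in[i];         /* restart with the current byte     */
        }
    }
    EmitCode(string);               /* flush the final phrase            */
}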


Noise Removal and Differencing

LZW does a very good job of compressing image data with a wide variety of pixel depths. 1-, 8-, and 24-bit images all compress at least as well as they do using RLE encoding schemes. Noisy images, however, can degrade the compression effectiveness of LZW significantly. Removing noise from an image, usually by zeroing out one or two of the least significant bit planes of the image, is recommended to increase compression efficiency. In other words, if your data does not compress well in its present form, transform it to a different form that does compress well.

One method that is used to make data more "compressible" by reducing the amount of extraneous information in an image is called differencing. The idea is that unrelated data may be easily converted by an invertible transform into a form that can be more efficiently compressed by an encoding algorithm. Differencing takes advantage of the fact that adjacent pixels in many continuous-tone images vary only slightly in value. If we replace the value of a pixel with the difference between the pixel and the adjacent pixel, we will reduce the amount of information stored, without losing any data.

With 1-bit monochrome and 8-bit gray-scale images, the pixel values themselves are differenced. RGB pixels must have each of their color channels differenced separately (red from red, green from green, and blue from blue), rather than differencing the RGB pixel values as a whole.

Differencing is usually applied in a horizontal plane across scan lines. In the following code example, the algorithm starts at the last pixel on the first scan line of the bitmap. The difference between the last two pixels in the line is calculated, and the last pixel is set to this value. The algorithm then moves to the next-to-last pixel and continues up the scan line and down the bitmap until finished, as shown in the following pseudo-code:

/* Horizontally difference a bitmap */
for (Line = 0; Line < NumberOfLines; Line++)
    for (Pixel = NumberOfPixelsPerLine - 1; Pixel >= 1; Pixel--)
        Bitmap[Line][Pixel] -= Bitmap[Line][Pixel-1];

Vertical and 2-dimensional differencing may also be accomplished in the same way. The type of differencing used will have varying effectiveness depending upon the content of the image. Regardless of the method used, though, differenced images compress much more efficiently using LZW.
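Because the transform is invertible, a decoder simply accumulates the differences back into absolute pixel values after decompression. A minimal sketch in the same pseudo-code style:

/* Reverse horizontal differencing: accumulate from left to right */
for (Line = 0; Line < NumberOfLines; Line++)
    for (Pixel = 1; Pixel < NumberOfPixelsPerLine; Pixel++)
        Bitmap[Line][Pixel] += Bitmap[Line][Pixel-1];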


Variations on the LZW Algorithm

Several variations of the LZW algorithm increase its efficiency in some applications. One common variation uses index pointers that vary in length, usually starting at nine bits and growing upward to 12 or 13 bits. When all index pointers of a particular length have been used up, another bit is added to increase precision. Another popular variation of LZW compressors involves constantly monitoring the compression process for any drop in efficiency. If a drop is noted, the least recently used (LRU) phrases in the dictionary are discarded to make room for new phrases, or the entire dictionary is discarded and rebuilt.

The LZMW variant on the LZW compression method builds phrases by concatenating two phrases together, rather than by concatenating the current phrase and the next character of data. This causes a quicker buildup of longer strings at the cost of a more complex data dictionary.

LZW is a simple algorithm that is difficult to implement efficiently. Decisions such as when to discard phrases from the dictionary, and how to search the data dictionary during encoding (using a hashing or tree-based scheme), must be made carefully to improve efficiency and speed.

Variations on the standard LZW algorithm are more common than a programmer may realize. For example, the TIFF and GIF formats use the standard features of the LZW algorithm, such as a Clear code (the indication to discard the string table), an EOF code (End Of File), and a 12-bit limit on encoded symbol width. However, GIF treats each pixel value as a separate input symbol. Therefore, the size of the input alphabet, the starting compressed-symbol width, and the values of the Clear and EOF codes will vary depending on the pixel depth of the image being compressed.

GIF also stores compressed codes with the least significant bit first, regardless of the native bit order of the machine on which the algorithm is implemented. When two codes appear in the same byte, the first code is in the lower bits. When a code crosses a byte boundary, its least significant bits appear in the earlier bytes.

TIFF's LZW variation always reads 8-bit input symbols from the uncompressed data regardless of the pixel depth. Each symbol may therefore contain one pixel, more than one pixel, or only part of a pixel, depending upon the depth of the pixels in the image. TIFF always stores compressed codes with the most significant bit first, the opposite of GIF. (Don't confuse the byte-order indicator in the TIFF header, or the value of the FillOrder tag, with the bit order of the LZW-compressed data. In a TIFF file, LZW-compressed data is always stored most significant bit first.)

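To see what the bit-order difference amounts to in code, here is a hedged C sketch of packing one variable-width LZW code into an output stream in GIF's least-significant-bit-first order; a TIFF-style packer would accumulate bits at the most significant end instead. The names and the streaming interface are our own assumptions.

static unsigned long bitBuffer = 0;  /* pending bits, LSB first */
static int bitsInBuffer = 0;

/* Append one code of codeWidth bits; flush whole bytes to *out. */
void PutCodeLsbFirst(int code, int codeWidth, unsigned char **out)
{
    bitBuffer |= (unsigned long)code << bitsInBuffer;
    bitsInBuffer += codeWidth;
    while (bitsInBuffer >= 8)
    {
        *(*out)++ = (unsigned char)(bitBuffer & 0xFF);
        bitBuffer >>= 8;
        bitsInBuffer -= 8;
    }
}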

TIFF LZW also contains a bit of a kludge. Compressed code widths are required to be incremented one code sooner than is really necessary. For example, the compressor changes from 9-bit to 10-bit codes after adding code 511 to its table rather than waiting until code 512 is added, thus wasting one bit.

We understand that the explanation for this practice is that the LZW implementation supplied by Aldus in Revision 5.0 of the TIFF Developer's Toolkit contained this bug, although the TIFF 5.0 specification itself specified the LZW algorithm correctly. By the time the problem was identified (by Sam Leffler, the head of the TIFF Advisory Committee), too many applications existed that used the erroneous implementation, and there was no way to identify incorrectly encoded LZW data. The solution was simply to change the TIFF specification to require this "variation" in the TIFF algorithm, rather than to change the code, break all existing TIFF LZW applications, and regard all previously created TIFF 5.0 LZW images as incorrect and useless.

Patent Restrictions

The LZW compression method is patented both by Unisys (4,558,302) and by IBM (4,814,746). Unisys requires that all hardware developers who wish to use LZW pay a one-time licensing fee for its use. How Unisys or IBM enforces the use of LZW in software-only applications is not clear. Further information on this subject can be obtained from the following source:

Welch Licensing Department
Office of the General Counsel
M/S C1SW19
Unisys Corporation
Blue Bell, PA 19424 USA

The LZ77 algorithm is subject to a number of patents. Several of these patents overlap, or even duplicate, other patents pertaining to LZ77. Here is some history in case you need it:

• The original LZ77 algorithm was patented (4,464,650) in 1981 by Lempel, Ziv, Cohen, and Eastman.

• A patent (4,906,991) was obtained by E.R. Fiala and D.H. Greene in 1990 for a variation of LZ77 using a tree data dictionary structure.

• IBM obtained a patent (5,001,478) in 1991 for a variation combining the LZ77 history buffer with the LZ78 lexicon buffer.

• Phil Katz (of pkzip fame) owns a 1991 patent (5,051,745) on LZ77 using sorted hash tables when the tables are smaller than the window size.


• Various patents for LZ77 using hashing are also held by Robert Jung (5,140,321), Chambers (5,155,484), and Stac Inc. (5,016,009).

• Stac also owns the patent (4,701,745) for the LZ77 variant LZRW1. Both of the Stac patents are the basis of the lawsuit between Stac and Microsoft over the file compression feature found in MS-DOS 6.0.

For Further Information About LZW

Many books on data compression contain information on the LZ and LZW compression algorithms. The first reference below is the definitive source for a very general explanation of the LZW algorithm itself and does not focus specifically on bitmap image data:

Welch, T. A. "A Technique for High Performance Data Compression," IEEE Computer, Volume 17, Number 6, June 1984.

The TIFF specification (both revisions 5.0 and 6.0) contains an explanation of the TIFF variation on LZW compression. Refer to the "For Further Information" section of the TIFF article in Part II of this book for information, and see the CD-ROM for the specification itself. The following articles and manuscripts are also specifically related to LZW:

Bell, Timothy C. "Better OPM/L Text Compression," IEEE Transactions on Communications, Volume 34, Number 12 (December 1986): 1176-1182.

Bernstein, Daniel J. Y coding, Draft 4b, March 19, 1991. Manuscript, part of the Yabba Y Coding package.

Blackstock, Steve. LZW and GIF Explained. Manuscript in the public domain, 1987.

Montgomery, Bob. LZW Compression Used to Encode/Decode a GIF File. Manuscript in the public domain, 1988.

Nelson, Mark R. "LZW Data Compression," Dr. Dobb's Journal, 156, October 1989, pp. 29-36.

Phillips, Dwayne. "LZW Data Compression," The Computer Applications Journal, Circuit Cellar Ink, 27, June/July 1992, pp. 36-48.

Thomborson, Clark. "The V.42bis Standard for Data-Compressing Modems," IEEE Micro, October 1992, pp. 41-53.

Ziv, J. and A. Lempel. "A Universal Algorithm for Sequential Data Compression," IEEE Transactions on Information Theory, 23(3), 1977, pp. 337-343.


Ziv, J. and A. Lempel. "Compression of Individual Sequences via Variable-Rate Coding," IEEE Transactions on Information Theory, 24(5), September 1978.

CCITT (Huffman) Encoding

Many facsimile and document imaging file formats support a form of lossless data compression often described as CCITT encoding. The CCITT (International Telegraph and Telephone Consultative Committee) is a standards organization that has developed a series of communications protocols for the facsimile transmission of black-and-white images over telephone lines and data networks. These protocols are known officially as the CCITT T.4 and T.6 standards but are more commonly referred to as CCITT Group 3 and Group 4 compression, respectively.

Sometimes, CCITT encoding is referred to, not entirely accurately, as Huffman encoding. Huffman encoding is a simple compression algorithm introduced by David Huffman in 1952. CCITT 1-dimensional encoding, described in a subsection below, is a specific type of Huffman encoding. The other types of CCITT encodings are not, however, implementations of the Huffman scheme.

Group 3 and Group 4 encodings are compression algorithms that are specifically designed for encoding 1-bit image data. Many document and FAX file formats support Group 3 compression, and several, including TIFF, also support Group 4.

Group 3 encoding was designed specifically for bilevel, black-and-white image data telecommunications. All modern FAX machines and FAX modems support Group 3 facsimile transmissions. Group 3 encoding and decoding is fast, maintains a good compression ratio for a wide variety of document data, and contains information that aids a Group 3 decoder in detecting and correcting errors without special hardware.

Group 4 is a more efficient form of bilevel compression that has almost entirely replaced the use of Group 3 in many conventional document image storage systems. (An exception is facsimile document storage systems, where original Group 3 images are required to be stored in an unaltered state.) Group 4 encoded data is approximately half the size of 1-dimensional Group 3 encoded data. Although Group 4 is fairly difficult to implement efficiently, it encodes at least as fast as Group 3 and in some implementations decodes even faster. Group 4 was designed for use on data networks, however, so it does not contain the synchronization codes used for error detection that Group 3 does, making it a poor choice for an image transfer protocol over noisy lines.


Group 4 is sometimes confused with the IBM MMR (Modified Modified READ) compression method. In fact, Group 4 and MMR are almost exactly the same algorithm and achieve almost identical compression results. IBM released MMR in 1979 with the introduction of its Scanmaster product, before Group 4 was standardized. MMR became IBM's own document compression standard and is still used in many IBM imaging systems today.

Document-imaging systems that store large amounts of facsimile data have adopted these CCITT compression schemes to conserve disk space. CCITT-encoded data can be decompressed quickly for printing or viewing (assuming that enough memory and CPU resources are available). The same data can also be transmitted using modem or facsimile protocol technology without needing to be encoded first.

The CCITT algorithms are non-adaptive. That is, they do not adjust the encoding algorithm to encode each bitmap with optimal efficiency. They use a fixed table of code values that were selected according to a reference set of documents containing both text and graphics, which were considered to be representative of documents that would be transmitted by facsimile.

Group 3 normally achieves a compression ratio of 5:1 to 8:1 on a standard 200-dpi (204x196 dpi) A4-sized document. Group 4 results are roughly twice as efficient as Group 3, achieving compression ratios upwards of 15:1 with the same document. Claims that the CCITT algorithms are capable of far better compression on standard business documents are exaggerated, largely by hardware vendors.

Because the CCITT algorithms have been optimized for typed and handwritten documents, it stands to reason that images radically different in composition will not compress very well. This is all too true. Bilevel bitmaps that contain a high frequency of short runs, as are typically found in digitally halftoned continuous-tone images, do not compress as well using the CCITT algorithms. Such images will usually result in a compression ratio of 3:1 or even lower, and many will actually compress to a size larger than the original.

The CCITT actually defines three algorithms for the encoding of bilevel image data:

• Group 3 One-Dimensional (G31D)

• Group 3 Two-Dimensional (G32D)

• Group 4 Two-Dimensional (G42D)

G31D is the simplest of the algorithms and the easiest to implement. For this reason, it is discussed in its entirety in the first subsection below. G32D and G42D are much more complex in their design and operation and are described only in general terms below.

The Group 3 and Group 4 algorithms are standards and therefore produce the same compression results for everybody. If you have heard any claims made to the contrary, it is for one of these reasons:

• Non-CCITT test images are being used as benchmarks.

• Proprietary modifications have been made to the algorithm.

• Pre- or post-processing is being applied to the encoded image data.

• You have been listening to a misinformed salesperson.

Group 3 One-Dimensional (G31D)

Group 3 one-dimensional encoding (G31D) is a variation of the Huffman keyed compression scheme. A bilevel image is composed of a series of black-and-white 1-bit pixel runs of various lengths (1 = black and 0 = white). A Group 3 encoder determines the length of a pixel run in a scan line and outputs a variable-length binary code word representing the length and color of the run. Because the code word output is shorter than the input pixel run, data compression is achieved.

The run-length code words are taken from a predefined table of values representing runs of black or white pixels. This table is part of the T.4 specification and is used to encode and decode all Group 3 data. The sizes of the code words were originally determined statistically by the CCITT, based on the average frequency of black-and-white runs occurring in typical typed and handwritten documents. The documents included line art and were written in several different languages. Run lengths that occur more frequently are assigned smaller code words, while run lengths that occur less frequently are assigned larger code words.

In printed and handwritten documents, short runs occur more frequently than long runs. Two- to 4-pixel black runs are the most frequent in occurrence. The maximum size of a run length is bounded by the maximum width of a Group 3 scan line.

Run lengths are represented by two types of code words: makeup and terminating. An encoded pixel run is made up of zero or more makeup code words and a terminating code word. Terminating code words represent shorter runs, and makeup codes represent longer runs. There are separate terminating and makeup code words for both black and white runs.

Pixel runs with a length of 0 to 63 are encoded using a single terminating code. Runs of 64 to 2623 pixels are encoded by a single makeup code and a terminating code. Run lengths greater than 2623 pixels are encoded using one or more makeup codes and a terminating code. The run length is the sum of the length values represented by each code word. Here are some examples of several different encoded runs:

• A run of 20 black pixels would be represented by the terminating code for a black run length of 20. This reduces a 20-bit run to the size of an 11-bit code word, a compression ratio of nearly 2:1. This is illustrated in Figure 9-7, a.

• A white run of 100 pixels would be encoded using the makeup code for a white run length of 64 pixels followed by the terminating code for a white run length of 36 pixels (64 + 36 = 100). This encoding reduces 100 bits to 13 bits, a compression ratio of over 7:1. This is illustrated in Figure 9-7, b.

• A run of 8800 black pixels would be encoded as three makeup codes of 2560 black pixels each (7680 pixels), a makeup code of 1088 black pixels, followed by the terminating code for 32 black pixels (2560 + 2560 + 2560 + 1088 + 32 = 8800). In this case, we will have encoded 8800 run-length bits into five code words with a total length of 61 bits, for an approximate compression ratio of 144:1. This is illustrated in Figure 9-7, c.

The use of run lengths encoded with multiple makeup codes has become a de facto extension to Group 3, because such encoders are necessary for images with higher resolutions. And while most Group 3 decoders do support this extension, do not expect them to do so in all cases.

Decoding Group 3 data requires methods different from those used by most other compression schemes. Because each code word varies in length, the encoded data stream must be read one bit at a time until a code word is recognized. This can be a slow and tedious process at best. To make this job easier, a state table can be used to process the encoded data one byte at a time. This is the quickest and most efficient way to implement a CCITT decoder.

All scan lines are encoded to begin with a white run-length code word (most document image scan lines begin with white run lengths). If an actual scan line begins with a black run, a zero-length white run-length code word will be prepended to the scan line. A decoder keeps track of the color of the run it is decoding; comparing the current bit pattern to values in the opposite color's bit table is wasteful. That is, if a black run is being decoded, there is no reason to check the table for white run-length codes.


[Figure 9-7: CCITT Group 3 encoding: (a) 20-pixel black run, one terminating code; (b) 100-pixel white run, one makeup code and one terminating code; (c) 8800-pixel black run, four makeup codes and one terminating code]
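The arithmetic behind the Figure 9-7 examples can be expressed as a short routine that splits a run into makeup and terminating codes. In this C sketch, EmitMakeup() and EmitTerminating() stand in for lookups into the T.4 code tables, which are not reproduced here; the splitting logic is our own rendering, consistent with the examples above.

extern void EmitMakeup(int length, int color);      /* 64, 128, ... 2560 */
extern void EmitTerminating(int length, int color); /* 0 through 63      */

/* Split a pixel run into G31D makeup and terminating codes. */
void EncodeRun(int runLength, int color)
{
    while (runLength > 2560 + 63)      /* need extra 2560 makeup codes  */
    {
        EmitMakeup(2560, color);
        runLength -= 2560;
    }
    if (runLength >= 64)               /* largest multiple of 64 <= run */
    {
        int makeup = (runLength / 64) * 64;
        EmitMakeup(makeup, color);
        runLength -= makeup;
    }
    EmitTerminating(runLength, color); /* remaining 0-63 pixels */
}

Called with EncodeRun(8800, BLACK), this emits three makeup codes of 2560, one of 1088, and a terminating code of 32, exactly as in the example above.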

Several special code words are also defined in a Group 3-encoded data stream. These codes are used to provide synchronization in the event that a phone transmission experiences a burst of noise. By recognizing such a code, a CCITT decoder may identify transmission errors and attempt to apply a recovery algorithm that approximates the lost data.

The EOL code is a 12-bit code word that begins each line in a Group 3 transmission. This unique code word is used to detect the start and end of a scan line during the image transmission. If a burst of noise temporarily corrupts the signal, a Group 3 decoder throws away the unrecognized data it receives until it encounters an EOL code. The decoder then resumes receiving the transmission as normal, assuming that the data following the EOL is the beginning of the next scan line. The decoder might also replace the bad line with a predefined set of data, such as a white scan line.

A decoder also uses EOL codes for several purposes. It uses them to keep track of the width of a decoded scan line (an incorrect scan-line width may be an error, or it may be an indication to pad with white pixels to the EOL). In addition, it uses EOL codes to keep track of the number of scan lines in an image, in order to detect a short image. If it finds one, it pads the remaining length with scan lines of all white pixels. A Group 3 EOL code is illustrated in Figure 9-8.


[Figure 9-8: CCITT Group 3 encoding (EOL code)]

Most FAX machines transmit data of an "unlimited length," in which case the decoder cannot detect how long the image is supposed to be. Also, it is faster not to transmit the all-white space at the end of a page, and many FAX machines stop when they detect that the rest of a page is all white; they expect the receiver to do white padding to the end of the negotiated page size. When Group 3 data is encapsulated in an image file, information regarding the length and width of the image is typically stored in the image file header and is read by the decoder prior to decoding.

Group 3 message transmissions are terminated by a Return To Control (RTC) code that is appended to the end of every Group 3 data stream and is used to indicate the end of the message transmission. An RTC code word is simply six EOL codes occurring consecutively. The RTC is actually part of the facsimile protocol and not part of the encoded message data. It is used to signal the receiver that it should drop the high-speed message carrier and listen on the low-speed carrier for the post-page command. A Group 3 RTC code is illustrated in Figure 9-9.

[Figure 9-9: CCITT Group 3 encoding (RTC code)]

A Fill (FILL) is not actually a code word, but is instead a run of one or more zero bits that occurs between the encoded scan-line data and the EOL code (but never within the encoded scan line itself). Fill bits are used to pad out the length of an encoded scan line to increase the transmission time of the line to a required length. Fill bits may also be used to pad the RTC code word out to end on a byte boundary.


TIFF Compression Type 2

The TIFF compression Type 2 scheme (in which the compression tag value is equal to 2) is a variation of CCITT G31D encoding. The TIFF Type 3 and TIFF Type 4 compression methods follow exactly the CCITT Group 3 and Group 4 specifications, respectively. Type 2 compression, on the other hand, implements Group 3 encoding but does not use EOL or RTC code words. For this reason, TIFF Type 2 compression is also called "Group 3, No EOLs." Also, fill bits are never used except to pad out the last byte in a scan line to the next byte boundary.

These modifications were incorporated into the TIFF specification because EOL and RTC codes are not needed to read data stored on tape or disk. A typical letter-size image (1728x2200 pixels) would contain 26,484 bits (3310.5 bytes) of EOL and RTC information. When storing Group 3 data to a file, the following are not needed:

• The initial 12-bit EOL

• The 12 EOL bits per scan line

• The 72 RTC bits tacked onto the end of each image

Conventional Group 3 decoders cannot handle these modifications and either will refuse to read the TIFF Type 2-encoded data or will simply return a stream of decoded garbage. However, many decoders have been designed to accept these trivial Group 3 "modifications" and have no problems reading this type of data. Group 3 encoding is illustrated in Figure 9-10.

[Figure 9-10: Group 3 CCITT encoding (TIFF Compression Type 2)]


TIFF Class F

There is nearly one facsimile file format for every brand of computer FAX hardware and software made. Many compress the facsimile data using RLE (presumably so it will be quicker to display) or store it in its original Group 3 encoding. Perhaps the most widely used FAX file format is the unofficial TIFF Class F format. (See the TIFF article in Part II of this book for more information about TIFF Class F.) Even with the latest release of TIFF, revision 6.0, Class F has never been officially included in the standard, despite the wishes of the TIFF Advisory Council. The reason for this is that Aldus feels that supporting applications that require facsimile data storage and retrieval is outside the scope of TIFF. (TIFF was designed primarily with scanners and desktop publishing in mind.) This is too bad, considering that one of TIFF's main goals is to aid in promoting image data interchangeability between hardware platforms and software applications.

Group 3 Two-Dimensional (G32D)

Group 3 one-dimensional (G31D) encoding, which we've discussed above, encodes each scan line independently of the other scan lines. Only one run length at a time is considered during the encoding and decoding process. The data occurring before and after each run length is not important to the encoding step; only the data occurring in the present run is needed.

With two-dimensional encoding (G32D), on the other hand, the way a scan line is encoded may depend on the immediately preceding scan line's data. Many images have a high degree of vertical coherence (redundancy). By describing the differences between two scan lines, rather than describing the scan-line contents, 2-dimensional encoding achieves better compression.

The first pixel of each run length is called a changing element. Each changing element marks a color transition within a scan line (the point where a run of one color ends and a run of the next color begins). The position of each changing element in a scan line is described as being a certain number of pixels from a changing element in the current, coding line (horizontal coding) or in the preceding, reference line (vertical coding). The output codes used to describe the actual positional information are called Relative Element Address Designate (READ) codes. Shorter code words are used to describe color transitions that are less than four pixels away from each other on the coding line or the reference line. Longer code words are used to describe color transitions lying a greater distance from the current changing element.

Two-dimensional encoding is more efficient than 1-dimensional encoding because the data typically being compressed (typed or handwritten documents) contains a high amount of 2-dimensional coherence. Because a G32D-encoded scan line depends on the correctness of the preceding scan line, an error, such as a burst of line noise, can affect multiple 2-dimensionally encoded scan lines. If a transmission error corrupts a segment of encoded scan-line data, that line cannot be decoded. But, worse still, all scan lines occurring after it also decode improperly. To minimize the damage created by noise, G32D uses a variable called a K factor and 2-dimensionally encodes K-1 lines following each 1-dimensionally encoded line. If corruption of the data transmission occurs, only K-1 scan lines of data will be lost. The decoder will be able to resynchronize the decoding at the next available EOL code.

The typical value for K is 2 or 4. G32D data that is encoded with a K value of 4 appears as a series of blocks, each containing three lines of 2-dimensionally encoded scan-line data followed by one 1-dimensionally encoded scan line. The K variable is not normally used in decoding the G32D data. Instead, the EOL code is modified to indicate the algorithm used to encode the line following it. If a 1 bit is appended to the EOL code, the line following it is 1-dimensionally encoded; if a 0 bit is appended, the line following the EOL code is 2-dimensionally encoded. All other transmission code word markers (FILL and RTC) follow the same rules as in G31D encoding. K is needed in decoding only if regeneration of the previous 1-dimensionally encoded scan line is necessary for error recovery.
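To make the EOL mechanism concrete, here is a minimal sketch in C of how a decoder might locate the next EOL code (eleven 0 bits followed by a 1 bit) in a G32D stream and read the tag bit that follows it. The next_bit() bit-reader interface is hypothetical, not part of any particular CCITT implementation.

    /* Skip to the next EOL code (000000000001) and return the tag bit
     * that follows it: 1 means the next line is 1-dimensionally coded,
     * 0 means it is 2-dimensionally coded. Returns -1 at end of data. */
    extern int next_bit(void);   /* hypothetical: returns 0, 1, or -1 */

    int find_eol_tag(void)
    {
        int zeros = 0, bit;
        while ((bit = next_bit()) >= 0) {
            if (bit == 0) {
                zeros++;                /* count the current run of 0s */
            } else {
                if (zeros >= 11)        /* a 1 ending 11+ zeros: EOL */
                    return next_bit();  /* the tag bit follows the EOL */
                zeros = 0;              /* stray 1 bit; restart the count */
            }
        }
        return -1;                      /* no more data */
    }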

Group 4 Two-Dimensional (G42D)

Group 4 two-dimensional encoding (G42D) was developed from the G32D algorithm as a better 2-dimensional compression scheme—so much better, in fact, that Group 4 has almost completely replaced G32D in commercial use. Group 4 encoding is identical to G32D encoding except for a few modifications. Group 4 is basically the G32D algorithm with no EOL codes and a K variable set to infinity. Group 4 was designed specifically to encode data residing on disk drives and data networks. The built-in transmission error detection and correction found in Group 3 is therefore not needed by Group 4 data.

The first reference line in Group 4 encoding is an imaginary scan line containing all white pixels. (In G32D encoding, the first reference line is the first scan line of the image.) In Group 4 encoding, the RTC code word is replaced by an End of Facsimile Block (EOFB) code, which consists of two consecutive Group 3 EOL code words. Like the Group 3 RTC, the EOFB is part of the transmission protocol and not actually part of the image data. Also, Group 4-encoded image data may be padded with FILL bits after the EOFB to end on a byte boundary.

Group 4 encoding usually produces a compressed image roughly half the size of the same image compressed with G31D encoding. The main tradeoff is that Group 4 encoding is more complex and requires more time to perform. When implemented in hardware, however, the difference in execution speed between the Group 3 and Group 4 algorithms is not significant, which usually makes Group 4 the better choice in most imaging system implementations.

Tips for Designing CCITT Encoders and Decoders

Here are some general guidelines to follow if you are writing code to encode or decode data using the CCITT methods:

• Ignore bits occurring after the RTC or EOFB markers. These markers indicate the end of the image data, and all bits occurring after them can be considered filler.

• You must know the number of pixels in a scan line before decoding. Any row that decodes to fewer or more pixels than expected is normally considered corrupt, and further decoding of the image block (2-dimensional encoding only) should not be attempted. Some encoding schemes will produce short scan lines if the line contains all white pixels. The decoder is expected to recognize this and pad the entire line out as a single white run.

• Be aware that decoded scan-line widths will not always be a multiple of eight, and decoders should not expect byte-boundary padding to always occur. Robust decoders should be able to read non-byte-aligned data.

• If a decoder encounters an RTC or EOFB marker before the expected number of scan lines has been decoded, assume that the remaining scan lines are all white. If the expected number of scan lines has been decoded, but an RTC or EOFB has not been encountered, stop decoding. A decoder should then produce a warning that an unusual condition has been detected.

• Note that a well-designed CCITT decoder should be able to handle the typical color-sex and bit-sex problems associated with 1-bit data, as described below:

The CCITT defines a pixel value of 0 as white and a pixel value of 1 as black. Many bitmaps, however, may be stored using the opposite pixel values, in which case the decoder would produce a "negative" of the image (a color-sex problem). Different machine architectures also store the bits within a byte in different ways: some store the most significant bit first, and some store the least significant bit first. If bitmap data is read in the opposite bit order from the one in which it was stored, the image will appear fragmented and disrupted (a bit-sex problem). To prevent these problems, always design a CCITT decoder that is capable of reading data in either color-sex and either bit-sex scheme, to suit the requirements of the user.
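As an illustration of the last point, here is a minimal sketch of color-sex and bit-sex correction for packed 1-bit scan-line data. The function names and flag conventions are ours, not part of any standard library.

    #include <stddef.h>

    /* Reverse the bit order within one byte (a bit-sex fix):
     * MSB-first data becomes LSB-first, and vice versa. */
    static unsigned char reverse_bits(unsigned char b)
    {
        unsigned char r = 0;
        for (int i = 0; i < 8; i++) {
            r = (unsigned char)((r << 1) | (b & 1));
            b >>= 1;
        }
        return r;
    }

    /* Normalize a buffer of packed 1-bit pixels in place.
     * flip_bit_sex:   reverse the bit order within every byte
     * flip_color_sex: invert pixels (CCITT expects 0 = white, 1 = black) */
    void normalize_bilevel(unsigned char *buf, size_t len,
                           int flip_bit_sex, int flip_color_sex)
    {
        for (size_t i = 0; i < len; i++) {
            if (flip_bit_sex)
                buf[i] = reverse_bits(buf[i]);
            if (flip_color_sex)
                buf[i] = (unsigned char)~buf[i];
        }
    }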


For Further Information About CCITT

The original specifications for the CCITT Group 3 and Group 4 algorithms are in CCITT (1985) Volume VII, Fascicle VII.3, Recommendations T.4 and T.6:

"Standardization of Group 3 Facsimile Apparatus for Document Transmission," Recommendation T.4, Volume VII, Fascicle VII.3, Terminal Equipment and Protocols for Telematic Services, The International Telegraph and Telephone Consultative Committee (CCITT), Geneva, Switzerland, 1985, pp. 16-31.

"Facsimile Coding Schemes and Coding Control Functions for Group 4 Facsimile Apparatus," Recommendation T.6, Volume VII, Fascicle VII.3, Terminal Equipment and Protocols for Telematic Services, The International Telegraph and Telephone Consultative Committee (CCITT), Geneva, Switzerland, 1985, pp. 40-48.

The latest specification is CCITT (1992) Volume VII, Fascicle VII.3, Recommendations T.0 through T.63. Both the CCITT and ANSI documents may be obtained from the following source:

American National Standards Institute, Inc.
Attn: Sales
1430 Broadway
New York, NY 10018 USA
Voice: 212-642-4900

See also the following references. (For information on getting RFCs (Requests for Comments), send email to [email protected] with a subject line of "getting rfcs" and a body of "help: ways_to_get_rfcs".)

R. Hunter and A.H. Robinson. "International Digital Facsimile Coding Standards," Proceedings of the IEEE, Volume 68, Number 7, July 1980, pp. 854-867.

RFC 804—CCITT Draft Recommendation T.4 (Standardization of Group 3 Facsimile Apparatus for Document Transmission).


RFC 1314—A File Format for the Exchange of Images in the Internet.

McConnell. FAX: Digital Facsimile Technology & Applications, Artech House, Norwood, MA; second edition, 1992.

Marking, Michael P. "Decoding Group 3 Images," The C Users Journal, June 1990, pp. 45-54.

Information on MMR encoding may be found in the following reference:

"Binary-image-manipulation Algorithms in the Image View Facility," IBM Journal of Research and Development, Volume 31, Number 1, January 1987.

Information on Huffman encoding may be found in the following references:

Huffman, David. "A Method for the Construction of Minimum Redundancy Codes," Proceedings of the IRE, Volume 40, Number 9, 1952, pp. 1098-1101.

Kruger, Anton. "Huffman Data Compression," C Gazette, Volume 5, Number 4, 1991, pp. 71-77.

JPEG Compression

One of the hottest topics in image compression technology today is JPEG. The acronym JPEG stands for the Joint Photographic Experts Group, a standards committee that had its origins within the International Standards Organization (ISO). In 1982, the ISO (the European equivalent of ANSI, the American National Standards Institute) formed the Photographic Experts Group (PEG) to research methods of transmitting video, still images, and text over ISDN (Integrated Services Digital Network) lines. PEG's goal was to produce a set of industry standards for the transmission of graphics and image data over digital communications networks.

In 1986, a subgroup of the CCITT began to research methods of compressing color and gray-scale data for facsimile transmission. The compression methods needed for color facsimile systems were very similar to those being researched by PEG. It was therefore agreed that the two groups should combine their resources and work together towards a single standard. In 1987, the ISO and CCITT combined their two groups into a joint committee that would research and produce a single standard of image data compression for both organizations to use. This new committee was JPEG.


Although the creators of JPEG might have envisioned a multitude of commercial applications for JPEG technology, a consumer public made hungry by the marketing promises of imaging and multimedia technology is benefiting greatly as well. Most previously developed compression methods do a relatively poor job of compressing continuous-tone image data; that is, images containing hundreds or thousands of colors taken from real-world subjects. And very few file formats can support 24-bit raster images. GIF, for example, can store only images with a maximum pixel depth of eight bits, for a maximum of 256 colors. And its LZW compression algorithm does not work very well on typical scanned image data; the low-level noise commonly found in such data defeats LZW's ability to recognize repeated patterns. Both TIFF and BMP are capable of storing 24-bit data, but in their pre-JPEG versions can use only encoding schemes (LZW and RLE, respectively) that do not compress this type of image data very well.

JPEG provides a compression method that is capable of compressing continuous-tone image data with a pixel depth of six to 24 bits with reasonable speed and efficiency. And although JPEG itself does not define a standard image file format, several have been invented or modified to fill the needs of JPEG data storage.

JPEG in Perspective

Unlike all of the other compression methods described so far in this chapter, JPEG is not a single algorithm. Instead, it may be thought of as a toolkit of image compression methods that may be altered to fit the needs of the user. JPEG may be adjusted to produce very small, compressed images that are of relatively poor quality in appearance but still suitable for many applications. Conversely, JPEG is capable of producing very high-quality compressed images that are still far smaller than the original uncompressed data.

JPEG is also different in that it is primarily a lossy method of compression. Most popular image format compression schemes, such as RLE, LZW, or the CCITT standards, are lossless compression methods. That is, they do not discard any data during the encoding process. An image compressed using a lossless method is guaranteed to be identical to the original image when uncompressed. Lossy schemes, on the other hand, throw useless data away during encoding. This is, in fact, how lossy schemes manage to obtain superior compression ratios over most lossless schemes. JPEG was designed specifically to discard information that the human eye cannot easily see. Slight changes in color are not perceived well by the human eye, while slight changes in intensity (light and dark) are. Therefore JPEG's lossy encoding tends to be more frugal with the gray-scale part of an image and to be more frivolous with the color.

JPEG was designed to compress color or gray-scale continuous-tone images of real-world subjects: photographs, video stills, or any complex graphics that resemble natural subjects. Animations, ray tracing, line art, black-and-white documents, and typical vector graphics don't compress very well under JPEG and shouldn't be expected to. And, although JPEG is now used to provide motion-video compression, the standard makes no special provision for such an application.

The fact that JPEG is lossy and works only on a select type of image data might make you ask, "Why bother to use it?" It depends upon your needs. JPEG is an excellent way to store 24-bit photographic images, like those used in imaging and multimedia applications. JPEG 24-bit (16 million color) images are superior in appearance to 8-bit (256 color) images on a VGA display and are at their most spectacular when using 24-bit display hardware (which is now quite inexpensive).

The amount of compression achieved depends upon the content of the image data. A typical photographic-quality image may be compressed from 20:1 to 25:1 without experiencing any noticeable degradation in quality. Higher compression ratios will result in image files that differ noticeably from the original image, but that still have an overall good image quality. And achieving a 20:1 or better compression ratio in many cases not only saves disk space, but also reduces transmission time across data networks and phone lines.

An end user can "tune" the quality of a JPEG encoder using a parameter sometimes called a "quality setting" or a "Q factor." Although different implementations have varying scales of Q factors, a range of 1 to 100 is typical. A factor of 1 produces the smallest, worst-quality images; a factor of 100 produces the largest, best-quality images. The optimal Q factor depends on the image content and is therefore different for every image. The art of JPEG compression is finding the lowest Q factor that produces an image that is visibly acceptable, and preferably as identical to the original as possible.

The JPEG library supplied by the Independent JPEG Group (included on the CD-ROM that accompanies this book) uses a quality setting scale of 1 to 100. To find the optimal compression for an image using the JPEG library, follow these steps:

1. Encode the image using a quality setting of 75 (-Q 75).
2. If you observe unacceptable defects in the image, increase the value and re-encode the image.


3. If the image quality is acceptable, decrease the setting until the image quality is barely acceptable. This will be the optimal quality setting for this image.
4. Repeat this process for every image you have (or just encode them all using a quality setting of 75).

JPEG isn't always an ideal compression solution. There are several reasons:

• As we have said, JPEG doesn't fit every compression need. Images containing large areas of a single color do not compress very well. In fact, JPEG will introduce "artifacts" into such images that are visible against a flat background, making them considerably worse in appearance than if a conventional lossless compression method were used. Images of a "busier" composition contain even worse artifacts, but they are considerably less noticeable against the image's more complex background.

• JPEG can be rather slow when it is implemented only in software. If fast decompression is required, a hardware-based JPEG solution is your best bet, unless you are willing to wait for a faster software-only solution to come along or buy a faster computer.

• JPEG is not trivial to implement. It is not likely you will be able to sit down and write your own JPEG encoder/decoder in a few evenings. We recommend that you obtain a third-party JPEG library, rather than writing your own.

• JPEG is not supported by very many file formats. The formats that do support JPEG are all fairly new and can be expected to be revised at frequent intervals.

Baseline JPEG

The JPEG specification defines a minimal subset of the standard called baseline JPEG, which all JPEG-aware applications are required to support. This baseline uses an encoding scheme based on the Discrete Cosine Transform (DCT) to achieve compression. DCT is a generic name for a class of operations identified and published some years ago. DCT-based algorithms have since made their way into various compression methods.

DCT-based encoding algorithms are always lossy by nature. DCT algorithms are capable of achieving a high degree of compression with only minimal loss of data. This scheme is effective only for compressing continuous-tone images, in which the differences between adjacent pixels are usually small. In practice, JPEG works well only on images with depths of at least four or five bits per color channel. The baseline standard actually specifies eight bits per input sample. Data of lesser bit depth can be handled by scaling it up to eight bits per sample, but the results will be bad for low-bit-depth source data because of the large jumps between adjacent pixel values. For similar reasons, colormapped source data does not work very well, especially if the image has been dithered.

The JPEG compression scheme is divided into the following stages:

1. Transform the image into an optimal color space.
2. Downsample chrominance components by averaging groups of pixels together.
3. Apply a Discrete Cosine Transform (DCT) to blocks of pixels, thus removing redundant image data.
4. Quantize each block of DCT coefficients using weighting functions optimized for the human eye.
5. Encode the resulting coefficients (image data) using a Huffman variable word-length algorithm to remove redundancies in the coefficients.

Figure 9-11 summarizes these steps, and the following subsections look at each of them in turn. Note that JPEG decoding performs the reverse of these steps.

Figure 9-11: JPEG compression and decompression


Transform the image

The JPEG algorithm is capable of encoding images that use any type of color space. JPEG itself encodes each component in a color model separately, and it is completely independent of any color-space model, such as RGB, HSI, or CMY. The best compression ratios result if a luminance/chrominance color space, such as YUV or YCbCr, is used (see "A Word About Color Spaces" in Chapter 2, Computer Graphics Basics, for a description of these color spaces).

Most of the visual information to which human eyes are most sensitive is found in the high-frequency, gray-scale, luminance component (Y) of the YCbCr color space. The other two chrominance components (Cb and Cr) contain high-frequency color information to which the human eye is less sensitive. Most of this information can therefore be discarded.
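As a sketch of this transform, the following C function converts one RGB pixel to YCbCr using the ITU-R BT.601 luminance weights commonly used by JPEG implementations; the exact constants and rounding behavior vary from one codec to another, so treat this as illustrative rather than definitive.

    /* Convert one 8-bit RGB pixel to YCbCr (BT.601 weights). */
    void rgb_to_ycbcr(unsigned char r, unsigned char g, unsigned char b,
                      unsigned char *y, unsigned char *cb, unsigned char *cr)
    {
        double Y = 0.299 * r + 0.587 * g + 0.114 * b;       /* luminance */
        *y  = (unsigned char)(Y + 0.5);
        *cb = (unsigned char)(128 + 0.564 * (b - Y) + 0.5); /* blue chroma */
        *cr = (unsigned char)(128 + 0.713 * (r - Y) + 0.5); /* red chroma */
    }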

In comparison, the RGB, HSI, and CMY color models spread their useful visual image information evenly across each of their three color components, making the selective discarding of information very difficult. All three color components would need to be encoded at the highest quality, resulting in a poorer compression ratio. Gray-scale images do not have a color space as such and therefore do not require transforming.

Downsample chrominance components

The simplest way of exploiting the eye's lesser sensitivity to chrominance information is simply to use fewer pixels for the chrominance channels. For example, in an image nominally 1000x1000 pixels, we might use a full 1000x1000 luminance pixels, but only 500x500 pixels for each chrominance component. In this representation, each chrominance pixel covers the same area as a 2x2 block of luminance pixels. We store a total of six pixel values for each 2x2 block (four luminance values, one each for the two chrominance channels), rather than the twelve values needed if each component is represented at full resolution. Remarkably, this 50 percent reduction in data volume has almost no effect on the perceived quality of most images. Equivalent savings are not possible with conventional color models such as RGB, because in RGB each color channel carries some luminance information and so any loss of resolution is quite visible.

When the uncompressed data is supplied in a conventional format (equal resolution for all channels), a JPEG compressor must reduce the resolution of the chrominance channels by downsampling, or averaging together groups of pixels. The JPEG standard allows several different choices for the sampling ratios, or relative sizes, of the downsampled channels. The luminance channel is always left at full resolution (1:1 sampling). Typically both chrominance channels are downsampled 2:1 horizontally and either 1:1 or 2:1 vertically, meaning that a chrominance pixel covers the same area as either a 2x1 or a 2x2 block of luminance pixels.
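A minimal sketch of the 2:1-in-both-directions case appears below: each output chrominance pixel is the rounded average of a 2x2 block of input chrominance pixels. It assumes even image dimensions, and the function name is ours.

    /* Downsample one chrominance channel 2:1 in both directions.
     * in is w x h; out must hold (w/2) x (h/2) samples. */
    void downsample_2x2(const unsigned char *in, int w, int h,
                        unsigned char *out)
    {
        for (int y = 0; y < h; y += 2) {
            for (int x = 0; x < w; x += 2) {
                int sum = in[y * w + x]       + in[y * w + x + 1] +
                          in[(y + 1) * w + x] + in[(y + 1) * w + x + 1];
                out[(y / 2) * (w / 2) + (x / 2)] =
                    (unsigned char)((sum + 2) / 4);  /* rounded average */
            }
        }
    }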


JPEG refers to these downsampling processes as 2h1v and 2h2v sampling, respectively. Another notation commonly used is 4:2:2 sampling for 2h1v and 4:2:0 sampling for 2h2v; this notation derives from television customs (color transformation and downsampling have been in use since the beginning of color TV transmission). 2h1v sampling is fairly common because it corresponds to National Television Standards Committee (NTSC) standard TV practice, but it offers less compression than 2h2v sampling, with hardly any gain in perceived quality.

Apply a Discrete Cosine Transform

The image data is divided up into 8x8 blocks of pixels. (From this point on, each color component is processed independently, so a "pixel" means a single value, even in a color image.) A DCT is applied to each 8x8 block. DCT converts the spatial image representation into a frequency map: the low-order or "DC" term represents the average value in the block, while successive higher-order ("AC") terms represent the strength of more and more rapid changes across the width or height of the block. The highest AC term represents the strength of a cosine wave alternating from maximum to minimum at adjacent pixels.

The DCT calculation is fairly complex; in fact, this is the most costly step in JPEG compression. The point of doing it is that we have now separated out the high- and low-frequency information present in the image. We can discard high-frequency data easily without losing low-frequency information. The DCT step itself is lossless except for roundoff errors.
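The following is a straightforward (and deliberately unoptimized) C implementation of the forward 8x8 DCT, directly following the textbook definition; production JPEG codecs use fast factored forms rather than the quadruple loop shown here.

    #include <math.h>

    #define BLK 8

    /* Forward 8x8 DCT: in holds pixel values; out receives the DC
     * coefficient at out[0][0] and the AC coefficients elsewhere. */
    void fdct8x8(const double in[BLK][BLK], double out[BLK][BLK])
    {
        const double pi = 3.14159265358979323846;
        for (int u = 0; u < BLK; u++) {
            for (int v = 0; v < BLK; v++) {
                double sum = 0.0;
                for (int x = 0; x < BLK; x++)
                    for (int y = 0; y < BLK; y++)
                        sum += in[x][y] *
                               cos((2 * x + 1) * u * pi / 16.0) *
                               cos((2 * y + 1) * v * pi / 16.0);
                double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
                double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
                out[u][v] = 0.25 * cu * cv * sum;  /* normalization */
            }
        }
    }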

Quantize each block

To discard an appropriate amount of information, the compressor divides each DCT output value by a "quantization coefficient" and rounds the result to an integer. The larger the quantization coefficient, the more data is lost, because the actual DCT value is represented less and less accurately. Each of the 64 positions of the DCT output block has its own quantization coefficient, with the higher-order terms being quantized more heavily than the low-order terms (that is, the higher-order terms have larger quantization coefficients). Furthermore, separate quantization tables are employed for luminance and chrominance data, with the chrominance data being quantized more heavily than the luminance data. This allows JPEG to exploit further the eye's differing sensitivity to luminance and chrominance.

It is this step that is controlled by the "quality" setting of most JPEG compressors. The compressor starts from a built-in table that is appropriate for a medium-quality setting and increases or decreases the value of each table entry in inverse proportion to the requested quality. The complete quantization tables actually used are recorded in the compressed file so that the decompressor will know how to (approximately) reconstruct the DCT coefficients.

Selection of an appropriate quantization table is something of a black art. Most existing compressors start from a sample table developed by the ISO JPEG committee. It is likely that future research will yield better tables that provide more compression for the same perceived image quality. Implementation of improved tables should not cause any compatibility problems, because decompressors merely read the tables from the compressed file; they don't care how the table was picked.
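In code, the quantization step itself is trivial; all of the subtlety is in choosing the table. Here is a minimal sketch, with the table contents deliberately left unspecified:

    /* Quantize an 8x8 block of DCT coefficients: divide each
     * coefficient by its table entry and round to the nearest integer. */
    void quantize8x8(const double dct[8][8], const int qtable[8][8],
                     int out[8][8])
    {
        for (int u = 0; u < 8; u++)
            for (int v = 0; v < 8; v++)
                out[u][v] = (int)(dct[u][v] / qtable[u][v] +
                                  (dct[u][v] >= 0 ? 0.5 : -0.5));
    }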

Encode the resulting coefficients

The resulting coefficients contain a significant amount of redundant data. Huffman compression will losslessly remove the redundancies, resulting in smaller JPEG data. An optional extension to the JPEG specification allows arithmetic encoding to be used instead of Huffman for an even greater compression ratio (see the section called "JPEG Extensions" below). At this point, the JPEG data stream is ready to be transmitted across a communications channel or encapsulated inside an image file format.

JPEG Extensions

What we have examined thus far is only the baseline specification for JPEG. A number of extensions have been defined in the JPEG specification that provide progressive image buildup, improved compression ratios using arithmetic encoding, and a lossless compression scheme. These features are beyond the needs of most JPEG implementations and have therefore been defined as "not required to be supported" extensions to the JPEG standard.

Progressive image buildup

Progressive image buildup is an extension for use in applications that need to receive JPEG data streams and display them on the fly. A baseline JPEG image can be displayed only after all of the image data has been received and decoded. But some applications require that the image be displayed after only some of the data is received. Using a conventional compression method, this means displaying the first few scan lines of the image as it is decoded. In this case, even if the scan lines were interlaced, you would need at least 50 percent of the image data to get a good clue as to the content of the image.

The progressive buildup extension of JPEG offers a better solution. Progressive buildup allows an image to be sent in layers rather than scan lines. But instead of transmitting each bitplane or color channel in sequence (which wouldn't be very useful), a succession of images built up from approximations of the original image is sent. The first scan provides a low-accuracy representation of the entire image—in effect, a very low-quality JPEG compressed image. Subsequent scans gradually refine the image by increasing the effective quality factor. If the data is displayed on the fly, you would first see a crude, but recognizable, rendering of the whole image. This would appear very quickly because only a small amount of data would need to be transmitted to produce it. Each subsequent scan would improve the displayed image's quality one block at a time.

A limitation of progressive JPEG is that each scan takes essentially a full JPEG decompression cycle to display. Therefore, with typical data transmission rates, a very fast JPEG decoder (probably specialized hardware) would be needed to make effective use of progressive transmission.

A related JPEG extension provides for hierarchical storage of the same image at multiple resolutions. For example, an image might be stored at 250x250, 500x500, 1000x1000, and 2000x2000 pixels, so that the same image file could support display on low-resolution screens, medium-resolution laser printers, and high-resolution imagesetters. The higher-resolution images are stored as differences from the lower-resolution ones, so they need less space than they would need if they were stored independently. This is not the same as a progressive series, because each image is available in its own right at the full desired quality.

Arithmetic encoding

The baseline JPEG standard defines Huffman compression as the final step in the encoding process. A JPEG extension replaces the Huffman engine with a binary arithmetic entropy encoder. The use of an arithmetic coder reduces the resulting size of the JPEG data by a further 10 percent to 15 percent over the results that would be achieved by the Huffman coder. With no change in resulting image quality, this gain could be of importance in implementations where enormous quantities of JPEG images are archived.

Arithmetic encoding has several drawbacks:

• Not all JPEG decoders support arithmetic decoding. Baseline JPEG decoders are required to support only the Huffman algorithm.

• The arithmetic algorithm is slower in both encoding and decoding than Huffman.

• The arithmetic coder used by JPEG (called a Q-coder) is owned by IBM and AT&T. (Mitsubishi also holds patents on arithmetic coding.) You must obtain a license from the appropriate vendors if their Q-coders are to be used as the back end of your JPEG implementation.


Lossless JPEG compression

A question that commonly arises is "At what Q factor does JPEG become lossless?" The answer is "never." Baseline JPEG is a lossy method of compression regardless of adjustments you may make in the parameters. In fact, DCT-based encoders are always lossy, because roundoff errors are inevitable in the color conversion and DCT steps. You can suppress deliberate information loss in the downsampling and quantization steps, but you still won't get an exact recreation of the original bits. Further, this minimum-loss setting is a very inefficient way to use lossy JPEG.

The JPEG standard does offer a separate lossless mode. This mode has nothing in common with the regular DCT-based algorithms, and it is currently implemented only in a few commercial applications. JPEG lossless is a form of predictive lossless coding using a 2-dimensional Differential Pulse Code Modulation (DPCM) scheme. The basic premise is that the value of a pixel is combined with the values of up to three neighboring pixels to form a predictor value. The predictor value is then subtracted from the original pixel value. When the entire bitmap has been processed, the resulting predictors are compressed using either the Huffman or the binary arithmetic entropy encoding methods described in the JPEG standard.

Lossless JPEG works on images with two to 16 bits per pixel, but performs best on images with six or more bits per pixel. For such images, the typical compression ratio achieved is 2:1. For image data with fewer bits per pixel, other compression schemes perform better.
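As a sketch of the prediction idea, one of the standard lossless JPEG predictors combines the pixel to the left (a), the pixel above (b), and the pixel above-left (c); the difference between the actual pixel and this prediction is what gets entropy-encoded. The function names here are ours.

    /* One of the lossless JPEG neighbor predictors: a + b - c. */
    static int predict(int a, int b, int c)
    {
        return a + b - c;
    }

    /* The value actually encoded is the prediction residual. */
    int residual(int actual, int a, int b, int c)
    {
        return actual - predict(a, b, c);
    }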

For Further Information About JPEG

The JPEG standard itself is not available electronically; you must order a paper copy through ISO (but see the Pennebaker and Mitchell book below). In the United States, copies of the standard may be ordered from:

American National Standards Institute, Inc.
Attn: Sales
1430 Broadway
New York, NY 10018
Voice: 212-642-4900

The standard is divided into two parts; Part 1 is the actual specification, and Part 2 covers compliance-testing methods. Part 1 of the draft has now reached International Standard status. See this document:

Digital Compression and Coding of Continuous-tone Still Images, Part 1: Requirements and Guidelines. Document number ISO/IEC IS 10918-1.


Part 2 is still at Committee Draft status. See this document:

Digital Compression and Coding of Continuous-tone Still Images, Part 2: Compliance Testing. Document number ISO/IEC CD 10918-2.

New information on JPEG and related algorithms is appearing constantly. The majority of the commercial work for JPEG is being carried out at companies such as the following:

Eastman Kodak Corporation
343 State Street
Rochester, NY 14650
Voice: 800-242-2424

C-Cube Microsystems
1778 McCarthy Boulevard
Milpitas, CA 95035
Voice: 408-944-6300

See the article about the JFIF (JPEG File Interchange Format) supported by C-Cube in Part II of this book.

The JPEG FAQ (Frequently Asked Questions) article is a useful source of general information about JPEG. This FAQ is included on the CD-ROM that accompanies this book; however, because the FAQ is updated frequently, the CD-ROM version should be used only for general information. The FAQ is posted every two weeks to the Usenet newsgroups comp.graphics, news.answers, and other groups. You can get the latest version of this FAQ from the news.answers archive at rtfm.mit.edu in /pub/usenet/news.answers/jpeg-faq. You can also get this FAQ by sending email to [email protected] with the message "send usenet/news.answers/jpeg-faq" in the body.

A consortium of programmers, the Independent JPEG Group (IJG), has produced a public-domain version of a JPEG encoder and decoder in C source code form. We have included this code on the CD-ROM that accompanies this book. See Appendix A, What's On the CD-ROM?, for information on how you can obtain the IJG library from the CD-ROM and from various FTP sites, information services, and computer bulletin boards.

The best short technical introduction to the JPEG compression algorithm is:

Wallace, Gregory K. "The JPEG Still Picture Compression Standard," Communications of the ACM, April 1991 (Volume 34, Number 4), pp. 30-44.


A more complete explanation of JPEG can be found in the following texts:

Pennebaker, William B. and Mitchell, Joan L. JPEG Still Image Data Compression Standard, Van Nostrand Reinhold, New York, 1992. This book includes the complete text of the ISO JPEG standards (DIS 10918-1 and draft DIS 10918-2). This is by far the most complete exposition of JPEG in existence and is highly recommended.

Nelson, Mark. The Data Compression Book, M&T Books, Redwood City, CA, 1991. This book provides good explanations and example C code for a multitude of compression methods including JPEG. It is an excellent source if you are comfortable reading C code but don't know much about data compression in general. The book's JPEG sample code is incomplete and not very robust, but the book is a good tutorial.

For Further Information About Data Compression

On Usenet, the comp.compression newsgroup FAQ article provides a useful source of information about a variety of different compression methods, including JPEG, MPEG, Huffman, and the various LZ algorithms. Also included is information on many archiving programs (pkzip, lzh, zoo, etc.) and patents pertaining to compression algorithms. This FAQ is included on the CD-ROM that accompanies this book; however, because the FAQ is updated frequently, the CD-ROM version should be used only for general information. You can get the latest version of this FAQ from the news.answers archive at rtfm.mit.edu, where it is stored in the directory /pub/usenet/comp.compression/compression-faq in three parts: part1, part2, and part3. This FAQ may also be obtained by sending email to [email protected] with the message "send usenet/comp.compression/compression-faq/part1", etc., in the body.

There are many books on data encoding and compression. Most older books contain only mathematical descriptions of encoding algorithms. Some newer books, however, have picked up the highly desirable trend of including working (we hope) C code examples in their text and even making the code examples available online or on disk. The following references contain good general discussions of many of the compression algorithms discussed in this chapter:

Bell, T.C., I.H. Witten, and J.G. Cleary. "Modeling for Text Compression," ACM Computing Surveys, Volume 21, Number 4, December 1989, p. 557.


Bell, T.C., I.H. Witten, and J.G. Cleary. Text Compression, Prentice-Hall, Englewood Cliffs, NJ, 1990.

Huffman, D.A. "A Method for the Construction of Minimum-Redundancy Codes," Proceedings of the IRE, Volume 40, Number 9, September 1952, pp. 1098-1101.

Jain, Anil K., Paul M. Farrelle, and V. Ralph Algazi. Image Data Compression: Digital Image Processing Techniques, Academic Press, San Diego, CA, 1984, pp. 171-226.

Lelewer, D.A., and Hirschberg, D.S. "Data Compression," ACM Computing Surveys, Volume 19, Number 3, September 1987, p. 261.

Storer, James A. Data Compression: Methods and Theory, Computer Science Press, Rockville, MD, 1988.

Storer, James A., ed. Image and Text Compression, Kluwer Books, Boston, MA, 1992.

Williams, R. Adaptive Data Compression, Kluwer Books, Boston, MA, 1990.


Chapter 10

Multimedia

Beyond Traditional Graphics File Formats

Most of this book describes image file formats and the types of data compression they employ. However, still images are not the only type of data that can be stored in a file. This chapter describes the other types of graphics data that are becoming popular.

A hot topic in the world of personal computers today is multimedia. Multimedia applications combine text, graphics, audio, and video in much the same way a motion picture film combines sound and motion photography. But, unlike motion pictures, multimedia can be interactive through the use of a keyboard, mouse, joystick, or other input device to control the behavior of the multimedia presentation. The output from a multimedia application can be through conventional speakers or a stereo system, a music or voice synthesizer, or other types of output devices.

A conventional stereo system or a television and video tape recorder (VCR) are passive information devices. You can raise and lower the volume of a stereo, change the color of a television picture, or fast-forward a VCR, but this type of control is very limited in capability and is used only intermittently. When you use a passive information device, you normally just sit and watch the picture and listen to the sound.

Anyone who has played a computer or video arcade game has experienced an active information device. The games at your local video arcade, or hooked up to your living room television (and therefore permanently attached to your eight-year-old's hands), require constant input in order to function properly. And, although the sights and sounds of such a game might be staggering, the control and utility a user gains from an active information device is only slightly more than is gained using a passive one.


Personal computers are not only active information devices, but also interactive devices. A computer itself does very little unless a user interacts with it. Computers are, as you would expect, excellent platforms for interactive multimedia applications. Interactive multimedia provides more than just the stimulus-response reaction of a video game. It also allows a collection of complex data to be manipulated with much finer control than is possible using non-interactive devices. Sample multimedia applications in existence today include:

• Online multimedia dictionaries and encyclopedias containing text, sounds, and images. These allow the instant lookup of word definitions and provide video playback in addition to pictures.

• Games that respond to hand movements and that talk back to the user.

Computerized multimedia is still in its infancy. It is currently a tool used for educational and entertainment purposes and is expanding into the commercial world. There probably isn't a complex computerized control system that wouldn't be easier to learn or to use if it had a standardized multimedia front end. And one day you might even see multimedia applications with heuristic algorithms that will allow your computer to learn as much from you as you will from your computer.

Multimedia File Formats

Multimedia data and information must be stored in a disk file using formats similar to image file formats. Multimedia formats, however, are much more complex than most other file formats because of the wide variety of data they must store. Such data includes text, image data, audio and video data, computer animations, and other forms of binary data, such as Musical Instrument Digital Interface (MIDI) control information and graphical fonts. (See the "MIDI Standard" section later in this chapter.) Typical multimedia formats do not define new methods for storing these types of data. Instead, they offer the ability to store data in one or more existing data formats that are already in general use. For example, a multimedia format may allow text to be stored as PostScript or Rich Text Format (RTF) data rather than in conventional ASCII plain-text format. Still-image bitmap data may be stored as BMP or TIFF files rather than as raw bitmaps. Similarly, audio, video, and animation data can be stored using industry-recognized formats specified as being supported by that multimedia file format.

Multimedia formats are also optimized for the types of data they store and the format of the medium on which they are stored. Multimedia information is commonly stored on CD-ROM. Unlike conventional disk files, CD-ROMs are limited in the amount of information they can store. A multimedia format must therefore make the best use of available data storage techniques to efficiently store data on the CD-ROM medium. There are many types of CD-ROM devices and standards that may be used by multimedia applications. If you are interested in multimedia, you should become familiar with them.

The original Compact Disc, first introduced in the early 1980s, was used for storing only audio information using the CD-DA (Compact Disc-Digital Audio) standard produced by Philips and Sony. CD-DA (also called the Red Book) is an optical data storage format that allows the storage of up to 74 minutes of audio (764 megabytes of data) on a conventional CD-ROM.

The CD-DA standard evolved into the CD-XA (Compact Disc-Extended Architecture) standard, or what we call the CD-ROM (Compact Disc-Read Only Memory). CD-XA (also called the Yellow Book) allows the storage of both digital audio and data on a CD-ROM. Audio may be combined with data, such as text, graphics, and video, so that it may all be read at the same time. An ISO 9660 file system may also be encoded on a CD-ROM, allowing its files to be read by a wide variety of different computer system platforms. (See Appendix A, What's On the CD-ROM?, for a discussion of the ISO 9660 standard.)

The CD-I (Compact Disc-Interactive) standard defines the storage of interactive multimedia data. CD-I (also called the Green Book) describes a computer system with audio and video playback capabilities designed specifically for the consumer market. CD-I units allow the integration of fully interactive multimedia applications into home computer systems.

A still-evolving standard is CD-R (Compact Disc-Recordable or Compact Disc-Write Once), which specifies a CD-ROM that may be written to by a personal desktop computer and read by any CD-ROM player. CD-R (also called the Orange Book) promises to turn any home computer into a CD-ROM publishing facility within the next few years.

For more specific information on multimedia, refer to the articles on the RIFF, DVI, QuickTime, and MPEG multimedia formats in Part II of this book.

Types of Data

The following sections describe various types of data that you might find, in addition to static graphics data, in multimedia files.


Animation

Somewhere between the motionless world of still images and the real-time world of video images lies the flip-book world of computer animation. All of the animated sequences seen in educational programs, motion CAD renderings, and computer games are computer-animated (and in many cases, computer-generated) animation sequences. Traditional cartoon animation is little more than a series of artwork cells, each containing a slight positional variation of the animated subjects. When a large number of these cells is displayed in sequence and at a fast rate, the animated figures appear to the human eye to move. A computer-animated sequence works in exactly the same manner. A series of images is created of a subject; each image contains a slightly different perspective on the animated subject. When these images are displayed (played back) in the proper sequence and at the proper speed (frame rate), the subject appears to move. Computerized animation is actually a combination of both still and motion imaging. Each frame, or cell, of an animation is a still image that requires compression and storage. An animation file, however, must store the data for hundreds or thousands of animation frames and must also provide the information necessary to play back the frames using the proper display mode and frame rate.

Animation file formats are only capable of storing still images and not actual video information. It is possible, however, for most multimedia formats to contain animation information, because animation is actually a much easier type of data than video to store.

The image-compression schemes used in animation files are also much simpler than most of those used in video compression. Most animation files use a delta compression scheme, which is a form of Run-Length Encoding that stores and compresses only the information that is different between two images, rather than compressing each image frame entirely. (See Chapter 9, Data Compression, for a description of RLE compression.)
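As a sketch of this idea, the following C fragment encodes one frame as a series of (skip, copy) records that store only the pixels that changed since the previous frame. The record format here is invented purely for illustration and does not match any particular animation format.

    #include <stddef.h>
    #include <stdio.h>

    /* Write cur as deltas against prev: skip unchanged pixels,
     * then copy the literal bytes of each changed run. */
    void encode_delta(const unsigned char *prev, const unsigned char *cur,
                      size_t npixels, FILE *out)
    {
        size_t i = 0;
        while (i < npixels) {
            size_t skip = 0, run = 0;
            while (i + skip < npixels && prev[i + skip] == cur[i + skip])
                skip++;                       /* unchanged: skip */
            i += skip;
            if (i >= npixels)
                break;                        /* rest of frame unchanged */
            while (i + run < npixels && prev[i + run] != cur[i + run])
                run++;                        /* changed: copy */
            fprintf(out, "skip %lu copy %lu\n",
                    (unsigned long)skip, (unsigned long)run);
            fwrite(cur + i, 1, run, out);     /* literal changed bytes */
            i += run;
        }
    }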

Overview

movement so their behavior can be seen; mathematical data collected by an aircraft or satellite may be rendered into an animated fly-by sequence; and movie special effects benefit greatly by computer animation. For more specific information on animation, refer to the articles on the FLI and GRASP animation formats in Part II of this book.

Digital Video

One step beyond animation is broadcast video. Your television and video tape recorder are a lot more complex than an 8mm home movie projector and your kitchen wall. There are many complex signals and complicated standards involved in transmitting those late-night reruns across the airwaves and cable. Only in the last few years has a personal computer been able to work with video data at all.

Video data normally occurs as continuous, analog signals. In order for a computer to process this video data, we must convert the analog signals to a non-continuous, digital format. In a digital format, the video data can be stored as a series of bits on a hard disk or in computer memory.

The process of converting a video signal to a digital bitstream is called analog-to-digital conversion (A/D conversion), or digitizing. A/D conversion occurs in two steps:

1. Sampling captures data from the video stream.
2. Quantizing converts each captured sample into a digital format.

Each sample captured from the video stream is typically stored as a 16-bit integer. The rate at which samples are collected is called the sampling rate. The sampling rate is measured in the number of samples captured per second (samples/second). For digital video, it is necessary to capture millions of samples per second.

Quantizing converts the level of a video signal sample into a discrete, binary value. This value approximates the level of the original video signal sample. The value is selected by comparing the video sample to a series of predefined threshold values. The value of the threshold closest to the amplitude of the sampled signal is used as the digital value.

A video signal contains several different components which are mixed together in the same signal. This type of signal is called a composite video signal and is not really useful in high-quality computer video. Therefore, a standard composite video signal is usually separated into its basic components before it is digitized.
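To illustrate the quantizing step, here is a minimal sketch that maps a sampled signal level to the nearest entry in a table of predefined threshold levels; the table itself (and the function name) is hypothetical.

    #include <math.h>

    /* Return the index of the threshold level nearest to sample. */
    int quantize(double sample, const double *levels, int nlevels)
    {
        int best = 0;
        double bestdiff = fabs(sample - levels[0]);
        for (int i = 1; i < nlevels; i++) {
            double diff = fabs(sample - levels[i]);
            if (diff < bestdiff) {
                bestdiff = diff;     /* closer threshold found */
                best = i;
            }
        }
        return best;                 /* index used as the digital value */
    }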


The composite video signal format defined by the NTSC (National Television Standards Committee) color television system is used in the United States. The PAL (Phase Alternation Line) and SECAM (Sequentiel Couleur Avec Memoire) color television systems are used in Europe and are not compatible with NTSC. Most computer video equipment supports one or more of these system standards.

The components of a composite video signal are normally decoded into three separate signals representing the three channels of a color space model, such as RGB, YUV, or YIQ. Although the RGB model is quite commonly used in still imaging, the YUV, YIQ, and YCbCr models are more often used in motion-video imaging. TV practice uses YUV or similar color models because the U and V channels can be downsampled to reduce data volume without materially degrading image quality. (The three channels mentioned here are the same channels used in the downsampling stage of JPEG compression; for more information, see the discussion of JPEG in Chapter 9, Data Compression.)

Once the video signal is converted to a digital format, the resulting values can be represented on a display device as pixels. Each pixel is a spot of color on the video display, and the pixels are arranged in rows and columns just as in a bitmap. Unlike a static bitmap, however, the pixels in a video image are constantly being updated for changes in intensity and color. This updating is called scanning, and it occurs 60 times per second in NTSC video signals (50 times per second for PAL and SECAM).

A video sequence is displayed as a series of frames. Each frame is a snapshot of a moment in time of the motion-video data, and is very similar to a still image. When the frames are played back in sequence on a display device, a rendering of the original video data is created. In real-time video, the playback rate is 30 frames per second. This is the minimum rate necessary for the human eye to successfully blend the video frames together into a continuous, smoothly moving image.

A single frame of video data can be quite large in size. A video frame with a resolution of 512x482 will contain 246,784 pixels. If each pixel contains 24 bits of color information, the frame will require 740,352 bytes of memory or disk space to store. Assuming there are 30 frames per second for real-time video, a 10-second video sequence would be more than 222 megabytes in size! It is clear there can be no computer video without at least one efficient method of video data compression.

There are many encoding methods available that will compress video data. The majority of these methods involve the use of a transform coding scheme, usually employing a Fourier or Discrete Cosine Transform (DCT). These transforms physically reduce the size of the video data by selectively throwing away unneeded parts of the digitized information. Transform compression schemes usually discard 10 percent to 25 percent or more of the original video data, depending largely on the content of the video data and upon what image quality is considered acceptable.

Usually a transform is performed on an individual video frame. The transform itself does not produce compressed data. It discards only data not used by the human eye. The transformed data, called coefficients, must have compression applied to reduce the size of the data even further. Each frame of data may be compressed using a Huffman or arithmetic encoding algorithm, or even a more complex compression scheme such as JPEG. (See Chapter 9, Data Compression, for a discussion of these compression methods.) This type of intraframe encoding usually results in compression ratios between 20:1 and 40:1, depending on the data in the frame.

However, even higher compression ratios may result if, rather than looking at single frames as if they were still images, we look at multiple frames as temporal images. In a typical video sequence, very little data changes from frame to frame. If we encode only the pixels that change between frames, the amount of data required to store a single video frame drops significantly. This type of compression is known as interframe delta compression, or in the case of video, motion compensation. Typical motion compensation schemes that encode only frame deltas (data that has changed between frames) can, depending on the data, achieve compression ratios upwards of 200:1. This is only one possible type of video compression method. There are many other types of video compression schemes, some of which are similar and some of which are different. For more information on compression methods, refer to Chapter 9 and to the articles in Part II that describe multimedia file formats.

Digital Audio

All multimedia file formats are capable of storing sound information. Sound data, like graphics and video data, has its own special requirements when it is being read, written, interpreted, and compressed. Before looking at how sound is stored in a multimedia format, we must look at how sound itself is stored as digital data.

All of the sounds that we hear occur in the form of analog signals. An analog audio recording system, such as a conventional tape recorder, captures the entire sound wave form and stores it in analog format on a medium such as magnetic tape.


Because computers are digital, and not analog, devices, it is necessary to store sound information in a digitized format that computers can readily use. A digital audio recording system does not record the entire wave form as analog systems do (the exception being Digital Audio Tape [DAT] systems). Instead, a digital recorder captures a wave form at specific intervals, called the sampling rate. Each captured wave-form snapshot is converted to a binary integer value and is then stored on magnetic tape or disk.

Storing audio as digital samples is known as Pulse Code Modulation (PCM). PCM is a simple quantizing or digitizing (audio-to-digital conversion) algorithm, which linearly converts all analog signals to digital samples. This process is commonly used on all audio CD-ROMs. Differential Pulse Code Modulation (DPCM) is an audio encoding scheme that quantizes the difference between samples rather than the samples themselves. Because the differences are easily represented by values smaller than those of the samples themselves, fewer bits may be used to encode the same sound (for example, the difference between two 16-bit samples may be only four bits in size). For this reason, DPCM is also considered an audio compression scheme.

One other audio compression scheme, which uses difference quantization, is Adaptive Differential Pulse Code Modulation (ADPCM). DPCM is a non-adaptive algorithm; that is, it does not change the way it encodes data based on the content of the data. DPCM uses the same number of bits to represent every signal level. ADPCM, however, is an adaptive algorithm and changes its encoding scheme based on the data it is encoding. ADPCM specifically adapts by using fewer bits to represent lower-level signals than it does to represent higher-level signals. Many of the most commonly used audio compression schemes are based on ADPCM.
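Below is a minimal sketch of non-adaptive DPCM encoding and decoding for 16-bit samples: the encoder stores the difference from the previous sample, and the decoder accumulates the differences to recover the signal. A real ADPCM codec would additionally adapt its step size to the signal; nothing here is taken from any particular audio format.

    #include <stddef.h>

    /* Encode: each output value is the difference from the last sample. */
    void dpcm_encode(const short *samples, short *deltas, size_t n)
    {
        short prev = 0;
        for (size_t i = 0; i < n; i++) {
            deltas[i] = (short)(samples[i] - prev);
            prev = samples[i];
        }
    }

    /* Decode: accumulate the differences to recover the samples. */
    void dpcm_decode(const short *deltas, short *samples, size_t n)
    {
        short prev = 0;
        for (size_t i = 0; i < n; i++) {
            prev = (short)(prev + deltas[i]);
            samples[i] = prev;
        }
    }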

Digital audio data is simply a binary representation of a sound. This data can be written to a binary file using an audio file format for permanent storage, much in the same way bitmap data is preserved in an image file format. The data can be read by a software application, can be sent as data to a hardware device, and can even be pressed into a plastic medium and stored as a CD-ROM.

The quality of an audio sample is determined by comparing it to the original sound from which it was sampled. The more closely the sample matches the original sound, the higher the quality of the sample. This is similar to comparing an image to the original document or photograph from which it was scanned.


The quality of audio data is determined by three parameters:

• Sample resolution
• Sampling rate
• Number of audio channels sampled

The sample resolution is determined by the number of bits per sample. The larger the sampling size, the higher the quality of the sample. Just as the apparent quality (resolution) of an image is reduced by storing fewer bits of data per pixel, so is the quality of a digital audio recording reduced by storing fewer bits per sample. Typical sampling sizes are eight bits and 16 bits.

The sampling rate is the number of times per second the analog wave form was read to collect data. The higher the sampling rate, the greater the quality of the audio. A high sampling rate collects more data per second than a lower sampling rate, therefore requiring more memory and disk space to store. Common sampling rates are 44.100 kHz (higher quality), 22.254 kHz (medium quality), and 11.025 kHz (lower quality). Sampling rates are usually measured in the signal-processing terms hertz (Hz) or kilohertz (kHz), but the term samples per second (samples/second) is more appropriate for this type of measurement.

A sound source may be sampled using one channel (monaural sampling) or two channels (stereo sampling). Two-channel sampling provides greater quality than mono sampling and, as you might have guessed, produces twice as much data by doubling the number of samples captured. Sampling one channel for one second at 11,000 samples/second produces 11,000 samples. Sampling two channels at the same rate, however, produces 22,000 samples/second.

The amount of binary data produced by sampling even a few seconds of audio is quite large. Ten seconds of data sampled at the lowest quality (one channel, 8-bit sample resolution, 11.025 Ksamples/second sampling rate) produces about 108K of data (88.2 Kbits/second). Adding a second channel doubles the amount of data to produce nearly a 215K file (176 Kbits/second). If we increase the sample resolution to 16 bits, the size of the data doubles again to 430K (352 Kbits/second). If we now increase the sampling rate to 22.05 Ksamples/second, the amount of data produced doubles again to 860K (705.6 Kbits/second). At the highest quality (two channels, 16-bit sample resolution, 44.1 Ksamples/second sampling rate), our 10 seconds of audio now requires 1.72 megabytes (1411.2 Kbits/second) of disk space to store.

Consider how little information can really be stored in 10 seconds of sound. The typical musical song is at least three minutes in length. Music videos are from five to 15 minutes in length. A typical television program is 30 to 60 minutes in length.


Movie videos can be three hours or more in length. We're talking a lot of disk space here.

One solution to the massive storage requirements of high-quality audio data is data compression. For example, the CD-DA (Compact Disc-Digital Audio) standard performs mono or stereo sampling using a sample resolution of 16 bits and a sampling rate of 44.1 Ksamples/second, making it a very high-quality format for both music and language applications. Storing five minutes of CD-DA information requires approximately 25 megabytes of disk space, only half the amount of space that would be required if the audio data were uncompressed.

Audio data, in common with most binary data, contains a fair amount of redundancy that can be removed with data compression. Conventional compression methods used in many archiving programs (zoo and pkzip, for example) and image file formats don't do a very good job of compressing audio data (typically 10 percent to 20 percent). This is because audio data is organized very differently from either the ASCII or binary data normally handled by these types of algorithms.

Audio compression algorithms, like image compression algorithms, can be categorized as lossy and lossless. Lossless compression methods do not discard any data; the decompression step produces exactly the same data as was read by the compression step. A simple form of lossless audio compression is to Huffman-encode the differences between each successive 8-bit sample. Huffman encoding is a lossless compression algorithm, and, therefore, the audio data is preserved in its entirety.

Lossy compression schemes discard data based on the perceptions of the psychoacoustic system of the human brain. Parts of sounds that the ear cannot hear, or the brain does not care about, can be discarded as useless data. An algorithm must be careful when discarding audio data, however. The eye is very forgiving about dropping a video frame here or reducing the number of colors there, but the ear notices even slight changes in sounds, especially when it has been specifically trained to recognize aural infidelities and discrepancies. And the higher the quality of an audio sample, the more data will be required to store it. As with lossy image compression schemes, at times you'll need to make a subjective decision between quality and data size.

There is currently no "audio file interchange format" that is widely used in the computer-audio industry. Such a format would allow a wide variety of audio data to be easily written, read, and transported between different hardware platforms and operating systems. Most existing audio file formats, however, are very machine-specific and do not lend themselves to interchange very well. Several multimedia formats are capable of encapsulating a wide variety of audio formats, but do not describe any new audio data format in themselves.

Many audio file formats have headers just as image files do. Their header information includes parameters particular to audio data, including sample rate, number of channels, sample resolution, type of compression, and so on. An identification field ("magic" number) is also included in several audio file format headers.

Several formats contain only raw audio data and no file header. Any parameters these formats use are fixed in value and therefore would be redundant to store in a file header. Stream-oriented formats contain packets (chunks) of information embedded at strategic points within the raw audio data itself. Such formats are very platform-dependent and would require an audio file format reader or converter to have prior knowledge of just what these parameter values are.

Most audio file formats may be identified by their file types or extensions. Some common sound file formats are:

.AU     Sun Microsystems
.SND    NeXT
HCOM    Apple Macintosh
.VOC    SoundBlaster
.WAV    Microsoft Waveform
AIFF    Apple/SGI
8SVX    Amiga

A multimedia format may choose to either define its own internal audio data format or simply encapsulate an existing audio file format. Microsoft Waveform files are RIFF files with a single Waveform audio file component, while Apple QuickTime files contain their own audio data structures unique to QuickTime files.

MIDI Standard

Musical Instrument Digital Interface (MIDI) is an industry standard for representing sound in a binary format. MIDI is not an audio format, however. It does not store actual digitally sampled sounds. Instead, MIDI stores a description of sounds, in much the same way that a vector image format stores a description of an image and not the image data itself.

Sound in MIDI data is stored as a series of control messages. Each message describes a sound event using terms such as pitch, duration, and volume. When these control messages are sent to a MIDI-compatible device (the MIDI standard also defines the interconnecting hardware used by MIDI devices and the communications protocol used to interchange the control information), the information in the message is interpreted and reproduced by the device.

MIDI data may be compressed, just like any other binary data, and does not require special compression algorithms in the way that audio data does.

For Further Information

Information about multimedia products from Microsoft may be obtained from the following address:

Microsoft Corporation
Multimedia Systems Group, Product Marketing
One Microsoft Way
Redmond, WA 98052-6399

The following documents, many of which are included in the Microsoft Multimedia Development Kit (MDK), contain information on multimedia applications and file formats:

Microsoft Windows Multimedia Development Kit (MDK) 1.0 Programmer's Reference
Microsoft Windows 3.1 Software Development Kit (SDK) Multimedia Programmer's Reference
Microsoft Windows Multimedia Programmer's Guide
Microsoft Windows Multimedia Programmer's Reference
Multimedia Developer Registration Kit (MDRK)
Multimedia Programming Interface and Data Specification 1.0, August 1991
Microsoft Multimedia Standards Update, March 13, 1993, 2.0.0


A great deal of useful information about multimedia files and applications may be found at the following FTP site:

ftp.microsoft.com (in the developer/drg/Multimedia directory)

The specification for MIDI may be obtained from:

International MIDI Association (IMA)
5316 West 57th Street
Los Angeles, CA 90056
Voice: 213-649-6434

Refer to the articles on Microsoft RIFF, Intel DVI, MPEG, and QuickTime in Part II of this book for specific information on multimedia file formats.


Part Two

Graphics File Formats

Adobe Photoshop

Name: Adobe Photoshop
Also Known As: Photoshop 2.5
Type: Bitmap
Colors: See article
Compression: Uncompressed, RLE
Maximum Image Size: 30,000x30,000
Multiple Images Per File: No
Numerical Format: Big-endian
Originator: Adobe
Platform: Macintosh, MS Windows
Supporting Applications: Photoshop, others
Specification on CD: Yes
Code on CD: No
Images on CD: No
See Also: None
Usage: Storage and interchange of files for use with Adobe's Photoshop application.
Comments: A simple yet well-thought-out bitmap format notable for the large number of colors supported.

Overview

The Adobe Photoshop format was created to support Adobe's Photoshop application, which is currently in revision 2.5. Photoshop is a highly regarded professional application with tools for bitmap editing, color, and image manipulation. It is widely distributed and used primarily in the commercial sector. There are versions that run under both the Apple Macintosh and Microsoft Windows.

A previous version of the format, version 2.0, suffered from several shortcomings, including a lack of compression and dependence on the Macintosh file system. These problems have been eliminated in version 2.5.


File Organization

Files in the Photoshop format consist of the following:

• A fixed-length header, which may include color information
• A variable-length mode block
• A variable-length image resources block
• A reserved block
• A word indicating the compression type
• The image data

On the Macintosh platform, files are identified by the filetype code 8BPS, which is contained in the signature string of the header. Under MS Windows, on the IBM PC platform, files are saved with the extension .PSD.

File Details

The Adobe Photoshop header has the following format:

typedef struct _PSHOP_HEADER
{
    CHAR  Signature[4];  /* Signature string */
    WORD  Version;       /* Version number */
    CHAR  Reserved[6];   /* Reserved data */
    WORD  Channels;      /* Number of color channels */
    DWORD Rows;          /* Number of rows in image */
    DWORD Columns;       /* Number of columns in image */
    WORD  Depth;         /* Bits per channel */
    WORD  Mode;          /* Color mode */
} PSHOP_HEADER;

Signature contains the four bytes 8BPS, which also identifies the filetype on the Macintosh system. Version is currently always 1. Adobe advises rejecting the image if the version number is anything but 1. Reserved is six bytes of reserved data following the version number; these should be filled with zeros.

Channels is the number of color channels in the image. Between one and 16 color channels are supported in Photoshop 2.5.


Rows and Columns indicate the size of the image.

Depth supplies the number of bits per color channel, which currently may be either 1 or 8.

Mode is the color mode of the file, which provides information about how the image data should be interpreted. Possible values are:

0  Bitmap
1  Gray scale
2  Palette color
3  RGB color
4  CMYK color
7  Multi-channel color
8  Duotone
9  Lab color

The above header is followed by the mode data. This consists of a 4-byte length field and subsequent data. If the mode field indicates that palette data is contained in the file, the length is 768 bytes. If the mode field indicates that duotone data is contained in the file, the mode data contains undocumented duotone specification information, and the uninitiated reader may treat the duotone data as gray-scale information. If the mode field indicates any other type of data (not palette or duotone), the length field is zero, and mode data is not present.

Following the mode data is another variable-length image resources block. This consists of a 4-byte length field, followed by a Photoshop 2.5 image resources block, which contains information on the image's resolution and data relevant to the application. For more information about the Photoshop 2.5 image resources block, contact Adobe.

Following the image resources block is a reserved block, consisting of a 4-byte length and data. Currently this is reserved for future use, and the length is zero.

The reserved block is followed by two bytes of compression data, for which two values are currently defined:

0  Uncompressed
1  Run-length encoded (RLE)

The RLE algorithm used is the same as that used by the Macintosh PackBits system routine and in the TIFF file format.
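A minimal sketch of this PackBits-style RLE decompression, assuming hypothetical GetByte() and PutByte() helpers for byte-at-a-time input and output:

    extern int  GetByte(void);       /* read the next byte of compressed data */
    extern void PutByte(int value);  /* write one byte of decompressed output */

    void UnpackBits(long unpacked_size)
    {
        long written = 0;

        while (written < unpacked_size)
        {
            signed char n = (signed char) GetByte();

            if (n >= 0)                    /* copy the next n + 1 bytes literally */
            {
                int i;
                for (i = 0; i <= n; i++, written++)
                    PutByte(GetByte());
            }
            else if (n != -128)            /* repeat the next byte 1 - n times */
            {
                int value = GetByte();
                int i;
                for (i = 0; i < 1 - n; i++, written++)
                    PutByte(value);
            }
            /* n == -128 is a no-op and is simply skipped */
        }
    }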


The image data follows and is planar; each plane is in scanline order. There are no pad bytes. If the data is RLE compressed, the image data is preceded by the byte counts for all the scan lines. Each count is stored as a 2-byte value. This is followed by the RLE-compressed data. Each line is compressed separately.

For Further Information

For further information about the Adobe Photoshop format, see the specification included on the CD-ROM that accompanies this book. You can also contact Adobe Systems at:

Adobe Systems Inc.
Attn: Adobe Systems Developer Support
1585 Charleston Rd.
P.O. Box 7900
Mountain View, CA 94039-7900
Voice: 415-961-4400
Voice: 800-344-8335
FAX: 415-961-3769


Atari ST Graphics Formats

Name: Atari ST Graphics Formats
Also Known As: Atari ST graphics file formats (.ANI, .CE1, .CE2, .CE3, .FLM, .IC1, .IC2, .IC3, .NEO, .PAC, .PC1, .PC2, .PC3, .PI1, .PI2, .PI3, .RGB, .SEQ, .TNY, .TN1, .TN2, .TN3)
Type: Bitmap and animation
Colors: Typically 16
Compression: None and RLE
Maximum Image Size: Typically 320x200 pixels
Multiple Images Per File: Yes (animation formats only)
Numerical Format: Big-endian
Originator: Various Atari ST software developers
Platform: Atari ST
Supporting Applications: Many
Specification on CD: Yes (third-party description)
Code on CD: No
Images on CD: No
See Also: IFF
Usage: All of these formats are used by paint and animation packages found on the Atari ST.
Comments: The Atari ST, with its superior graphics capabilities, was a natural platform for development of multimedia, so much of the multimedia developments of today are based on these formats.

Overview

The Atari ST computer is the home of many sparsely documented image file formats. Many of these formats are used specifically for storing animation images and dumps of images displayed on the screen. Although the Electronic Arts IFF format is used by most Atari ST paint and animation programs, many software developers have devised their own special-purpose formats to fill their needs.


File Organization and Details

This section contains a brief description of each of the Atari ST file formats; each format has its own file extension.

Animatic Film Format (.FLM)

The Animatic Film file format (file extension .FLM) stores a sequence of low-resolution 16-color images, which are displayed as an animation. Files in the .FLM format are stored as a header followed by one or more frames of image data. The header is 26 bytes in length and has the following format:

typedef struct _AnimaticFilmHeader
{
    WORD NumberOfFrames;      /* Number of frames in the animation */
    WORD Palette[16];         /* Color palette */
    WORD FilmSpeed;           /* Speed of playback */
    WORD PlayDirection;       /* Direction of play */
    WORD EndAction;           /* Action to take after last frame */
    WORD FrameWidth;          /* Width of frame in pixels */
    WORD FrameHeight;         /* Height of frame in pixels */
    WORD MajorVersionNumber;  /* Animatic major version number */
    WORD MinorVersionNumber;  /* Animatic minor version number */
    LONG MagicNumber;         /* ID number (always 27182818h) */
    LONG Reserved[3];         /* Unused (all zeros) */
} ANIMATICFILMHEADER;

NumberOfFrames specifies the total number of frames in the animation.

Palette is the color palette for the animation, stored as an array of 16 WORD values.

FilmSpeed is the number of delay (vblank) frames to display between each animation frame. The value of this field may be in the range 0 to 99.

PlayDirection is the direction the animation is played. Values for this field are 00h for forwards and 01h for backwards.

EndAction specifies the action to take when the last frame of the animation is reached during playback. A value of 00h indicates that the player should pause and then repeat the animation from the beginning. A value of 01h indicates that the animation should immediately repeat from the beginning (loop). A value of 03h indicates that playback should repeat in the reverse direction.

FrameWidth and FrameHeight are the size of the animation frames in pixels.


MajorVersionNumber and MinorVersionNumber contain the version number of the Animatic software that created the animation.

MagicNumber contains an identification value for Animatic Film files. This value is always 27182818h.

Reserved is 12 bytes of space reserved for future header fields. All bytes in this field have the value 00h.

ComputerEyes Raw Data Format (.CE1 and .CE2)

The ComputerEyes Raw Data Format is found in a low-resolution (file extension .CE1) and a medium-resolution (file extension .CE2) format. The header is 10 bytes in length and has the following format:

typedef struct _ComputerEyesHeader
{
    LONG Id;           /* Identification value (always 45594553h) */
    WORD Resolution;   /* Image data resolution */
    BYTE Reserved[8];  /* Miscellaneous data */
} COMPUTEREYESHEAD;

Id is a value used to identify a file as containing ComputerEyes-format data. The value of this field is always 45594553h, or "EYES" as an ASCII string.

Resolution is the resolution of the image data stored in the file. This value is 00h for low-resolution data and 01h for high-resolution data.

Reserved is eight bytes of additional information, which is not needed for decoding the image data.

If the Resolution field value is 00h (low resolution), then the image data will be divided into three 320x200 RGB planes. Each plane is 64,000 bytes in size and is stored in red, green, blue order. The image data stores one pixel per byte, and only the lower six bits of each byte are used. Low-resolution image data is stored vertically, so rows of data are read along the Y-axis and not along the X-axis as in most other formats.

If the Resolution field value is 01h (high resolution), then the image data is stored in a single 640x480 plane, which is always 256,000 bytes in size. The image data stores one pixel per WORD, with the red value stored in bits 0 through 4, green in bits 5 through 9, and blue in bits 10 through 14. Bit 15 is not used. High-resolution image data is also stored along the vertical, rather than the horizontal, axis of the bitmap.
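A minimal sketch of extracting the three color channels from one high-resolution pixel WORD, following the bit layout just described (the function name is our own):

    void SplitPixel(unsigned short pixel,
                    unsigned int *red, unsigned int *green, unsigned int *blue)
    {
        *red   =  pixel        & 0x1F;   /* bits 0 through 4   */
        *green = (pixel >> 5)  & 0x1F;   /* bits 5 through 9   */
        *blue  = (pixel >> 10) & 0x1F;   /* bits 10 through 14; bit 15 unused */
    }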


Cyber Paint Sequence Format (.SEQ)

The Cyber Paint Sequence file format (file extension .SEQ) is used for storing sequences of 16-color low-resolution images used in animations. Cyber Paint also supports an efficient form of delta-encoded data compression.

typedef struct _CyberPaintHeader
{
    WORD MagicNumber;                   /* Identification number */
    WORD VersionNumber;                 /* Version number */
    LONG NumberOfFrames;                /* Total number of frames */
    WORD DisplayRate;                   /* Display speed */
    BYTE Reserved[118];                 /* Unused */
    LONG FrameOffsets[NumberOfFrames];  /* Array of frame offsets */
} CYBERPAINTHEAD;

MagicNumber is an identification number used to indicate that the file contains Cyber Paint Sequence data. This value is typically FEDBh or FEDCh.

VersionNumber is the version number of the format.

NumberOfFrames specifies the number of data frames stored.

DisplayRate is the number of delay (vblank) frames to display between each animation frame.

Reserved is a 118-byte field reserved for future header fields. This field is set to a value of 00h.

FrameOffsets is an array of LONG offset values with a number of elements equal to the value stored in the NumberOfFrames field. Each offset value indicates the starting position of each frame, calculated from the beginning of the file.

Each frame contains a header of descriptive information in the following format:

typedef struct _CyberPaintFrame
{
    WORD Type;           /* Frame type */
    WORD Resolution;     /* Frame resolution */
    WORD Palette[16];    /* Color palette */
    BYTE FileName[12];   /* Name of frame data file */
    WORD Limits;         /* Color animation limits */
    WORD Speed;          /* Color animation speed and direction */
    WORD NumberOfSteps;  /* Number of color steps */
    WORD XOffset;        /* Left position of frame on display */
    WORD YOffset;        /* Top position of frame on display */
    WORD FrameWidth;     /* Width of the frame in pixels */
    WORD FrameHeight;    /* Height of the frame in pixels */
    BYTE Operation;      /* Graphics operation */
    BYTE Compression;    /* Data storage method */
    LONG DataSize;       /* Length of the frame data in bytes */
    BYTE Reserved[60];   /* Unused */
} CYBERPAINTFRAME;

Type is an identification value identifying the header as belonging to a frame.

Resolution is the resolution of the frame data and is usually 00h.

Palette is an array of values for the 16-color palette for this frame.

FileName stores the name of the disk file in which the frame data is stored.

The default string stored in this field is " . ", which indicates no filename.

Limits is the color animation limits of the frame.

Speed specifies the speed and direction of the playback.

NumberOfSteps is the number of color steps in the image data.

XOffset is the left position of the frame on the display. This value may be in the range of 0 to 319.

YOffset is the top position of the frame on the display. This value may be in the range of 0 to 199.

FrameWidth and FrameHeight are the size of the frame in pixels.

Operation is the graphics operation to perform on the frame data. A value of 00h indicates copy, and a value of 01h indicates an exclusive OR.

Compression indicates whether the frame data is compressed (a value of 01h) or uncompressed (a value of 00h).

DataSize is the actual size of the data (compressed or uncompressed) stored in the frame.

Reserved is a 60-byte field reserved for future header fields. All bytes in this field have the value 00h.

Frame data stored in a Sequence file is always 320x200 pixels. The frame data is stored as four bitplanes, with one pixel stored per WORD. Pixels are always stored along the vertical (Y) axis of the bitmap. Therefore, the first 200 bytes of frame data are the first pixels of the first bitplane of the frame, and so on.


Frame data may be compressed using a delta-encoding algorithm. Using this technique, only the changes between frames are actually encoded. Interframe data that does not change is not saved. The first frame in a sequence is always stored in its entirety. (You have to start someplace.) Each frame thereafter is compared to the previous frame, and only the X and Y coordinates of rectangular regions of pixel values (called change boxes) that have changed are saved. Only one change box is stored per frame.

Each change box may be stored in one of five different variations, always using the variation that yields the best compression for a particular change box. These variations are:

• Uncompressed Copy, where the frame data is uncompressed and is simply copied onto the screen at the coordinates specified in the XOffset and YOffset header fields.

• Uncompressed EOR, where the frame data is exclusive ORed with the data already at XOffset,YOffset.

• Compressed Copy, where the frame data is compressed and must be uncompressed before copying to the screen.

• Compressed EOR, where the frame data must be uncompressed before it is exclusive ORed with the screen data.

• Null Frame, which contains no data (height and width are 00h) and is treated as the previous frame.

Compressed data contains a sequence of control WORDs (16-bit signed WORDs) and data. A control WORD with a value between 1 and 32,767 indicates that the next WORD is to be repeated a number of times equal to the control WORD value. A control WORD with a negative value indicates that a run of data equal in length to the absolute value of the control WORD is to be read literally from the compressed data.
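A minimal sketch of decoding one such run, assuming hypothetical GetWord() and PutWord() helpers for WORD-at-a-time input and output (we treat the literal run as WORDs here):

    extern short GetWord(void);         /* read the next signed 16-bit WORD */
    extern void  PutWord(short value);  /* write one WORD of output */

    void DecodeRun(void)
    {
        short control = GetWord();
        long  i;

        if (control > 0)                /* repeat the next WORD 'control' times */
        {
            short value = GetWord();
            for (i = 0; i < control; i++)
                PutWord(value);
        }
        else                            /* copy |control| items of literal data */
        {
            for (i = 0; i < -(long) control; i++)
                PutWord(GetWord());
        }
    }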

DEGAS Format (.PI1, .PI2, .PI3, .PC1, .PC2, .PC3)

The DEGAS animation file format actually occurs in three different variations. The DEGAS and DEGAS Elite formats support low-, medium-, and high-resolution graphics data (files have the extensions .PI1, .PI2, and .PI3, respectively). The DEGAS Elite Compressed format supports low-, medium-, and high-resolution graphics data (files have the extensions .PC1, .PC2, and .PC3, respectively), and it also supports data compression.

The DEGAS format stores only a single image of the display. The header is 34 bytes long and is followed by 32,000 bytes of image data:

typedef struct _DegasHeader
{
    WORD Resolution;   /* Image resolution */
    WORD Palette[16];  /* Color palette */
} DEGASHEAD;

Resolution is the resolution of the image data. Valid values are:

00h  Low resolution
01h  Medium resolution
02h  High resolution

Palette is an array of 16 WORD values that holds the color palette for the image.

The DEGAS Elite format contains the same header and image data structure as the DEGAS format. It differs from the DEGAS format in that it has a 32-byte footer containing additional information:

typedef struct _DegasEliteFooter
{
    WORD LeftColor[4];   /* Left color animation limit table */
    WORD RightColor[4];  /* Right color animation limit table */
    WORD Direction[4];   /* Animation channel direction flag */
    WORD Delay[4];       /* Animation channel delay */
} DEGASELITEFOOT;

LeftColor stores the left color animation limit table containing the starting color numbers for the animation.

RightColor stores the right color animation limit table containing the ending color numbers for the animation.

Direction contains the animation channel direction bit-field flag. Valid values are:

00h  Left
01h  Off
02h  Right


Delay is the animation channel delay rate between frames. The delay is measured in increments of 1/60 of a second, and the value stored in this field is the delay subtracted from the constant 128.

The DEGAS Elite Compressed format contains the same header and footer as the DEGAS Elite format, with one variation in the header data. The Resolution field uses the following bit values to indicate the resolution of the image data:

8000h  Low resolution
8001h  Medium resolution
8002h  High resolution

The compression algorithm used is identical to the RLE scheme found in the Interchange file format (IFF); see the article on the Interchange format for details.

RGB Intermediate Format (.RGB)

The RGB Intermediate Format (file extension .RGB) is actually three low-resolution DEGAS .PI1 files concatenated into a single file. The pixel data in each plane contains an actual red, green, or blue color-channel value rather than an index into the 16-color palette. On the Atari ST, there are only three bits per color channel. The Atari ST with the extended color palette uses four bits per color channel. The structure of an entire RGB file is shown here:

struct _RgbFile
{
    WORD RedResolution;      /* Red plane resolution (ignored) */
    WORD RedPalette[16];     /* Red plane palette (ignored) */
    WORD RedPlane[16000];    /* Red plane data */
    WORD GreenResolution;    /* Green plane resolution (ignored) */
    WORD GreenPalette[16];   /* Green plane palette (ignored) */
    WORD GreenPlane[16000];  /* Green plane data */
    WORD BlueResolution;     /* Blue plane resolution (ignored) */
    WORD BluePalette[16];    /* Blue plane palette (ignored) */
    WORD BluePlane[16000];   /* Blue plane data */
};


Imagic Film/Picture Format (.IC1, .IC2, .IC3)

The Imagic Format stores low-, medium-, and high-resolution image data using the file extensions .IC1, .IC2, and .IC3, respectively. The header is 49 bytes long and is formatted as follows:

typedef struct _ImagicHeader
{
    BYTE Id[4];         /* File identification value */
    WORD Resolution;    /* Image resolution */
    WORD Palette[16];   /* Color palette */
    WORD Date;          /* Date stamp */
    WORD Time;          /* Time stamp */
    BYTE Name[8];       /* Base file name */
    WORD Length;        /* Length of data */
    LONG Registration;  /* Registration number */
    BYTE Reserved[8];   /* Unused */
    BYTE Compression;   /* Data compression flag */
} IMAGICHEAD;

Id is the identification value for this format and contains the characters IMDC.

Resolution specifies the resolution of the image data. Values are:

00h  Low resolution
01h  Medium resolution
02h  High resolution

Palette is an array of 16 elements storing the color palette for this image.

Date and Time contain a date and time stamp indicating when the file was created. These stamps are in GEMDOS (Atari native operating system) format.

Name is the base filename of the image file.

Length is the length of the image data stored in the file.

Registration is the registration number of the Imagic application program which created the image file.

Reserved is an 8-byte field which is unused and set to a value of 00h.

Compression indicates whether the image data is compressed. A value of 00h indicates no compression, while a value of 01h indicates that the image data is compressed.


Image data may be run-length encoded (RLE) or delta compressed. Delta compression results in smaller animation files than RLE, although on complex images RLE works better.

NEOchrome Format (.NEO)

NEOchrome image files have the file extension .NEO and contain a 79-byte header followed by 16,000 bytes of image data. The format of the header is as follows:

typedef struct _NeochromeHeader
{
    WORD Flag;           /* Flag byte (always 00h) */
    WORD Resolution;     /* Image resolution */
    WORD Palette[16];    /* Color palette */
    CHAR FileName[12];   /* Name of image file */
    WORD Limits;         /* Color animation limits */
    WORD Speed;          /* Color animation speed and direction */
    WORD NumberOfSteps;  /* Number of color steps */
    WORD XOffset;        /* Image X offset (always 00h) */
    WORD YOffset;        /* Image Y offset (always 00h) */
    WORD Width;          /* Image width (always 320) */
    WORD Height;         /* Image height (always 200) */
    WORD Reserved[33];   /* Reserved (always 00h) */
} NEOCHROMEHEAD;

Flag is a collection of flag bits and is always set to a value of 00h.

Resolution specifies the resolution of the image data. Values are:

00h  Low resolution
01h  Medium resolution
02h  High resolution

Palette is the color palette for this image stored as an array of 16 WORD values.

FileName is the name of the image file. The default string for this field is " . ".

Limits specifies the color animation limits of the image. Bits 0 through 3 specify the value of the upper-right limit, and bits 4 through 7 specify the value of the lower-left limit. Bit 15 is set to 1 if the animation data is valid.

Speed specifies the color animation speed and direction. Bits 0 through 7 specify the speed of the playback in number of blank frames displayed per animation frame. Bit 15 indicates the direction of playback. A value of 0 indicates normal and a value of 1 indicates reversed.


NumberOfSteps is the number of frames in the animation.

XOffset and YOffset are the starting coordinates of the image on the display. These values are always 00h.

Width is the width of the image in pixels. This value is always 320.

Height is the height of the image in pixels. This value is always 200.

Reserved is a field of 33 bytes reserved for future header fields. All bytes in this field are set to 00h.

NEOchrome Animation Format (.ANI)

NEOchrome Animation files have the file extension .ANI and contain a header followed by a sequence of one or more frames of animation data stored in their playback order. The header is 22 bytes and is formatted as follows:

typedef struct _NeochromeAniHeader
{
    LONG MagicNumber;     /* ID value (always BABEEBEAh) */
    WORD Width;           /* Width of image in bytes */
    WORD Height;          /* Height of image in scan lines */
    WORD Size;            /* Size of image in bytes + 10 */
    WORD XCoord;          /* X coordinate of image */
    WORD YCoord;          /* Y coordinate of image - 1 */
    WORD NumberOfFrames;  /* Total number of frames */
    WORD Speed;           /* Animation playback speed */
    LONG Reserved;        /* Reserved (always 00h) */
} NEOCHROMEANIHEAD;

MagicNumber is the identification value for NEOchrome animation files. This value is always BABEEBEAh.

Width is the width of the animation in pixels. This value must always be divisible by 8.

Height is the height of the animation frames in pixels (scan lines).

Size is the total size of a frame in bytes, plus 10.

XCoord specifies the left position of the image in pixels, minus one. This value must be divisible by 16.

YCoord specifies the top position of the image in pixels, minus one.

NumberOfFrames specifies the number of image frames in the animation.


Speed specifies the playback speed of the animation in number of blank frames displayed per image frame.

Reserved is an unused field, which is set to 00h.

STAD Format (.PAC)

The STAD image file format has the file extension .PAC. It contains a header followed by a single block of RLE-compressed image data. The header is seven bytes in size and contains only information necessary to decompress the image data. The format of the header is as follows:

typedef struct _StadHeader
{
    CHAR Packed[4];    /* Packing orientation of image data */
    BYTE IdByte;       /* RLE ID value of a 'PackByte' run */
    BYTE PackByte;     /* The value of a 'PackByte' run */
    BYTE SpecialByte;  /* RLE ID value of a non-'PackByte' run */
} STADHEADER;

Packed contains the characters pM86 if the image data in the file is vertically packed, or pM85 if it is horizontally packed.

IdByte is a value used to indicate an RLE byte run that uses the PackByte value.

PackByte is the most frequently occurring byte value in the image data.

SpecialByte is a value used to indicate an RLE byte run using a value stored in the image data.

The image data in a STAD file is always compressed using a simple RLE algorithm. STAD is unusual in that it allows the option of packing image data either horizontally along the bitmap (with the scan lines) or vertically down the bitmap (across the scan lines). The direction of the encoding is specified in the Packed field in the header. The most frequently occurring byte value in the image data is stored in the PackByte field of the header. This reduces the size of the compressed data by not requiring this value to be redundantly stored in the compressed image data itself.

The STAD RLE algorithm uses three types of packets:


• The first type of packet is two bytes in length and contains an ID value and a run-count value. If the ID matches the value stored in the IdByte field of the header, then the value in the PackByte header field is repeated "run count + 1" times. This packet is used only to encode byte runs of the value stored in the PackByte header field.

• The second type of packet is three bytes in length and is used to store a run of a value other than that in the PackByte field. The first byte is an ID matching the SpecialByte field in the header. The second byte is the value of the run, and the third byte is the number of bytes in the run.

• The third type of packet is a single literal byte. If an ID byte is read, and it does not match either the IdByte or SpecialByte value, then this byte is simply written literally to the output.

Following is a simple, pseudo-code description of the RLE decoding process:

Read a byte
If the byte is the IdByte value
    Read a byte (the RunCount)
    Repeat the PackByte value RunCount + 1 times
else if the byte is the SpecialByte value
    Read a byte (the RunValue)
    Read a byte (the RunCount)
    Repeat the RunValue RunCount times
else
    Use the byte value literally

Tiny Format (.TNY, .TN1, .TN2, .TN3)

The Tiny format (.TNY) is similar to the NEOchrome formats. Tiny files may contain low- (.TN1), medium- (.TN2), or high-resolution (.TN3) image data. Tiny files may have one of two different header formats. The most common is 37 bytes in length and is formatted as follows:

typedef struct _TinyHeader
{
    BYTE Resolution;    /* Resolution of the image data */
    WORD Palette[16];   /* Color palette */
    WORD ControlBytes;  /* Number of control bytes */
    WORD DataWords;     /* Number of data words */
} TINYHEAD;


Resolution specifies the resolution of the image data. Values are:

00h  Low resolution
01h  Medium resolution
02h  High resolution

Palette is the 16-color palette of the image data.

ControlBytes is the number of control bytes found in the image data. This value is in the range of 3 to 10,667.

DataWords is the number of WORDs of image data stored in the file. This value is in the range of 1 to 16,000.

If the value of the Resolution field is 03h or greater, the Tiny header has the following format:

typedef struct _TinyHeader
{
    BYTE Resolution;    /* Resolution of the image data */
    BYTE Limits;        /* Color animation limits */
    BYTE Speed;         /* Speed and direction of playback */
    WORD Duration;      /* Color rotation duration */
    WORD Palette[16];   /* Color palette */
    WORD ControlBytes;  /* Number of control bytes */
    WORD DataWords;     /* Number of data words */
} TINYHEAD;

Limits specifies the left and right color animation limits. Bits 0 through 3 store the right (end) limit value, and bits 4 through 7 store the left (start) limit value.

Speed specifies the speed and direction of the animation playback. A negative value indicates left playback, and a positive value indicates right playback. The absolute value is the speed (delay between frames) in increments of 1/60 of a second.

Duration specifies the color rotation duration (number of iterations).

ControlBytes specifies the size of a BYTE array that follows the header. This array contains control values used to specify how the Tiny image data is to be uncompressed.

DataWords specifies the number of data words in the image data. The run-length encoded image data follows the control-value array.

Tiny image data is uncompressed by reading a control value and then, based on the control value, performing an action on the encoded data. If a control value is negative, then the absolute value of the control value indicates the number of WORDs to read literally from the compressed data. If a control value is equal to zero, another control value is read, and its value (128 to 32767) is used to specify the number of times to repeat the next data WORD. If a control value is equal to one, another control value is read, and its value (128 to 32767) is used to specify the number of literal WORDs to read from the data section. A control value greater than one specifies the number of times to repeat the next WORD read from the data section (two to 127). The uncompressed image data is stored along its Y-axis in a fashion identical to many other Atari image file formats.
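A minimal sketch of this control-value logic, assuming hypothetical GetControl(), GetDataWord(), and PutWord() helpers:

    extern short GetControl(void);     /* read the next control value */
    extern short GetDataWord(void);    /* read the next WORD from the data section */
    extern void  PutWord(short value); /* write one WORD of output */

    void DecodeTinyRun(void)
    {
        short control = GetControl();
        long  count, i;

        if (control < 0)               /* copy |control| literal data WORDs */
        {
            for (i = 0; i < -(long) control; i++)
                PutWord(GetDataWord());
        }
        else if (control == 0)         /* long replicate run (128 to 32767) */
        {
            short value;
            count = GetControl();
            value = GetDataWord();
            for (i = 0; i < count; i++)
                PutWord(value);
        }
        else if (control == 1)         /* long literal run (128 to 32767) */
        {
            count = GetControl();
            for (i = 0; i < count; i++)
                PutWord(GetDataWord());
        }
        else                           /* short replicate run (2 to 127) */
        {
            short value = GetDataWord();
            for (i = 0; i < control; i++)
                PutWord(value);
        }
    }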

For Further Information

Although we have not been able to obtain an official specification document from Atari, the article included on the CD that accompanies this book contains detailed information about the Atari ST graphics file formats. See the following:

Baggett, David M., Atari ST Picture Formats.

The author has also kindly agreed to be a resource for information about the Atari ST format files. Contact:

David M. Baggett
[email protected]


AutoCAD DXF

Name: AutoCAD DXF
Also Known As: AutoCAD Drawing Exchange Format, DXF, .DXB, .SLD, .ADI
Type: Vector
Colors: 256
Compression: None
Maximum Image Size: NA
Multiple Images Per File: No
Numerical Format: Multiple
Originator: Autodesk
Platform: MS-DOS
Supporting Applications: AutoCAD, many CAD programs, Corel Draw, others
Specification on CD: Yes
Code on CD: No
Images on CD: No
See Also: None
Usage: Storage and exchange of CAD and vector information.
Comments: A difficult format, mainly because it is controlled by a single vendor, but one that is in wide use, both for the exchange of CAD information and simple line-oriented vector information. It has two binary variants, one with the same file extension as the non-binary version and one tailored to the originating application (AutoCAD). Data is stored as 7-bit text.

Overview

The AutoCAD DXF (Drawing eXchange Format) and the AutoCAD DXB (Drawing eXchange Binary) formats are associated with the CAD application AutoCAD, created and maintained by Autodesk. DXB is a binary version of a DXF file used for faster loading and is apparently tailored for use by AutoCAD. The DXB format first appeared in AutoCAD in Release 10. Other file formats associated with AutoCAD are the slide (.SLD) and plot (.ADI) formats.

DXF supports many features not found in most other vector formats, including the ability to store three-dimensional objects and to handle associative dimensioning.

One curiosity of the format is that each object in a DXF file can be assigned a color value, but because the file contains no palette information, there is no way to associate the color values with any external referent. Because DXF was created in support of a CAD program, there is good support for included text.

Although DXF is widely used as a least-common-denominator format for the exchange of simple line data, an application designer wishing to support DXF must consider the fact that AutoCAD can store curves and 3-D objects as well as 2-D. Although there is a provision in AutoCAD (and other programs) for writing only the simplest vector data, providing full support for DXF is a huge task. Many application designers ultimately decide to write, but not read, DXF files, preferring to leave the hard part to others.

Because the DXF format is quite complex and is fully and concisely documented in about 40 pages, we will simply outline some of the information necessary to understand the file. Please refer to the original specification document on the CD-ROM included with this book.

File Organization

Each DXF file consists of five sections: a header; tables, blocks, and entities sections; and an end-of-file marker.

• The header contains zero or more groups of header variables that contain information applying to the entire image.
• The tables section contains organized data that is referenced by other sections of data. Table data may include information on font sizes and styles, line type descriptions, and layer information.
• The blocks section describes each block of information that is found in the image.
• The entities section contains the actual object data of the image.
• The end-of-file marker is the string EOF; it appears as the last line in the file.

Each section contains zero or more groups of data; there is no minimum set of groups that is required to appear in any section. Each group occupies two lines in the file in the following format:

GROUP CODE
GROUP VALUE


The first line is the group code, which identifies the type of data found within the group. It is a numeric field, three bytes, and is padded with spaces if it consists of less than three characters. Group codes are followed by a carriage return/linefeed pair. Some common group codes include:

0    The start of a section
1    A text value
2    A section name identifier
9    A header variable name identifier
10   X coordinate
20   Y coordinate
30   Z coordinate
62   Color value
999  Comment

The second line of a group is the group value. This is a label, similar to a variable name, which identifies how the value of a group is to be used.

There is no minimum set of header variables required to appear in all DXF headers. Of the more than 100 defined header variables, only the ones necessary to render the image are actually found in a DXF header. A DXF header with no header variables would appear as:

  0
SECTION
  2
HEADER
999
[header variables would appear here.]
  0
ENDSEC
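A sketch of reading ASCII DXF groups in this two-line form (the function name and buffer sizes are illustrative; this is not part of any Autodesk API):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    void ReadGroups(FILE *fp)
    {
        char code_line[16], value_line[256];

        while (fgets(code_line, sizeof(code_line), fp) != NULL &&
               fgets(value_line, sizeof(value_line), fp) != NULL)
        {
            int code = atoi(code_line);                   /* 3-byte numeric field */
            value_line[strcspn(value_line, "\r\n")] = '\0';

            if (code == 0 && strcmp(value_line, "EOF") == 0)
                break;                                    /* end-of-file marker */

            printf("group code %3d, value \"%s\"\n", code, value_line);
        }
    }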

Normally, a DXF file contains only 7-bit ASCII characters, but a binary version of the format exists, which also has the extension .DXF. Binary DXF files are smaller (by 20-30 percent), load faster into AutoCAD, and preserve floating-point data more accurately than the ASCII DXF format. Release 10 was the first version of AutoCAD to support binary-format DXF files.

File Details

Binary DXF files always begin with a specific 22-byte identification string:

AutoCAD Binary DXF

The format is the same as the ASCII DXF file, but all numeric values are re-encoded in binary form. All group codes are 2-byte little-endian words; all floating-point numbers are 8-byte double-precision words; and all ASCII strings are NULL-terminated. In addition, there are no comments in binary DXF files.

The Drawing Interchange Binary (DXB) format is a subset of the DXF format that is also binary encoded. The DXB files are much smaller than binary DXF format files and load very quickly. DXB format also supports fewer entities than the DXF format.

A DXB file can be distinguished from a binary DXF file by the file extension .DXB, and by the fact that it always begins with a 19-byte identification string:

AutoCAD DXB 1.0

For Further Information

For further information about the AutoCAD DXF format, see the DXF specification included on the CD that accompanies this book. The AutoCAD Manual Release 12 also contains complete information on the DXF format; see:

AutoCAD Customization Manual, Release 12, Autodesk Inc., 1992, pp. 241-281.

Autodesk has also released an electronic document describing the DXF format, which may be found on many online services and BBSs. Many books on AutoCAD have been published, and several include in-depth information on the DXF format, including the following:

Jones, Frederick H. and Martin, Lloyd. The AutoCAD Database Book, Ventana Press, Chapel Hill, NC, 1989.

Johnson, N. AutoCAD, The Complete Reference, second edition, McGraw-Hill, New York, NY, 1991.

For additional information about this format, you may also contact:

Autodesk, Inc.
Attn: Neele Johnston
Autodesk Developer Marketing
2320 Marinship Way
Sausalito, CA 94965
Voice: 415-491-8719
neele@autodesk.com


BDF

Overview

BDF (Bitmap Distribution Format) is used by the X Window System as a method of storing and exchanging font data with other systems. The current version of BDF is 2.1; it is part of X11 Release 5.

BDF is similar in concept to the PostScript Page Description Language. Both formats store data as printable ASCII characters, using lines of ASCII text that vary in length. Each line is terminated by an end-of-line character that may be a carriage return (ASCII 0Dh), a linefeed (ASCII 0Ah), or both.

Each BDF file stores information for exactly one typeface at one size and orientation (in other words, a font). A typeface is the name of the type style, such as Goudy, Courier, or Helvetica. A font is a variation in size, style, or orientation of a typeface, such as Goudy 10 Point, Courier Italic, or Helvetica Reversed. A glyph is a single character of a font, such as the letter "j". A BDF file therefore contains the data for one or more glyphs of a single font and typeface.

File Organization

A BDF file begins with information pertaining to the typeface as a whole, followed by the information on the font, and finally by the bitmapped glyph information itself. The information in a BDF file is stored in a series of records.

Each record begins with an uppercase keyword, followed by one or more fields called tokens:

KEYWORD ...

All records, keywords, and information fields contain only ASCII characters and are separated by spaces. Lines are terminated by a carriage return, a linefeed, or a carriage return/linefeed pair. More than one record may appear on a physical line.

File Details

Following are some of the more common records found in BDF files:

STARTFONT
ENDFONT

All BDF files begin with the STARTFONT record. This record contains a single information field indicating the version of the BDF format used in the file. The STARTFONT record contains all of the information within the BDF file and is terminated by the ENDFONT keyword as the last record in the file.

COMMENT

COMMENT records may be found anywhere between the STARTFONT and ENDFONT records. They usually contain human-readable text that is ignored by font-reader applications.

FONT

The FONT record specifies the name of the font contained within the BDF file. The name is specified using either the XLFD font name or a private font name. The name may contain spaces, and the line containing the FONT record must be terminated by an end-of-line character.

SIZE

SIZE specifies the size of the font in points and the resolution of the output device that is to support the font.

FONTBOUNDINGBOX

The FONTBOUNDINGBOX record stores the size and the position of the font's bounding box as an offset from the origin (the lower-left corner of the bitmap).

STARTPROPERTIES
ENDPROPERTIES

The STARTPROPERTIES record contains subrecords that define the characteristics of the font. The STARTPROPERTIES keyword is followed by the number of properties defined within this record. The subrecords specify information such as the name of the font's creator, the typeface of the font, kerning and other rendering information, and copyright notices. The ENDPROPERTIES record always terminates the STARTPROPERTIES record. Following the ENDPROPERTIES record is the actual font data.

Following are descriptions of some common record keywords that may be used to describe the font data:

CHARS

The CHARS record indicates the number of font character (glyph) segments stored in the file.

STARTCHAR
ENDCHAR

The STARTCHAR record contains subrecords that store each glyph's information and data. The STARTCHAR keyword is followed by the name of the glyph. This name can be up to 14 characters in length and may not contain any spaces. The subrecords specify the index number of the glyph, the scalable width, and the position of the character.

The BITMAP record contains the actual glyph data encoded as 4-digit hexadecimal values. All bitmapped lines are padded on the right with zeros out to the nearest byte boundary. All of the glyph information is contained between the STARTCHAR record and the terminating ENDCHAR record. There is one STARTCHAR/ENDCHAR section per glyph stored in the BDF file.

Refer to the BDF documentation included with the X11R5 distribution for more information about the BDF information records.
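As an illustration of how a BITMAP row expands into pixels, a small sketch that prints one row of hexadecimal glyph data as bits (the function name is our own; the most significant bit is the leftmost pixel):

    #include <stdio.h>
    #include <string.h>

    void PrintBitmapRow(const char *hex)     /* e.g., "0380" */
    {
        size_t i;

        for (i = 0; i < strlen(hex); i++)
        {
            unsigned int nibble;
            int bit;

            sscanf(&hex[i], "%1x", &nibble); /* one hexadecimal digit = 4 pixels */
            for (bit = 3; bit >= 0; bit--)
                putchar(((nibble >> bit) & 1) ? '#' : '.');
        }
        putchar('\n');
    }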


Following is an example of a BDF file containing the characters j and quoteright ('). Note that more than one record appears per physical line:

STARTFONT 2.1
COMMENT This is a sample font in 2.1 format.
FONT -Adobe-Helvetica-Bold-R-Normal-24-240-75-75-P-65-ISO8859-1
SIZE 24 75 75
FONTBOUNDINGBOX 9 24 -2 -6
STARTPROPERTIES 19
FOUNDRY "Adobe" FAMILY "Helvetica" WEIGHT_NAME "Bold" SLANT "R"
SETWIDTH_NAME "Normal" ADD_STYLE_NAME "" PIXEL_SIZE 24 POINT_SIZE 240
RESOLUTION_X 75 RESOLUTION_Y 75 SPACING "P" AVERAGE_WIDTH 65
CHARSET_REGISTRY "ISO8859" CHARSET_ENCODING "1" MIN_SPACE 4
FONT_ASCENT 21 FONT_DESCENT 7
COPYRIGHT "Copyright (c) 1987 Adobe Systems, Inc."
NOTICE "Helvetica is a registered trademark of Linotype Inc."
ENDPROPERTIES
CHARS 2
STARTCHAR j
ENCODING 106 SWIDTH 355 0 DWIDTH 8 0 BBX 9 22 -2 -6
BITMAP
0380 0380 0380 0380 0000 0700 0700 0700 0700 0E00 0E00 0E00 0E00 0E00
1C00 1C00 1C00 1C00 3C00 7800 F000 E000
ENDCHAR
STARTCHAR quoteright
ENCODING 39 SWIDTH 223 0 DWIDTH 5 0 BBX 4 6 2 12
ATTRIBUTES 01C0
BITMAP
70 70 70 60 E0 C0
ENDCHAR
ENDFONT

The following is the same BDF file with each of the records stored on separate lines and indented to illustrate the layering of BDF records and subrecords:

STARTFONT 2.1
  COMMENT This is a sample font in 2.1 format.
  FONT -Adobe-Helvetica-Bold-R-Normal-24-240-75-75-P-65-ISO8859-1
  SIZE 24 75 75
  FONTBOUNDINGBOX 9 24 -2 -6
  STARTPROPERTIES 19
    FOUNDRY "Adobe"
    FAMILY "Helvetica"
    WEIGHT_NAME "Bold"
    SLANT "R"
    SETWIDTH_NAME "Normal"
    ADD_STYLE_NAME ""
    PIXEL_SIZE 24
    POINT_SIZE 240
    RESOLUTION_X 75
    RESOLUTION_Y 75
    SPACING "P"
    AVERAGE_WIDTH 65
    CHARSET_REGISTRY "ISO8859"
    CHARSET_ENCODING "1"
    MIN_SPACE 4
    FONT_ASCENT 21
    FONT_DESCENT 7
    COPYRIGHT "Copyright (c) 1987 Adobe Systems, Inc."
    NOTICE "Helvetica is a registered trademark of Linotype Inc."
  ENDPROPERTIES
  CHARS 2
  STARTCHAR j
    ENCODING 106
    SWIDTH 355 0
    DWIDTH 8 0
    BBX 9 22 -2 -6
    BITMAP
      0380 0380 0380 0380 0000 0700 0700 0700 0700 0E00 0E00 0E00 0E00
      0E00 1C00 1C00 1C00 1C00 3C00 7800 F000 E000
  ENDCHAR
  STARTCHAR quoteright
    ENCODING 39
    SWIDTH 223 0
    DWIDTH 5 0
    BBX 4 6 2 12
    ATTRIBUTES 01C0
    BITMAP
      70 70 70 60 E0 C0
  ENDCHAR
ENDFONT

For Further Information

For further information, see the BDF specification on the CD that accompanies this book. You may also find information about the BDF format in the X11R5 distribution of the X Window System, available via FTP from ftp.x.org.


BRL-CAD

Name: BRL-CAD
Also Known As: Ballistic Research Laboratory CAD Package
Type: See article
Colors: NA
Compression: NA
Maximum Image Size: NA
Multiple Images Per File: NA
Numerical Format: See article
Originator: US Army Advanced Communication Systems
Platform: UNIX
Supporting Applications: BRL-CAD
Specification on CD: Yes (summary only; specification is too lengthy)
Code on CD: No
Images on CD: No
See Also: None
Usage: Solid modeling, network-distributed image processing.
Comments: A massive, polymorphic system consisting of several standards.

Overview

BRL-CAD (Ballistic Research Laboratory CAD) is a solid-modeling system that originated at the Advanced Computing Systems Group of the U.S. Army Ballistic Research Laboratory. It was originally designed to provide an interactive editor for use in conjunction with a vehicle-description database. In the U.S. Army, "vehicle" often means "tank," and the documentation contains many interesting and high-quality renderings of tank-like objects.

BRL-CAD is massive, consisting of more than 100 programs, and includes about 280,000 lines of C source code. It is extraordinarily well-documented and has been distributed to at least 800 sites worldwide.

Conceptually, BRL-CAD implements several subsystems:

• Solid geometry editor
• Ray tracer and ray tracing library
• Image-processing utilities
• General utilities

Data in the current release of BRL-CAD can be in several forms:

• BRL-specific CSG (Constructive Solid Geometry) database
• Uniform B-Spline and NURB surfaces
• Faceted data

• NMG (n-manifold geometry)

For Further Information

The BRL-CAD documentation is extensive and well-written, and we found it a pleasure to work with. Unfortunately, it is too extensive to be included on the CD, so we have elected to include only a summary description there. The full documentation is readily available in The Ballistic Research Laboratory CAD Package, Release 4.0, December 1991, albeit in paper form. Our copy came in a box weighing about ten pounds! It consists of five volumes:

Volume I    The BRL-CAD Philosophy
Volume II   The BRL-CAD User's Manual
Volume III  The BRL-CAD Applications Manual
Volume IV   The MGED User's Manual
Volume V    The BRL-CAD Analyst's Manual

This is an extraordinary document set, and not only in contrast to the rest of the documentation we've run across in the research for this book. It's just a great job. If the application is one-tenth as well-crafted as the documentation, it must be a marvel.

For general information about BRL-CAD, contact:

Attn: Mike Muuss, BRL-CAD Architect
U.S. Army Research Laboratory
Aberdeen Proving Ground, MD 21005-5068
mike@brl.mil

BRL-CAD is distributed in two forms:

1. Free distribution with no ongoing support. You must complete and return an agreement form; in return, you will be given instructions on how to obtain and decrypt the files via FTP. Files are archived at a number of sites worldwide. One copy of the printed documentation will be sent at no cost. For further information about this distribution, contact:

   BRL-CAD Distribution
   Attn: SCLBR-LV-V
   Aberdeen Proving Ground, MD 21005-5066
   FAX: 410-278-5058
   [email protected]

2. Full-service distribution with support. Items provided are similar to those mentioned in the free distribution described above, except that it costs U.S. $500 and may include the software on magnetic tape. For further information about this distribution, contact:

   BRL-CAD Distribution
   Attn: Mrs. Carla Moyer
   SURVIAC Aberdeen Satellite Office
   1003 Old Philadelphia Road, Suite 103
   Aberdeen, MD 21001
   Voice: 410-273-7794
   FAX: 410-272-6763
   cad_dist@brl.mil


BUFR

Overview

BUFR (Binary Universal Form for the Representation of Meteorological Data) was created by the World Meteorological Organization (WMO). Technically it is known as WMO Code Form FM 94-IX Ext. BUFR. It is the result of a committee, which produced the first BUFR documents in 1988. The current revision of the format, Version 2, dates from 1991. Work on the format is ongoing. It is a code in the sense that it defines a protocol for the transmission of quantitative data, one of a number of codes created by the WMO. BUFR was designed to convey generalized meteorological data, but due to its flexibility it can be used for almost anything.


BUFR files, in fact, were designed to be infinitely extensible, and to this end are written in a unique data description language. We've included BUFR in this book because it can and has been used for transmission and exchange of graphics data, although that is not its primary purpose. It also is associated with observational data obtained from weather satellites.

BUFR data streams and files adhere to the specification called WMO Standard Formats for Weather Data Exchange Among Automated Weather Information Systems.

File Organization

BUFR files are stream-based and consist of a number of consecutive records. The format documentation describes BUFR records as self-descriptive. Records, or messages, make up the BUFR data stream, and each always contains a table consisting of a complete description of the data contained in the record, including data type identification, units, scaling, compression, and bits per data item.

File Details

Detailing the data definition language implemented in BUFR is beyond the scope of this article. It is extremely complex and is, at this point, used in a narrow area of technology.

For Further Information

For detailed information about BUFR, see the summary description included on the CD that accompanies this book:

Thorpe, W., "Guide to the WMO Code Form FM 94-IX EXT. BUFR," Fleet Numerical Oceanography Center, Monterey, California.

Although there are a number of documents on BUFR available from meteorological sources, this article is the most useful that we have found. Additional information about WMO data specifications can be found in the following official specification:

Standard Formats for Weather Data Exchange Among Automated Weather Information Systems, Document Number FCM-S2-1990.


This document is available from:

U.S. Department of Commerce/National Oceanic and Atmospheric Administration
Attn: Ms. Lena Loman
Office of the Federal Coordinator for Meteorological Services and Supporting Research (OFCM)
6010 Executive Blvd, Suite 900
Rockville, MD 20852
Voice: 301-443-8704

For more information about the BUFR format, contact:

U.S. Department of Commerce/National Oceanic and Atmospheric Administration (NOAA)
Attn: Dr. John D. Stackpole
Chief, Production Management Branch, Automation Division
National Meteorological Center
W/NMC42, Room 307, WWB
5200 Auth Road
Camp Springs, MD 20746
Voice: 301-763-8115
FAX: 301-763-8381
jstack@sun1.wwb.noaa.gov


Name: CALS Raster
Also Known As: CALS, CAL, RAS
Type: Bitmap
Colors: Mono
Compression: CCITT Group 4, uncompressed
Maximum Image Size: Unlimited
Multiple Images Per File: Yes (Type II only)
Numerical Format: NA
Originator: U.S. Department of Defense
Platform: All
Supporting Applications: Too numerous to list
Specification on CD: No
Code on CD: No
Images on CD: Yes
See Also: None
Usage: Compound document exchange, DTP, CAD/CAM, image processing.
Comments: A well-documented, though cumbersome, format that attempts to do many things. If you are unfamiliar with U.S. government specification documents, you will probably find working with this format a complicated and challenging task. CALS raster is mandatory for use in most U.S. government document-handling applications. Because all data is byte-oriented, big-endian versus little-endian problems never arise.

Overview

The CALS raster format is a standard developed by the Computer Aided Acquisition and Logistics Support (CALS) office of the United States Department of Defense to standardize graphics data interchange for electronic publishing, especially in the areas of technical graphics, CAD/CAM, and image processing applications. CALS is also an Office Document Interchange Format (ODIF) used in the Office Document Architecture (ODA) system for the exchange of compound document data between multiple machine platforms and software applications.


CALS is an attempt to integrate text, graphics, and image data into a standard document architecture. Its ultimate goal is to improve and integrate the logistics functions of the military and its contractors. All technical publications for the federal government must conform to the CALS standard. Many other government organizations are also quickly adopting CALS. Commercial businesses, such as the medical, telecommunications, airline, and book publishing industries, have also standardized on CALS. CALS has also come into wide use in the commercial computer industry, such as in CAD/CAM applications, and in the aerospace industry, which owes a large part of its business to government and military contracts. CALS-compliant technical illustration systems also use the PostScript Page Description Language and Encapsulated PostScript files to exchange data between themselves and commercial systems.

File Organization

There are two types of CALS raster formats as defined by MIL-STD-28002A. They are specified as the Type I and Type II raster formats. Type I raster data files contain a single, monochrome image compressed using the CCITT Group 4 (T.6) encoding algorithm and appended to a CALS raster header record data block.

Type II image files contain one or more monochrome images that are also stored using the CCITT Group 4 encoding algorithm. In addition, the Type II format supports the encoding of image data as a collection of pel tiles. Each tile of image data is either separately encoded using CCITT Group 4 or is stored in its raw, unencoded format. The location of each tile within the image is stored in a tile offset index, for convenient retrieval of individual tiles. For further detail on the CALS Type II raster graphics format, refer to MIL-R-28002A.

The structures of the two CALS variants, Type I and Type II, are shown below. The Type I file format consists of the following:

Header
Image Data

The Type II file format looks like this:

Header
Document Profile
Presentation Styles
Document Layout
Root Layout
Layout Object Page 1
Tile Index
Image Data
Layout Object Page 2
Tile Index
Image Data
Layout Object Page N
Tile Index
Image Data

As you can see, the Type II format is considerably more complex than the Type I. Each Type II file may contain one or more pages of image data. There is also a considerable amount of page and document formatting data present in a Type II file. But by far the most common use of the Type II format is simply to store a collection of Type I CALS raster images in the same physical file. In such an arrangement, all the image pages are untiled, CCITT Group 4 encoded, and the profile, style, and layout information are omitted.

The raster data in a Type I file is always encoded using the CCITT Group 4 encoding method. CCITT Group 3 encoded and unencoded data is not supported. Type II files may contain tiles that are either CCITT Group 4 encoded or raw, unencoded data. Both raw and encoded tiles may occur in the same Type II CALS file and are always 512 x 512 pels in size. If the end of the image is reached before a tile is completely encoded, then this partial tile is completed by adding padding. Two other types of tiles found in Type II images are null foreground and null background tiles. Null tiles are entirely white or entirely black, depending upon the designated background and foreground colors. They are actually pseudo-tiles that are not present in the image data and have no tile offset value.

Tile data is stored in the image data along the pel path (rows) and down the line progression (columns). Storage of randomly distributed tiles is possible, but discouraged. Tiles are normally encoded, unless the image data is so complex that the time required to encode the image is too great or unless very little reduction in the size of the data would result if the tile were encoded. The inclusion of unencoded data in a T.6-encoded data stream is not supported by the CALS raster format.


File Details

This section contains detailed information about the components of a CALS raster file.

Header Record Data Block

The CALS raster header is different from most other graphics file format headers in that it is composed entirely of 7-bit ASCII characters in a human-readable format. When most graphics image files are displayed as a text file, seemingly random garbage is displayed on the screen. Listing a CALS raster file, however, will reveal ASCII information which is quite understandable to a human reader. The unintelligible garbage following the header is the compressed image data. The CALS raster data file header is 1408 bytes and is divided into eleven 128-byte records. Each record begins with a predefined 7-bit ASCII record identifier that is followed by a colon. The remaining part of the 128-byte record is the record information. If a record contains no information, the character string NONE is found in the record. The remainder of the bytes in each record contain space characters (ASCII 32) as filler. All data in the header block is byte-oriented, so no adjustments need to be made for byte order.

Following the last record in the header is 640 bytes of padding that rounds the header out to a full 2048 bytes in length. In fact, the raster image data always begins at offset 2048. Although this padding is not actually defined as part of the header, additional records added to future versions of the CALS header would be placed in this area. The byte value used for the padding is usually a space character (ASCII 20h), but any ASCII character can be used. The structure for the CALS raster header block is shown below.

typedef struct _CalsHeader
{
    CHAR SourceDocId[128];   /* Source Document Identifier */
    CHAR DestDocId[128];     /* Destination Document ID */
    CHAR TextFileId[128];    /* Text File Identifier */
    CHAR FigureId[128];      /* Table Identifier */
    CHAR SourceGraph[128];   /* Source System Filename */
    CHAR DocClass[128];      /* Data File Security Label */
    CHAR RasterType[128];    /* Raster Data Type */
    CHAR Orientation[128];   /* Raster Image Orientation */
    CHAR PelCount[128];      /* Raster Image Pel Count */
    CHAR Density[128];       /* Raster Image Density */
    CHAR Notes[128];         /* Notes */
    CHAR Padding[640];       /* Pad header out to 2048 bytes */
} CALSHEAD;
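Because each record is a fixed 128 bytes of space-padded ASCII, a header can be examined with nothing more than a block read and some string trimming. The following C sketch is our own illustration, not code from the CALS specification (the function and variable names are hypothetical); it prints the value portion of each of the eleven records:

#include <stdio.h>
#include <string.h>

/* Print the value portion of one 128-byte CALS header record.
 * Each record is "identifier: value" padded with spaces (ASCII 32).
 */
static void print_record(const char *record)
{
    const char *p = memchr(record, ':', 128);
    int len;

    if (p == NULL)                          /* malformed record */
        return;
    p++;                                    /* skip the colon */
    if (p < record + 128 && *p == ' ')
        p++;                                /* and the single space */
    len = (int)(record + 128 - p);
    while (len > 0 && p[len - 1] == ' ')
        len--;                              /* trim the space padding */
    printf("%.*s\n", len, p);
}

int main(int argc, char *argv[])
{
    char header[2048];
    FILE *fp;
    int i;

    if (argc < 2 || (fp = fopen(argv[1], "rb")) == NULL)
        return 1;
    if (fread(header, 1, sizeof(header), fp) == sizeof(header)) {
        /* Eleven 128-byte records; the last 640 bytes are padding. */
        for (i = 0; i < 11; i++)
            print_record(header + i * 128);
    }
    fclose(fp);
    return 0;
}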

Image record identifiers

Each record in a CALS raster file starts with a record identifier, which is a string of ASCII characters followed by a colon and a single space. Record data immediately follows the record identifier. If the record does not contain any relevant data, then the ASCII string NONE is written after the identifier.

SourceDocId:

SourceDocId starts with the source system document identifier (srcdocid). This record is used by the source system (the system on which the document was created) to identify the document to which the image is attached. This identifier can be a document title, publication number, or other similar information.

DestDocId:

DestDocId starts with the destination system document identifier (dstdocid). This record contains information used by the destination organization to identify the document to which the image is attached. This record may contain the document name, number or title, the drawing number, or other similar information.

TextFileId:

TextFileId starts with the text file identifier (txtfilid).

This record contains a string indicating the document page that this image page contains. A code is usually found in this record that identifies the section of the document to which the image page belongs. Such codes may include:

COV      Cover or title page
LEP      List of effective pages
WRN      Warning pages
PRM      Promulgation pages
CHR      Change record
FOR      Foreword or preface
TOC      Table of contents
LOI      Lists of illustrations and tables
SUM      Safety summary
PTn      Part number n
CHn      Chapter number n
SEn      Section number n
APP-n    Appendix n
GLS      Glossary
INX      Index
FOV      Foldout section

FigureId:

FigureId starts with the figure or table identifier (figid). This is the number by which the image page figure is referenced. A sheet number is preceded by the ASCII string -S and followed by the drawing number. A foldout figure is preceded by the ASCII string -F and followed by the number of pages in the foldout.

SourceGraph:

SourceGraph starts with the source system graphics filename (srcgph). This record contains the name of the image file.

DocClass:

DocClass starts with the data file security label (doccls). This record identifies the security level and restrictions that apply to this image page and/or associated document.

RasterType:

RasterType starts with the raster data type (rtype). This is the format of the raster image data that follows the header record data block in this file. This record contains the character 1 for Type I raster data and 2 for Type II raster data.

Orientation:

Orientation starts with the raster image orientation identifier (rorient). This record indicates the proper visual orientation of the displayed image. This data is represented by two strings of three numeric characters separated by a comma. The first three characters are the direction of the pel path of the image page. Legal values are 0, 90, 180, and 270, representing the number of degrees the image was rotated clockwise from the origin when scanned. A page scanned normally has a pel path of 0 degrees, while an image scanned upside-down has a pel path of 180 degrees.


The second three characters represent the direction of line progression of the document. Allowed values are 90 and 270, representing the number of degrees clockwise of the line progression from the pel path direction. A normal image has a line progression of 270, while a mirrored image has a line progression of 90.

PelCount:

PelCount starts with the raster image pel count identifier (rpelcnt). This record indicates the width of the image in pels and the length of the image in scan lines. This data is represented by two strings of six numeric characters separated by a comma. Typical values for this record are shown in Table CALS Raster-1.

Table CALS Raster-1: Typical CALS Raster Pel Count Values

Drawing Size    Pels Per Line, Number of Lines
A               001728,002200
B               002240,003400
C               003456,004400
D               004416,006800
E               006848,008800
F               005632,008000

Density:

Density starts with the raster image density identifier (rdensity). The density is a single four-character numeric string representing the numerical density value of the image. This record may contain the values 200, 240, 300, 400, 600, or 1200 pels per inch, with 300 pels per inch being the default.

Notes:

Notes starts with the notes identifier (notes). This is a record used to contain miscellaneous information that is not applicable to any of the other records in the CALS raster file header.

Example

This section contains an example of a CALS raster file header data block created by a facsimile (FAX) software application. This image file contains a single page of a facsimile document received via a computer facsimile card and stored to disk as a CALS raster image file.


The source of the document is identified as FAX machine number one. The destination is an identification number used to index the image in a database. The ID number is constructed from the date the FAX was received, the order in which it was received (e.g., it was the third FAX received that day), and the total number of pages. The text file identifier indicates that this file is page three of a seven-page facsimile document, for example. The figure record is not needed, so the ASCII string NONE appears in this field. The source graphics filename contains the MS-DOS filename of the CALS raster file in which the page is stored. The remaining records indicate that the FAX document is unclassified, contains Type I CALS raster image data, has a normal orientation, and that the size and density of the image correspond to that of a standard facsimile page. The Notes field contains a time stamp showing when the FAX was actually received.

Please note that this is only one possible way that data may appear in a CALS header block. Most government and military software applications create CALS header blocks that are far more cryptic and confusing than this example. On the other hand, several CAD packages create simpler CALS headers. Following is an example of a CALS header record data block:

srcdocid: Fax machine #1
dstdocid: 910814-003.007
txtfilid: 003,007
figid:    NONE
srcgph:   F0814003.007
doccls:   Unclass
rtype:    1
rorient:  000,270
rpelcnt:  001728,002200
rdensity: 0200
notes:    Fri Aug 14 12:21:43 1991 PDT

For Further Information

Information about the CALS raster format is found primarily in the following military standards documents:

Automated Interchange of Technical Information, MIL-STD-1840A. This document contains a description of the header (called a header record data block) in the CALS format.


Requirements for Raster Graphics Representation in Binary Format, MIL-R-28002A. This document contains a description of the image data in the CALS format.

The CALS raster file format is supported through the following office of the Department of Defense:

CALS Management Support Office (DCLSO)
Office of the Assistant Director for Telecommunications and Information Systems
Headquarters Defense Logistics Agency
Cameron Station
Alexandria, VA 22314

The documents MIL-STD-1840 and MIL-R-28002A may be obtained from agencies that distribute military specifications standards, including the following:

Standardization Documents Ordering Desk
Building 4D
700 Robbins Avenue
Philadelphia, PA 19111-5094

Global Engineering Documents
2805 McGaw Avenue
Irvine, CA 92714
Voice: 800-854-7179
Voice: 714-261-1455

Useful and readily available periodical articles on CALS and ODA include the following:

Dawson, R., and Nielsen, R., "ODA and Document Interchange," UNIX Review, Volume 8, Number 3, 1990, p. 50.

Hobgood, A., "CALS Implementation—Still a Few Questions," Advanced Imaging, April 1990, pp. 24-25.


Name: CGM
Also Known As: Computer Graphics Metafile
Type: Metafile
Colors: Unlimited
Compression: RLE, CCITT Group 3 and Group 4
Maximum Image Size: Unlimited
Multiple Images Per File: Yes
Numerical Format: NA
Originator: ANSI, ISO
Platform: All
Supporting Applications: Too many to list
Specification on CD: No
Code on CD: No
Images on CD: Yes
See Also: None
Usage: Standardized platform-independent format used for the interchange of bitmap and vector data.
Comments: CGM is a very feature-rich format which attempts to support the graphic needs of many general fields (graphic arts, technical illustration, cartography, visualization, electronic publishing, and so on). While the CGM format is rich in features (many graphical primitives and attributes), it is less complex than PostScript, produces much smaller (more compact) files, and allows the interchange of very sophisticated and artistic images. In fact, so many features are available to the software developer that a full implementation of CGM is considered by some to be quite difficult. Nevertheless, CGM use is spreading quickly.

Overview

CGM (Computer Graphics Metafile) was developed by experts working on committees under the auspices of the International Standards Organization (ISO) and the American National Standards Institute (ANSI).


It was specifically designed as a common format for the platform-independent interchange of bitmap and vector data, and for use in conjunction with a variety of input and output devices. Although CGM incorporates extensions designed to support bitmap (called raster in the CGM specification) data storage, files in CGM format are used primarily to store vector information. CGM files typically contain either bitmap or vector data, but rarely both.

The newest revision of CGM is the CGM:1992 standard, which defines three upwardly compatible levels of increasing capability and functionality. Version 1 is the original CGM:1987 standard, a collection of simple metafile primitives. Version 2 metafiles may contain Closed Figures (a filled primitive comprised of other primitives). Version 3 is for advanced applications, and its metafiles may contain Beziers, NURBS, parabolic and hyperbolic arcs, and the Tile Array compressed tiled raster primitive.

CGM uses three types of syntactical encoding formats. All CGM files contain data encoded using one of these three methods:

• Character-based, used to produce the smallest possible file size for ease of storage and speed of data transmission
• Binary encoded, which facilitates exchange and quick access by software applications
• Clear-text encoded, designed for human readability and ease of modification using an ASCII text editor

CGM is intended for the storage of graphics data only. It is sometimes (erroneously) thought to be a data transfer standard for CAD/CAM data, like IGES, or a three-dimensional graphic object model data storage standard. However, CGM is quite suited for the interchange of renderings from CAD/CAM systems, but not for the storage of the engineering model data itself. CGM supports and is used by the Graphical Kernel System (GKS) standard, but is something completely different. GKS, which is in fact an API graphics library specification, is often mistaken for a graphics file format.

CGM has found a role on most platforms as a method for the transfer of graphics data between applications. Programs that support CGM include most business graphics and visualization packages and many word processing and CAD applications. Vector primitives supported by Version 1 CGM metafiles include lines, polylines, arcs, circles, rectangles, ellipses, polygons, and text. Each primitive may have one or more attributes, including fill style (hatch pattern), line or edge color, size, type, and orientation. CGM supports bitmaps in the form of cell arrays and tile arrays. The logical raster primitives of CGM are device-independent.


A minor point, but one worth noting, is that the three flavors of encoding supported by CGM may not all be readable by all software applications that import CGM files. Despite the existence of a solid body of rules and encoding schemes, CGM files are not universally interchangeable. Many CGM file-writing applications support different subsets of standard features, often leaving out some features that may be required by other CGM readers. Also, because CGM allows vendor-specific extensions, many (such as custom fills) have been added, making full CGM support by an application difficult.

The CGM:1987 standard included a "Minimum Recommended Capabilities" list to aid developers in implementing a CGM application capable of reading and writing CGM metafiles correctly. Unfortunately, some of the big manufacturers chose to ignore even these modest requirements. Therefore, because it is impossible to police everyone who implements CGM in an application, many incompatibilities do exist. In an effort to improve compatibility, the CGM:1992 standard has removed the "Minimum Recommended Capabilities" list in anticipation of the publication of the pending CGM Amendment 1, which defines more stringent conformance requirements and a "Model Profile," which could be considered a minimal useful implementation level.

File Organization and Details

All CGM files start with the same identifier, the BEGIN METAFILE statement, but its actual appearance in the file depends on how the file is encoded. In clear-text encoding, the element is simply the ASCII string BEGMF. If the file is binary encoded, you must read in the first two bytes as a word; the most significant byte (MSB) is followed in the file by the least significant byte (LSB). Bits in this word provide the following information:

15-12:  Element class
11-05:  Element ID
04-00:  Parameter list length

BEGIN METAFILE is a "Delimiter Element," making it class 0. The element ID within that class is 1. The parameter list length is variable, so it must be ANDed out when comparing. The bit pattern is then:

00000000001XXXXX

To check it, simply AND the word with 0XFFE0 and compare it with 0X0020. In reading the standard, we get the impression that it is actually legal to add padding characters (nulls) to the beginning of the file. We rather doubt that anyone would actually do this, but it may be appropriate to read in words until a non-zero word is read and compare this word. You can read in full words because all elements are constrained to start on a word boundary.
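The following C fragment is our own sketch of the test just described (the function name is ours, not part of the CGM standard). It reads words MSB-first, skips any leading null padding words, and applies the mask-and-compare:

#include <stdio.h>

/* Test whether fp begins with a binary-encoded BEGIN METAFILE
 * element: class 0, element ID 1, any parameter list length.
 */
int begins_with_begin_metafile(FILE *fp)
{
    int msb, lsb;
    unsigned int word;

    do {                    /* skip any null padding words */
        if ((msb = getc(fp)) == EOF || (lsb = getc(fp)) == EOF)
            return 0;
        word = ((unsigned int)msb << 8) | (unsigned int)lsb;
    } while (word == 0);

    /* Mask off the 5-bit parameter list length, then compare. */
    return (word & 0xFFE0) == 0x0020;
}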

For Further Information

CGM is both an ANSI and an ISO standard and has been adopted by many countries, such as Australia, France, Germany, Japan, Korea, and the United Kingdom. The full ANSI designation of the current version of CGM is:

Information Processing Systems—Computer Graphics Metafile for the Storage and Transfer of Picture Description Information, ANSI/ISO 8632-1992 (commonly called CGM:1992).

Note that CGM:1992 is the current standard. Be careful not to obtain the earlier ANSI X3.122-1986 if you need the latest standard. This earlier document, CGM:1986, defining the Version 1 metafile, was superseded by ISO/IEC 8632:1992. ANSI adopted CGM:1992 without modification and replaced ANSI X3.122-1986 with it. The CGM standard is contained in four ISO standards documents:

ISO 8632-1 Part 1: Functional Specification
ISO 8632-2 Part 2: Character Encoding
ISO 8632-3 Part 3: Binary Encoding
ISO 8632-4 Part 4: Clear Text Encoding

These may be purchased from any of the following organizations:

International Standards Organization (ISO)
1 rue de Varembe
Case Postal 56
CH-1211 Geneva 20
Switzerland
Voice: +41 22 749 01 11
FAX: +41 22 733 34 30


American National Standards Institute (ANSI)
Sales Department
1430 Broadway
New York, NY 10018
Voice: 212-642-4900

Canadian Standards Association (CSA)
Sales Group
178 Rexdale Blvd.
Rexdale, Ontario, M9W 1R3
Voice: 416-747-4044

Other countries also make the CGM specification available through their standards organizations; these include DIN (Germany), BSI (United Kingdom), AFNOR (France), and JIS (Japan).

The National Institute of Standards and Technology (NIST) has set up a CGM Testing Service for testing CGM metafiles, generators, and interpreters. The Testing Service examines binary-encoded CGM files for conformance to Version 1 CGM, as defined in the application profiles of FIPS 128-1 and the DoD CALS CGM AP military specification MIL-D-28003A. You can purchase the testing tool used by NIST so you can do internal testing on various PC and UNIX systems. For more information about the CGM Testing Service, contact:

National Institute of Standards and Technology (NIST)
Computer Systems Laboratory
Information Systems Engineering Division
Gaithersburg, MD 20899
Voice: 301-975-3265

You can also obtain information about CGM from the following references:

Henderson, L.R. and Mumford, A.M., The CGM Handbook, Academic Press, San Diego, CA, 1993.

Arnold, D.B. and Bono, P.R., CGM and CGI: Metafile and Interface Standards for Computer Graphics, Springer-Verlag, New York, NY, 1988.

Arnold, D.B. and Bono, P.R., CGM et CGI: normes de metafichier et d'interfaces pour l'infographie, French translation and updating of the above reference, Masson, 1992.


Bono, P.R., Encarnacao, J.L., Encarnacao, L.M., and Herzner, W.R., PC Graphics With GKS, Prentice-Hall, Englewood Cliffs, NJ, 1990.


Name: CMU Formats
Also Known As: Andrew Formats, CMU Bitmap
Type: Multimedia
Colors: NA
Compression: Uncompressed
Maximum Image Size: NA
Multiple Images Per File: NA
Numerical Format: NA
Originator: Carnegie Mellon University
Platform: All
Supporting Applications: Andrew Toolkit
Specification on CD: Yes
Code on CD: No
Images on CD: No
See Also: None
Usage: Used primarily at Carnegie Mellon University in conjunction with the Andrew Toolkit.
Comments: Included mainly for its architectural uniqueness.

Overview

The Andrew Consortium at Carnegie Mellon University is the source of the Andrew Toolkit, which is associated with the Andrew User Interface System. The Toolkit API is the basis for applications in the Andrew User Interface System. Data objects manipulated by the Andrew Toolkit must adhere to conventions crystallized in the Andrew Data Stream specification, a draft of which is included on the CD accompanying this book. The system was designed to support multimedia data from a variety of programs and platforms.

We understand that there is a bitmap format which originated at Carnegie Mellon University, but we were unable to locate information prior to publication. The PBM utilities may include some support for converting and manipulating a CMU Bitmap, however.


File Organization

In the CMU formats, data is organized into streams and is written in 7-bit ASCII text. This is an interesting idea—nearly unique in the graphics file format world—which appears designed to enhance the portability of the format, at some cost in file size. Text may include tabs and newline characters and is limited to 80 characters of data per line. Note that Andrew Toolkit files assume access by the user to the Andrew Toolkit. In the words of the documentation authors:

As usual in ATK, the appropriate way to read or write the data stream is to call upon the corresponding Read or Write method from the AUIS distribution. Only in this way is your code likely to continue to work in the face of changes to the data stream definition. Moreover, there are a number of special features—mostly outdated data streams—that are implemented in the code, but not described here.

File Details

Data files used by the Andrew Toolkit consist of data objects, which are marked in the file by a begin/end marker pair. The initial marker associated with each data object must include information denoting the object type, as well as a unique identifier, which may be used as a reference ID by other objects. The following is an example from the documentation:

\begindata{text,1}
\begindata{picture,2}
\enddata{picture,2}
\view{pictureview,2}
\enddata{text,1}

Text Data Streams

Text data streams are similar to other data streams. Their structure is as follows:

\begindata line
\textdsversion line
\template line


definitions of additional styles
the text body itself
styled text
embedded objects in text body
\enddata line

Each of these elements is described below.

\begindata

This line has the form:

\begindata{text,99999}

where 99999 is a unique identifier.

\textdsversion

This line has the form:

\textdsversion{12}

There are apparently files written with data stream versions other than 12.

\template

A file may use a style template, in which case there will be a line of the form:

\template{default}

where default is the name of the template used and is the prefix of a filename. The system appends a suffix .tpl and looks for the template along file paths defined in the Andrew Toolkit installation. Please see the specification for further information.

Definitions of additional styles

Additional styles may be defined and used on the fly; each style consists of two or more lines:

\define{internalstylename
menuname
attribute
attribute}

internalstylename is always written in lowercase and may not contain spaces.


The menuname line is optional. If it is missing, there must be an empty line in its place. If present, it has the form:

menu: [Menu card name,Style name]

Attributes are also optional; if they are missing, the closing } appears at the end of the menuname line. Attribute lines are of the form:

attr:[attributename basis units value]

where value is a signed integer.

Text body

Text consists of any number of consecutive lines, each terminated by a newline character.

Styled text

Text in the body may be displayed in a style, in which case it is preceded by a previously defined name:

\internalstylename{

and is followed by the corresponding closing brace.

Embedded objects

Objects may be embedded in the text body. The documentation for the CMU formats describes the use of embedded objects as follows:

When an object is embedded in a text body, two items appear: the data stream for the object and a View line. The \begindata for the object is always at the beginning of a line. (The previous line is terminated with a backslash if there is to be no space before the object.) The \enddata line for the object always ends with a newline (which is not treated as a space).

The View line has the form:

\view{rasterview,8888,777,0,0}

\enddata

The \enddata line has the form:

\enddata{text,99999}


Bitmap Images

A bitmap image is a standard data stream beginning with a \begindata line and ending with an \enddata line. These generally surround a header and an image body. The first line of the header consists of the following:

2 0 65536 65536 0 0 484 603

The following describes the numbers in this header:

Raster version (2)
    Denotes the second version of this encoding.

Options (0)
    This field may specify changes to the image before displaying it:

    raster_INVERT (1<<0)    /* exchange black and white */
    raster_FLIP (1<<1)      /* exchange top and bottom */
    raster_FLOP (1<<2)      /* exchange left and right */
    raster_ROTATE (1<<3)    /* rotate 90 degrees clockwise */

xScale, yScale (65536 65536)
    Affects the size at which the image is printed. The value raster_UNITSCALE (136535) prints the image at approximately the size on the screen. The default scale of 65536 is approximately half the screen size.

x, y, width, height (0 0 484 603)
    It is possible for a raster object to display a portion of an image. These fields select this portion by specifying the index of the upper-left pixel and the width and height of the image in pixels. In all instances so far, x and y are both zero, and the width and height specify the entire image.
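Since the header line is plain ASCII, it can be pulled apart with a single sscanf call. The sketch below is our own illustration (the function name is hypothetical; the Andrew Toolkit itself expects you to use its Read methods instead):

#include <stdio.h>

/* Parse the eight numbers of the first raster header line, e.g.
 * "2 0 65536 65536 0 0 484 603".
 * v[0]=version, v[1]=options, v[2]=xScale, v[3]=yScale,
 * v[4]=x, v[5]=y, v[6]=width, v[7]=height.
 */
int parse_first_header_line(const char *line, long v[8])
{
    return sscanf(line, "%ld %ld %ld %ld %ld %ld %ld %ld",
                  &v[0], &v[1], &v[2], &v[3],
                  &v[4], &v[5], &v[6], &v[7]) == 8;
}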

The second header line has three possible variations. Currently, only the first is used.


Variation 1:

bits 10156544 484 603

RasterType: bits
RasterId: 10156544
Width, Height: 484 603

Width and Height describe the width of each row and the number of rows.

Variation 2:

refer 10135624

RasterType: refer
RasterId: 10135624

The current data object refers to the bits stored in another data object that appears earlier in the same data stream.

Variation 3:

file 10235498 filename path

RasterType: file
RasterId: 10235498

The bit data is found in the file named by the filename path.

Please check the specification document on the CD-ROM for subtleties and further details of the format.

For Further Information

For further information about the CMU formats and the Andrew Toolkit, contact:

Andrew Consortium
Attn: Andrew Plotkin, Fred Hansen
Carnegie Mellon University
Pittsburgh, PA 15213-3890
info-andrew+@andrew.cmu.edu


DKB

Overview

The DKB ray trace application was created by David K. Buck, for whom it is named. DKB has enjoyed wide distribution, particularly in the PC/MS-DOS BBS world.

Mr. Buck declined our request for information on the format and permission to reprint the relevant documentation. He feels that there is no interest in the program, and that it has been superseded by the POV ray tracer, which is based in part on DKB. We think otherwise, of course. Once a format is out there—especially if it is distributed as freeware or shareware—it is out there, and there is no practical way to stop people from using it. Our survey of some of the large BBSs in the U.S. showed that DKB, along with other, older ray trace programs, is being downloaded with some regularity, despite the


availability of POV. Unfortunately, we are not able to describe the DKB format in any detail here.

For Further Information

The DKB ray trace package is available for download from Internet archive sites and BBS systems running on a number of platforms.


Name: Dore Raster File Format
Also Known As: Dore, RFF
Type: Bitmap
Colors: Unlimited
Compression: None
Maximum Image Size: Unlimited
Multiple Images Per File: No
Numerical Format: Any
Originator: Kubota Pacific Computers
Platform: UNIX
Supporting Applications: AVS visualization package, many others
Specification on CD: Yes
Code on CD: No
Images on CD: No
See Also: None
Usage: Storage and interchange of 2D and 3D image data.
Comments: The Dore Raster File Format is one of the rare formats that combines both ASCII and binary information within the same image file. Dore lacks a native method of data compression, but does support the storage of voxel data.

Overview

The Dore Raster File Format (Dore RFF) is used by the Dore graphics library for storing two- and three-dimensional image data. Once stored, the data may be retrieved, or the RFF files may be used to interchange the image information with other software packages. Dore itself is an object-oriented 3D graphics library used for rendering 2D and 3D near-photographic quality images and scenes. Dore supports features such as texture and environment mapping, 2D and 3D filtering, and storage of 3D raster data.

Most raster (bitmap) file formats are only capable of storing two-dimensional pixels (picture elements).


Dore RFF is capable of storing three-dimensional pixels called voxels (volume elements). A voxel contains standard pixel information, such as color and alpha channel values, plus Z-depth information, which describes the relative distance of the voxel from a point of reference. The Dore RFF format defines Z-depth data as a 32-bit integer value in the range of 00h to FFFFFFFFh (2^32 - 1). A value of 00h places a voxel at the farthest possible point from the point of reference, and a value of FFFFFFFFh places a voxel at the closest possible point. The Dore Raster File Format is the underlying API for the AVS visualization package.

File Organization

Dore RFF is a rather simple, straightforward format that contains primarily byte-oriented data. All Dore RFF files contain an ASCII header, followed by binary image data. Following is an example of a Dore RFF file. The position of the binary image data is indicated by the label in brackets. The <FF> symbols stand for formfeed (ASCII 0Ch) characters:

#
# Dore Raster File Example
#
rastertype = image
width = 1280
height = 1024
depth = 2
pixel = r8g8b8a8z32
wordbyteorder = big-endian
<FF><FF>
[Binary Image Data]

The header may contain up to six fields. Each field has the format keyword = value and is composed entirely of ASCII characters. There is typically one field per line, although multiple fields may appear on a single line if they are separated by one or more white-space characters. Comments begin with the # character and continue to the end of the line.


The header fields are summarized below:

rastertype
    Type of raster image contained in the file. The only supported value for this keyword is image. This field must appear first in all RFF headers and has no default value.

width
    2-byte WORD value that indicates the width of the image in pixels or voxels. The width field must appear in all RFF headers and has no default value.

height
    2-byte WORD value that indicates the height of the image in pixels or voxels. The height field must appear in all RFF headers and has no default value.

depth
    2-byte WORD value that indicates the depth of the raster. 2D raster images always contain pixels, and therefore the depth value is 1. 3D raster images always contain voxels, and therefore the depth value is always greater than 1. The depth field is optional and has a default value of 1.

pixel
    String indicating the data format of each pixel or voxel. The pixel field has no default value and must appear in all RFF headers. The possible values for this field are as follows:

    r8g8b8
        Pixels are three bytes in size and stored as three separate 8-bit RGB values.

    r8g8b8a8
        Pixels are four bytes in size and stored as three separate 8-bit RGB values and a single 8-bit alpha channel value in RGBA order.

    a8b8g8r8
        Pixels are four bytes in size and stored as three separate 8-bit RGB values and a single 8-bit alpha channel value in ABGR order.

    r8g8b8a8z32
        Voxels are eight bytes in size and stored as three separate 8-bit RGB values, a single 8-bit alpha channel value, and a single 32-bit DWORD containing the Z-depth value in RGBAZ order.

    a8
        Pixels or voxels are a single byte in size and only contain a single 8-bit alpha channel value.

    z32
        Voxels are four bytes in size and only contain a single 32-bit Z-depth value.

wordbyteorder
    Indicates the byte-order of the Z-depth values in the binary image data. The value of this field may be either the string big-endian or little-endian. The appearance of the wordbyteorder field in the header is optional, and the value defaults to big-endian.

The order of the fields within the header is not significant, except for the rastertype field; rastertype must always appear in every RFF header and must always be the first field. The width, height, and pixel fields must appear in every header as well. All other fields are optional; if not present, their default values are assumed.

The End Of Header marker is two formfeed characters. If it is necessary to pad out the header to end on a particular byte boundary, you can place any number of ASCII characters, except formfeeds, between these two formfeed characters. See the example below:

#
# Example of using the End Of Header marker to pad out the header
# to a byte boundary
#
rastertype = image
width = 64
height = 64
depth = 1
pixel = r8g8b8
wordbyteorder = little-endian
<FF>
[ ASCII data used to extend the length of the header ]
<FF>
[Binary Image Data]
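Reading the header therefore amounts to scanning for the second formfeed; everything after it is binary image data. The following C sketch is our own illustration, not code from the Dore distribution (the function name is hypothetical):

#include <stdio.h>

/* Read a Dore RFF header into buf, stopping after the second
 * formfeed (the End Of Header marker, possibly with padding
 * between the two formfeeds). Returns the header length in
 * bytes, or -1 if the marker is not found within max bytes.
 */
long read_rff_header(FILE *fp, char *buf, long max)
{
    long n = 0;
    int c, formfeeds = 0;

    while (n < max && (c = getc(fp)) != EOF) {
        buf[n++] = (char)c;
        if (c == '\f' && ++formfeeds == 2)
            return n;    /* binary image data begins right after */
    }
    return -1;
}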

File Details

The image data immediately follows the End Of Header marker and is always stored in an uncompressed binary form. The format of the image data depends upon the format of the pixel (or voxel) data indicated by the value of the pixel field in the header.

The image data is always stored as contiguous pixels or voxels organized within scan lines. All RGB and alpha channel values are stored as bytes.


Z-depth values are stored as 32-bit DWORDs, with the byte-order being indicated by the value of the wordbyteorder field in the header. Any pixel containing a Z-depth value is regarded as a voxel. The format of each possible type of pixel or voxel is illustrated using C language structures as follows:

/* Pixel - Simple RGB triple */
typedef struct _r8g8b8
{
    BYTE red;
    BYTE green;
    BYTE blue;
} R8G8B8;

/* Pixel - RGB triple with alpha channel value */
typedef struct _r8g8b8a8
{
    BYTE red;
    BYTE green;
    BYTE blue;
    BYTE alpha;
} R8G8B8A8;

/* Pixel - RGB triple with alpha in reverse order */
typedef struct _a8b8g8r8
{
    BYTE alpha;
    BYTE blue;
    BYTE green;
    BYTE red;
} A8B8G8R8;

/* Voxel - RGB triple, alpha, and Z-depth */
typedef struct _r8g8b8a8z32
{
    BYTE red;
    BYTE green;
    BYTE blue;
    BYTE alpha;
    DWORD zdepth;
} R8G8B8A8Z32;

/* Pixel or voxel mask - Only an alpha channel value */
typedef struct _a8
{
    BYTE alpha;
} A8;

/* Voxel mask - Only a Z-depth value */
typedef struct _z32
{
    DWORD zdepth;
} Z32;
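Note that only the 32-bit Z-depth values are sensitive to the wordbyteorder field; the RGB and alpha bytes are read as-is. The following C sketch, our own illustration with hypothetical names, reads one Z-depth value either way:

#include <stdio.h>

typedef unsigned char BYTE;
typedef unsigned long DWORD;

/* Read one 32-bit Z-depth value from the binary image data,
 * honoring the header's wordbyteorder field (nonzero big_endian
 * means big-endian order). Returns 1 on success, 0 on EOF.
 */
int read_zdepth(FILE *fp, int big_endian, DWORD *z)
{
    int b[4], i;

    for (i = 0; i < 4; i++)
        if ((b[i] = getc(fp)) == EOF)
            return 0;

    if (big_endian)
        *z = ((DWORD)b[0] << 24) | ((DWORD)b[1] << 16) |
             ((DWORD)b[2] << 8)  |  (DWORD)b[3];
    else
        *z = ((DWORD)b[3] << 24) | ((DWORD)b[2] << 16) |
             ((DWORD)b[1] << 8)  |  (DWORD)b[0];
    return 1;
}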

For Further Information

For further information about the Dore Raster File Format, see the specification included on the CD-ROM that accompanies this book. You can also contact:

Kubota Pacific Computer, Inc.
Attn: Steve Hollasch
2630 Walsh Avenue
Santa Clara, CA 95051
Voice: 408-727-8100
hollasch@kpc.com

You can also contact:

Lori Whippier
[email protected]


Name: Dr. Halo
Also Known As: CUT, PAL
Type: Bitmap
Colors: 8-bit maximum
Compression: RLE, uncompressed
Maximum Image Size: 64K x 64K pixels
Multiple Images Per File: No
Numerical Format: Little-endian
Originator: Media Cybernetics
Platform: MS-DOS
Supporting Applications: Dr. Halo
Specification on CD: Yes
Code on CD: No
Images on CD: Yes
See Also: None
Usage: Used in device-independent file interchange.
Comments: A well-defined, well-documented format in wide use, which is quick and easy to read and decompress. It lacks, however, a superior compression scheme, making it unsuited for the storage of deep-pixel images.

Overview

The Dr. Halo file format is a device-independent interchange format used for transporting image data from one hardware environment or operating system to another. This format is associated with the HALO Image File Format Library, the Dr. Halo III paint program, and other software applications written and marketed by Media Cybernetics. Dr. Halo images may contain up to 256 colors, selectable from an 8-bit palette. Only one image may be stored per file. The Dr. Halo format is unusual in that it is divided into two separate files. The first file has the extension .CUT and contains the image data; the second has the extension .PAL and contains the color palette information for the image.


File Organization

The Dr. Halo header is shown below:

typedef struct _HaloHeader
{
    WORD Width;       /* 00h  Image Width in Pixels */
    WORD Height;      /* 02h  Image Height in Scan Lines */
    WORD Reserved;    /* 04h  Reserved Field (set to 0) */
} HALOHEAD;

Width and Height represent the size of the image data. Reserved is set to zero to allow for possible future expansion of the header. Following the header is the image data. Each scan line is always encoded using a simple byte-wise run-length encoding (RLE) scheme.

File Details

The .CUT file contains image data in the form of a series of scan lines. The first two bytes of each encoded scan line form a Run Count value, indicating the number of bytes in the encoded line. Each encoded run begins with a one-byte Run Count value. The number of pixels in the run is the seven least significant bits of the Run Count byte and ranges in value from 1 to 127. If the most significant bit of the Run Count is 1, then the next byte is the Run Value and should be repeated Run Count times. If the most significant bit is zero, then the next Run Count bytes are read as a literal run. The end of every scan line is marked by a Run Count byte, which may be 00h or 80h. The following pseudocode illustrates the decoding process:

ReadScanLine:
    Read a WORD value of the number of encoded bytes in this scan line
    ReadRunCount:
        Read a BYTE value as the Run Count
        If the value of the seven Least Significant Bits (LSB) of the Run Count is not 0
            If the Most Significant Bit (MSB) of the Run Count is 1
                Read the next byte as the Run Value and repeat it Run Count times
            else If the MSB of the Run Count is 0
                Read the next Run Count bytes as a literal run
            Goto ReadRunCount:


        else If the value of the seven LSB of the Run Count is 0
            The end of the scan line has been reached
            Goto ReadScanLine:
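Translated into C, the decoder might look like the following sketch. This is our own illustration, not code from the HALO library; the function name and error handling are hypothetical:

#include <stdio.h>

/* Decode one RLE-encoded scan line from a .CUT file into dest,
 * which must hold at least width bytes. Returns the number of
 * pixels written, or -1 on a premature end of file.
 */
int cut_decode_line(FILE *fp, unsigned char *dest, int width)
{
    int count, value, i, n = 0;

    /* The leading WORD gives the encoded length of the line
     * (little-endian); this decoder does not need it.
     */
    if (getc(fp) == EOF || getc(fp) == EOF)
        return -1;

    for (;;) {
        if ((count = getc(fp)) == EOF)
            return -1;
        if ((count & 0x7F) == 0)        /* 00h or 80h ends the line */
            return n;
        if (count & 0x80) {             /* MSB set: repeated run    */
            count &= 0x7F;
            if ((value = getc(fp)) == EOF)
                return -1;
            for (i = 0; i < count; i++)
                if (n < width)
                    dest[n++] = (unsigned char)value;
        } else {                        /* MSB clear: literal run   */
            for (i = 0; i < count; i++) {
                if ((value = getc(fp)) == EOF)
                    return -1;
                if (n < width)
                    dest[n++] = (unsigned char)value;
            }
        }
    }
}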

The second Dr. Halo image file usually has the extension .PAL and contains the color palette information for the image. Having a separate color palette file offers the advantage of being able to change the stored colors of an image without re-encoding the image data. The PAL file header is 40 bytes in length and has the following format:

typedef struct _HaloPalette
{
    BYTE FileId[2];       /* 00h  File Identifier - always "AH" */
    WORD Version;         /* 02h  File Version */
    WORD Size;            /* 04h  File Size in Bytes minus header */
    CHAR FileType;        /* 06h  Palette File Identifier */
    CHAR SubType;         /* 07h  Palette File Subtype */
    WORD BoardId;         /* 08h  Board ID Code */
    WORD GraphicsMode;    /* 0Ah  Graphics Mode of Stored Image */
    WORD MaxIndex;        /* 0Ch  Maximum Color Palette Index */
    WORD MaxRed;          /* 0Eh  Maximum Red Palette Value */
    WORD MaxGreen;        /* 10h  Maximum Green Palette Value */
    WORD MaxBlue;         /* 12h  Maximum Blue Color Value */
    CHAR PaletteId[20];   /* 14h  Identifier String "Dr. Halo" */
} HALOPAL;

There are actually two types of .PAL files: generic and video hardware-specific. The header shown above is for the generic type. A hardware-specific palette file may contain additional information in the header.

FileId always contains the byte values 41h and 48h. Version indicates the version of the HALO format to which the palette file conforms.

Size is the total size of the file minus the header. This gives the total size of the palette data in bytes. FileType, the palette file identifier, is always set to 0Ah. SubType, the palette file subtype, is set to 00h for a generic palette file and to 01h for hardware-specific.


BoardId and GraphicsMode indicate the type of hardware and the mode that created and displayed the palette data. MaxIndex, MaxRed, MaxGreen, and MaxBlue describe the palette data. PaletteId contains up to a 20-byte string with an ASCII identifier. Unused string elements are set to 00h.

Palette data is written as a sequence of three-byte triplets of red, green, and blue values in 512-byte blocks. If a triplet does not fit at the end of a block, the block is padded and the triplet used to start the next block. All RGB values are in the range of 0 to 255.
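The 512-byte blocking is the only wrinkle in reading the palette data: 170 triplets (510 bytes) fit in a block, and the remaining 2 bytes are padding. The following C sketch is our own illustration (the function name is hypothetical, and it assumes the blocks are counted from the start of the palette data area, where fp is positioned):

#include <stdio.h>

/* Read count RGB triplets from the palette data area of a .PAL
 * file, skipping the pad bytes at the end of each 512-byte block.
 * Returns the number of triplets successfully read.
 */
int read_pal_triplets(FILE *fp, unsigned char (*rgb)[3], int count)
{
    int i, used = 0;

    for (i = 0; i < count; i++) {
        if (used + 3 > 512) {            /* skip the block padding */
            if (fseek(fp, 512 - used, SEEK_CUR) != 0)
                return i;
            used = 0;
        }
        if (fread(rgb[i], 1, 3, fp) != 3)
            return i;
        used += 3;
    }
    return count;
}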

For Further Information

For further information about the Dr. Halo format, see the specification included on the CD-ROM that accompanies this book. For additional information, contact:

Media Cybernetics
Attn: Bill Shotts, Technical Support Manager
8484 Georgia Avenue
Silver Spring, MD 20910
Voice: 301-495-3305, extension 235
FAX: 301-495-5964


Name: Encapsulated PostScript
Also Known As: EPS, EPSF, EPSI
Type: Page Description Language (PDL); used as metafile (see article)
Colors: Mono
Compression: See article
Maximum Image Size: NA
Multiple Images Per File: No
Numerical Format: NA
Originator: Adobe
Platform: Macintosh, MS-DOS, MS Windows, UNIX, others
Supporting Applications: Too numerous to list
Specification on CD: Yes
Code on CD: No
Images on CD: Yes
See Also: None
Usage: Illustration and DTP applications, bitmap and vector data interchange.
Comments: A file format with wide support associated with the PostScript PDL. Although complex, internal language features are well-documented in Adobe's excellent PostScript publications and elsewhere. Many applications, however, write but do not read EPSF-format files, preferring to avoid supporting PostScript rendering to the screen.

Overview

Data in an Encapsulated PostScript (EPSF) file is encoded in a subset of the PostScript Page Description Language (PDL) and then "encapsulated" in the EPS standard format for portable interchange between applications and platforms. An EPSF file is also a special PostScript file that may be included in a larger PostScript language document. EPSF files are commonly used to contain the graphics and image portions of a document. The main body of the document is defined in one or more PostScript files, but each piece of line art or photographic illustration embedded in the document is stored in a separate EPSF file.


This scheme offers several advantages, including the ability to alter illustrations in a document without having to edit the document file itself. EPSF also provides the ability to store image data in a 7-bit ASCII form, which is occasionally more convenient than the 8-bit binary format used by most bitmap formats.

PostScript files are written in 7-bit ASCII and can be created using a conventional text editor. Although PostScript can be written by hand, the bulk of the PostScript code produced today comes from applications, which include illustration packages, word processors, and desktop publishing programs. Files produced in this manner are often quite large; it is not unusual to see files in the range of several megabytes. The PostScript specification is constantly evolving, and two fairly recent developments include Display PostScript and PostScript Level 2. Display PostScript Encapsulated PostScript


Display PostScript is used for on-screen imaging and is binary-encoded. It is used to drive window-oriented PostScript interpreters for the display of text, graphics, and images. PostScript Level 2 adds additional features to the PostScript Level 1 language, including data compression (including JPEG), device-independent color, improved halftoning and color separation, facsimile compatibility, forms and patterns caching, improved printer support, automatic stroke adjustment, step and repeat capability, and higher operational speeds. PostScript Level 2 code is completely compatible with PostScript Level 1 devices.

File Organization

As we mentioned at the beginning of this section, the EPS format is designed to encapsulate PostScript language code in a portable manner. To accomplish this, an EPS file normally contains nothing but 7-bit ASCII characters, except for the Display EPS format discussed later. An example of a small EPS file is shown in the section called "EPS Files."

File Details

This section contains information about, and examples of, EPS, EPSI, and EPSF files.

EPS Files

An EPS file contains a PostScript header, which is a series of program comments, and may appear as:

%!PS-Adobe-3.0 EPSF-3.0
%%Title: Figure 1-1, Page 34
%%Creator: The Image Encapsulator
%%CreationDate: 12/03/91 13:48:04
%%BoundingBox:126 259 486 534
%%EndComments

Any line beginning with a percent sign (%) in a PostScript file is a comment. Normally, comments are ignored by PostScript interpreters, but comments in the header have special meanings. Encapsulated PostScript files contain two comments that identify the EPS format. The EPS identification comment appears as the first line of the file:


%!PS-Adobe-3.0 EPSF-3.0

The version number following the "PS-Adobe-" string indicates the level of conformance to the PostScript Document Structuring Conventions. This number will typically be either 2.0 or 3.0. The version number following "EPSF-" indicates the level of conformance to the Encapsulated PostScript Files Specification and is typically 1.2, 2.0, or 3.0. The next EPS-specific comment line is:

%%BoundingBox:

followed by four numeric values, which describe the size and the position of the image on the printed page. The origin point is at the lower-left corner of the page, so a 640x480 image starting at coordinates 0,0 would contain the following comment:

%%BoundingBox: 0 0 640 480

Both the %!PS-Adobe- and the %%BoundingBox: lines must appear in every EPS file. Ordinary PostScript files may formally be changed into EPS files by adding these two lines to the PostScript header. This, however, is a kludge and does not always work, especially if certain operators that are not part of the EPSF subset (such as initgraphics, initmatrix, initclip, setpageparams, framedevice, copypage, erasepage, and so forth) are present in the PostScript code.
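To make the requirement concrete, here is a minimal sketch in C of a routine that emits these mandatory header lines; the function name and parameters are ours, not part of any EPS specification:

#include <stdio.h>

/* A hypothetical helper: write the mandatory EPS identification and
   bounding box comments, terminated by %%EndComments. The bounding
   box is given in default PostScript units (points), measured from
   the lower-left corner of the page. */
void WriteEpsHeader(FILE *out, const char *title,
                    int llx, int lly, int urx, int ury)
{
    fprintf(out, "%%!PS-Adobe-3.0 EPSF-3.0\n");   /* identification line */
    fprintf(out, "%%%%Title: %s\n", title);
    fprintf(out, "%%%%BoundingBox: %d %d %d %d\n", llx, lly, urx, ury);
    fprintf(out, "%%%%EndComments\n");
}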

The other comment lines in the header, %%Title:, %%Creator:, and %%CreationDate:, identify the name, creator, and creation date of the EPS file. The header is always terminated by the %%EndComments comment. Other comments may also appear in the header, such as %%IncludeFile, %%IncludeFont, and %%Page.

Encapsulated PostScript (EPS) Example

The following shows an example of an EPS file:

%!PS-Adobe-2.0 EPSF-1.2
%%Creator: O'Reilly 2.1
%%CreationDate: 12/12/91 14:12:40
%%BoundingBox: 126 142 486 651
%%EndComments
/ld {load def} bind def
/s /stroke ld /f /fill ld /m /moveto ld /l /lineto ld /c /curveto ld
/rgb {255 div 3 1 roll 255 div 3 1 roll 255 div 3 1 roll setrgbcolor} def
126 142 translate


360.0000 508.8000 scale
/picstr 19 string def
152 212 1 [152 0 0 -212 0 212]
{currentfile picstr readhexstring pop} image

FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

FFFF00000000000000000000000000FFFFFFFF FFFF000000000000000000000000000000FFFF FFFF000000000000000000000000000800FFFF

FFFF000000000000000000000000000001FFFF

%% Thousands of lines deleted to save space in this book.

FFFF000000000000000000000000000000FFFF

FFFF000000000000000000000000000000FFFF FFFF80000000000000000000000000837DFFFF FFFF8000000067DFFCF0000000001FFFFFFFFF FFFF0000000000000000000000000FFFFFFFFF FFFF000000000000000000000000000001FFFF

showpage

Looking at the EPS example, we can see the comments header at the start of the file. Following the header there is a short block of PostScript code, which does the actual drawing, scaling, cropping, rotating, and so on of the image. This block is sometimes all that needs to be changed in order to alter the appearance of the image.


Following the PostScript code block is bitmap data, which in an EPS file is called a graphics screen representation. This consists of hexadecimal digits. Each byte of image data contains eight pixels of information and is represented by two bytes of hexadecimal data in the bitmap. Image line widths are always a multiple of eight bits and are padded when necessary. Only black-and-white, 1-bit-per-pixel images are currently supported by EPS.

At the end of the EPS file is the showpage operator, which normally instructs a PostScript output device to print or display the completed image or page. In EPS files embedded in other documents, however, showpage is not really needed. Sometimes an EPS file fails to display or import properly because the PostScript interpreter reading the file does not expect to encounter a showpage and becomes confused. You can solve the problem either by disabling the showpage operator in the interpreter or by removing the showpage operator from the EPS files.

If you look at a PostScript file in an editor, you may find that the very last character in the file is a CTRL-D (ASCII 04h) character. This control code has a special meaning to a PostScript device; it is an end-of-job marker and signals that the PostScript code stream has ended. When a PostScript device reads this character, it may perform an end-of-file terminate-and-reset operation in preparation for the next PostScript data stream. The presence of this character can sometimes be a source of problems for applications that are not expecting, or not equipped to handle, a non-printable ASCII character in the data stream. On the other hand, problems can occur in PostScript output devices if a file does not have this character at the end of the code stream. And if spurious CTRL-D characters appear in the data stream, not much will come out of your printer at all.

EPS files (as opposed to normal PostScript) are generally created exclusively by code generators and not by hand. Each line is a maximum of 255 characters wide and is terminated with a carriage return (ASCII 0Dh) on the Macintosh, a linefeed (ASCII 0Ah) under UNIX, and a carriage return/linefeed pair (ASCII 0Dh/0Ah) under MS-DOS. A PostScript interpreter should be able to recognize files using any of these line-termination conventions.

EPS files may also be in preview format. In this format, an actual image file is appended to the end of the file. This provides a quick method of viewing the image contents of the EPS file without having to actually interpret the PostScript code. This is handy for applications that cannot handle PostScript interpretation, but that can display bitmap graphics and wish to import EPS files. Previews are typically scaled-down (but not cropped), lower-resolution versions of the image. EPS previews are similar to the postage stamp images found in the Truevision TGA and Lumena bitmap formats.

Four file formats may be used as EPS preview images: TIFF, Microsoft Windows Metafile (WMF), Macintosh PICT, and EPSI (Encapsulated PostScript Interchange format). In the Macintosh environment, an EPS file is stored only in the file's data fork. A PICT preview is stored in the resource fork of the EPS file and will have a resource number of 256. PostScript Level 2 supports JPEG-compressed images as well. TIFF and WMF files are appended to EPS files in their entirety. Because MS-DOS files do not have a resource fork or similar mechanism, a binary header containing information about the appended image file must be prepended to the EPS file.

EPS Preview Header

The EPS preview header is 32 bytes in length and has the following format:

typedef struct _EpsHeader
{
    BYTE  Id[4];             /* Magic number (always C5D0D3C6h) */
    DWORD PostScriptOffset;  /* Offset of PostScript code */
    DWORD PostScriptLength;  /* Size of PostScript code */
    DWORD WMFOffset;         /* Offset of Windows Metafile */
    DWORD WMFSize;           /* Size of Windows Metafile */
    DWORD TIFOffset;         /* Offset of TIFF file */
    DWORD TIFSize;           /* Size of TIFF file */
    DWORD Checksum;          /* Checksum of previous header fields */
} EPSHEADER;

Id, in the first four bytes of the header, contains an identification value. To detect whether an MS-DOS EPS file has a preview section, read the first four bytes of the file. If they are the values C5h D0h D3h C6h, the EPS file has a preview section. Otherwise, the first two bytes of an EPS file will always be 25h 21h (%!).

PostScriptOffset and PostScriptLength point to the start of the PostScript language section of the EPS file. The PostScript code begins immediately after the header, so its offset is always 32. WMFOffset and WMFSize point to the location of the WMF data if a WMF file is appended for preview; otherwise, these fields are zero. The same is true for TIFOffset and TIFSize. Because either a TIFF or a WMF file (but not both) can be appended, at least one, and possibly both, sets of these fields will always be zero. If the Checksum field is set to zero, ignore it. Offsets are always measured from the beginning of the file.
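The following is a minimal sketch in C of the detection logic just described; the function name is ours:

#include <stdio.h>

/* Returns 1 if the file begins with the preview-header magic bytes
   C5h D0h D3h C6h, 0 if it begins with "%!" (plain ASCII EPS), and
   -1 otherwise. */
int HasPreviewHeader(FILE *fp)
{
    unsigned char id[4];

    if (fread(id, 1, 4, fp) != 4)
        return -1;
    if (id[0] == 0xC5 && id[1] == 0xD0 && id[2] == 0xD3 && id[3] == 0xC6)
        return 1;                       /* binary preview header present */
    if (id[0] == 0x25 && id[1] == 0x21)
        return 0;                       /* "%!" -- plain 7-bit EPS */
    return -1;                          /* not an EPS file */
}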

The three preview formats detailed above are only somewhat portable and are fairly device-dependent; not all environments can make use of the TIFF and WMF image file formats, and fewer still of PICT. For one thing, the addition of 8-bit binary data to the EPS file prevents the file from being transmitted via a 7-bit data path without special encoding. The EPSI format, described in the next section, is designed as a completely device-independent method of image data interchange.

EPSI Files

EPSI bitmap data is the same for all systems. Its device independence makes it desirable in situations where it is inconvenient to store preview data in 8-bit TIFF, WMF, or PICT format. EPSI image data is encoded entirely in printable 7-bit ASCII characters and requires no decompression or decoding. EPSI is similar to EPS bitmap data except that each line of the image begins with a percent sign (%) comment token. In fact, nearly every line in the EPSI preview is a comment; this keeps a PostScript interpreter from reading the EPSI data as if it were part of the EPS data stored in the file. Typically, an application will support one or more of the device-dependent preview formats. An application should also support the EPSI format for the export of EPS data to environments that cannot use one of the other preview formats.

The following shows an example of an EPSI file:

%%Title: EPSI Sample Image
%%Creator: James D. Murray
%%CreationDate: 12/03/91 13:56:24
%%Pages: 0
%%BoundingBox: 0 0 80 24
%%EndComments
%%BeginPreview: 80 24 1 24
% C0FFFFFFFFFFFFFFFFFF
% 0007F1E18F80FFFFFFFF
% 0007F1C000000001FFFF
% 001FFFE78C01FFFFFFFF
% E7FFFFFFFFFFFFFFFFFF
% 000FFFFF81FFFFFFFFFF
% 0007F38000000001FFFF

% 000FFFCE00000001FFFF
% E1FFFFFFFFC7FFFFFFFF
% 00FFFFFFFFEFFFFFFFFF
% 0003FFCF038008EFFFFF
% 0003FFCF00000000FFFF
% 007FFFFFFFCFFFFFFFFF
% 40FFFFFFFFFFFFFFFFFF
% 0003FFFFFFFBFFFFFFFF
% 00003FFFFFE00001FFFF
% 0001FFFFFFF00031FFFF
% 00000E000FF80073FFFF
% 00000E000FF80001FFFF
% 0003FFCFFFFFFFFFFFFF
% 0007FFFFFFFFFFFFFFFF
% 000003800071FFFFFFFF
%%EndPreview
%%EndProlog
%%Page: "one" 1
4 4 moveto 72 0 rlineto 0 16 rlineto
-72 0 rlineto closepath 8 setlinewidth stroke
%%Trailer

EPSF Files

A question that is frequently asked is "What is the difference between an EPS file and an EPSF file?" The answer is that there is no difference; they are the same format. The designation EPSF is often shortened to EPS, which is also the file extension used for EPSF files on operating systems such as MS-DOS and UNIX.

EPSF files come in two flavors. The first is a plain EPSF file that contains only PostScript code. The second is a Display or Preview EPSF file that has a TIFF, WMF, PICT, or EPSI image file appended to it. Under MS-DOS, Encapsulated PostScript files have the extension .EPS, and Encapsulated PostScript Interchange files have the extension .EPI. On the Macintosh, all PostScript files have the file type EPSF. Also on the Macintosh, a file type of TEXT is allowed for PostScript files created in an editor. Such files should have the extension .EPSF or .EPSI. All other systems should use the filename extensions .EPSF and .EPSI.


For Further Information

PostScript was created and is maintained by Adobe Systems Inc. For specific information about PostScript, contact:

Adobe Systems Inc.
Attn: Adobe Systems Developer Support
1585 Charleston Rd.
P.O. Box 7900
Mountain View, CA 94039-7900
Voice: 415-961-4400
Voice: 800-344-8335
FAX: 415-961-3769

Adobe Systems distributes and supports a PostScript Language Software Development Kit to help software developers create applications that use PostScript. The kit includes technical information on PostScript Level 1 and Level 2 compatibility, optimal use of output devices, font software, and file format specifications. Also included are sample fonts, source code, and many PostScript utilities.

There are also numerous books written on PostScript. The fundamental reference set consists of the blue, green, and red books, available in bookstores or directly from Adobe Systems or Addison-Wesley:

Adobe Systems, PostScript Language Tutorial and Cookbook, Addison-Wesley, Reading, MA, 1985.

Adobe Systems, PostScript Language Program Design, Addison-Wesley, Reading, MA, 1988.

Adobe Systems, PostScript Language Reference Manual, Second Edition, Addison-Wesley, Reading, MA, 1990.

Other excellent and readily available books about PostScript include:

Braswell, Frank Merritt, Inside PostScript, Peachpit Press, Atlanta, GA, 1989.

Glover, Gary, Running PostScript from MS-DOS, Windcrest Books, 1989.

Reid, Glenn C., Thinking in PostScript, Addison-Wesley, Reading, MA, 1990.

Roth, Stephen F., Real World PostScript, Addison-Wesley, Reading, MA, 1988.


Smith, Ross, Learning PostScript: A Visual Approach, Peachpit Press, 1990.

A very helpful PostScript resource also exists in the form of a gentleman named Don Lancaster. Mr. Lancaster is the author of more than two dozen

books on PostScript, laser printers, and desktop publishing. His articles can be read regularly in Computer Shopper magazine. His company, Synergetics, offers many PostScript products and information, as well as a free technical support hotline for PostScript users. Contact:

Synergetics
Attn: Don Lancaster
Box 809-PCT
Thatcher, AZ 85552
Voice: 602-428-4073


FaceSaver

Name: FaceSaver
Also Known As: None
Type: Bitmap
Colors: 8-bit
Compression: Uncompressed
Maximum Image Size: NA
Multiple Images Per File: No
Numerical Format: NA
Originator: Metron Computerware, Ltd.
Platform: UNIX
Supporting Applications: FaceSaver
Specification on CD: Yes
Code on CD: No
Images on CD: No
See Also: None

Usage: A little-used format, but extremely simple, and notable for the fact that there is support for this format in the PBM utilities.

Comments: This is an interesting format to examine if you are implementing an ID card system or a similar type of system that deals with the storage of small images (e.g., people's faces).

Overview

The FaceSaver format was created by Lou Katz, and FaceSaver is a registered trademark of Metron Computerware, Ltd. It was created to hold video facial portrait information in a compact, portable, and easy-to-use form, and it is intended to be printed on a PostScript printer. Outside the UNIX graphics world, it is known chiefly because it is supported by the widely used PBM utilities included on the CD-ROM that comes with this book.

The original FaceSaver images were digitized using a Truevision TGA M8 video board.


File Organization

Each FaceSaver file consists of several lines of personal information in ASCII format, followed by bitmap data. There must be at least two lines of personal information, and they must include at least the PicData and Image fields.

File Details

The personal information in a FaceSaver file can consist of the following fields:

FirstName:
LastName:
E-mail:
Telephone:
Company:
Address1:
Address2:
CityStateZip:
Date:
PicData:   width height bits/pixel
Image:     width height bits/pixel

Following these fields is a required blank line, which separates the personal information from the bitmap data that follows it. The bitmap data consists of ASCII-encoded hexadecimal information, suitable for printing on a PostScript printer. The data is stored in scan-line order, starting from the bottom of the image and continuing to the top, and from left to right. The image data comes originally from a video camera and is rotated 90 degrees before being written to a file. Each pixel is transformed before it is written to the file by multiplying its value by the factor:

256 / (max - min)

where max and min are the maximum and minimum pixel values found in the image, respectively. The Image field in the personal information above is used to correct for non-square pixels. The author writes:

In most cases, there are 108 (non-square) pixels across in the data, but they would have been 96 pixels across if they were square. Therefore, Image says 96; PicData says 108.
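As an illustration, here is a minimal sketch in C of the transform as the specification text describes it; the function name is ours, and the clamp to 255 is a defensive assumption rather than something the format requires (as is the assumption that max > min):

/* Stretch a pixel value by the factor 256 / (max - min), where max
   and min are the extreme pixel values found in the image. */
unsigned char StretchPixel(unsigned char value,
                           unsigned char min, unsigned char max)
{
    int scaled = (int)value * 256 / (max - min);   /* assumes max > min */
    return (unsigned char)(scaled > 255 ? 255 : scaled);  /* assumed clamp */
}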


For Further Information

For further information about FaceSaver, see the file format specification included on the CD-ROM that accompanies this book. You may also contact the author directly for additional information:

Lou Katz
lou@orange.metron.com


FAX Formats

Name: FAX Formats
Also Known As: FAX, Facsimile File Formats
Type: Bitmap
Colors: Mono
Compression: RLE, CCITT Group 3, CCITT Group 4
Maximum Image Size: 1728x2200 pixels (typical)
Multiple Images Per File: No
Numerical Format: NA
Originator: Many
Platform: All
Supporting Applications: Too numerous to list
Specification on CD: No
Code on CD: No
Images on CD: No
See Also: TIFF, PCX; Chapter 9, Data Compression (CCITT Group 3 and CCITT Group 4 compression)

Usage: Storage of FAX images received through computer-based FAX and FAX-modem boards.

Comments: There is no one single FAX format. The closest to standard are the formats based on a proprietary specification, such as PCX or TIFF. Consider yourself blessed if the format you need to support is one of these.

Overview

There are many facsimile (FAX) file formats, almost as many as there are FAX add-in boards. The PC-based Hijaak Graphics File Conversion Utility (by Inset Systems), as of version 2.1, supports no fewer than 22 different FAX file formats. Each format, however, is basically the same, consisting of a binary header followed by one or more sections of compressed image data. The data encoding is usually a variant of RLE, CCITT Group 3, or CCITT Group 4. Several FAX formats, in fact, are proprietary variants of better-known formats, such as TIFF and PCX.


Even though all of these FAX file formats were created to store the same kind of image data obtained from the same type of hardware device (i.e., FAX cards), each one is slightly different from all the others. This is problematic.

The evolution of the FAX card industry in some ways recapitulates the early evolution of the computer industry as a whole. Each company perhaps imagined it was working alone or, if not, would quickly come to dominate the market. As the presence of competition became clear, a mad scramble to ship products ensued, and corners were cut. Because companies making FAX cards are by definition hardware-oriented, you can guess where the corners were cut: software.

As the industry started to mature, companies realized that competition was a fact of life and tried to differentiate their products. One way to do so was through the promulgation of proprietary "standards," designed to keep the originating company one jump ahead of any competition unlucky enough not to be able to push its own specification. Add an unhealthy glop of NIH ("not invented here") spread liberally over the entire FAX board industry, and you have the present situation.

Recently, there have been signs of true maturity in the FAX card industry, with the emergence of the realization that all companies benefit by standardization, and an effort in this direction has been underway for some time. An extension of the TIFF file format, called TIFF Class F, would add the necessary tag extensions to allow easier storage and manipulation of FAX images stored in TIFF format files. (For further information, see the article on TIFF.) At the time of this writing, only one company, Everex, has adopted the unofficial TIFF Class F as its FAX file format standard (perhaps because a now-dead subsidiary of Everex, Cygnet Technologies, pioneered TIFF Class F).

If you need to convert FAX file formats, you will need a very versatile conversion utility, such as Hijaak. If you need to write code for an application that reads and writes one or more FAX file formats, you will ordinarily need to contact the manufacturer of the FAX card and obtain any information they are willing to release. If your FAX format is a common one and worth supporting, you should find that you are able to obtain the specifications you need from the manufacturer.


For Further Information

As mentioned above, the best source of information on FAX file formats is the manufacturer of the FAX card you wish to support. Some FAX card companies publish developers' toolkits for designing software to work with their FAX cards. Unless a company considers its format proprietary, it will have some sort of specification available for its FAX file format.

For more information about TIFF Class F, see the TIFF article. You may also be able to obtain the following document:

Campbell, Joe, The Spirit of TIFF Class F, Cygnet Technologies, Berkeley, CA, April 1990.

Cygnet is no longer in business, and Aldus now supports the TIFF Class F specification. You can contact Aldus at:

Aldus Corporation
Attn: Aldus Developers Desk
411 First Avenue South
Seattle, WA 98104-2871
Voice: 206-628-6593
Voice: 800-331-2538
FAX: 206-343-4240


FITS

Overview

The FITS image file format is used primarily as a method of exchanging bitmap data between different hardware platforms and software applications that do not support a common image file format. FITS is used mostly by scientific organizations and government agencies that require the storage of astronomical image data (e.g., image data returned by orbital satellites, manned spacecraft, and planetary probes) and ground-based image data (e.g., data obtained from CCD imagery, radio astronomy, and digitized photographic plates). Although the I in FITS stands for Image, FITS is actually a general-purpose data interchange format; in fact, there is nothing in the FITS specification that limits its use to bitmapped image data.


FITS was originally designed explicitly to facilitate the interchange of data between different hardware platforms, rather than between software applications. Much of the FITS data in existence today is (and traditionally always has been) ground-based, and most, if not all, of the agencies and organizations requiring the use of FITS are astronomical in nature.

FITS was originally created by Eric Greisen, Don Wells, Ron Harten, and P. Grosbol and described in a series of papers published in the journal Astronomy & Astrophysics Supplement. The NASA/OSSA Office of Standards and Technology (NOST) codified FITS by consolidating these papers into a draft standard of a format for the interchange of astronomical data between scientific organizations. Many such organizations use proprietary imaging software and image file formats not supported by other organizations. FITS, along with VICAR2 and PDS, became a standard interchange format that allows the successful exchange of image data. FITS is supported by all astronomical image processing facilities and astrophysics data archives. Much of the solar, lunar, and planetary data retrieved by the Astrophysics branch of the National Aeronautics and Space Administration (NASA) is distributed using the FITS file format. FITS is currently maintained by a Working Group of the International Astronomical Union (IAU).

Image data normally is converted to FITS not to be stored, but to be imported into another image processing system. Astronomical image data is generally stored in another format, such as the VICAR2 (Video Image Communication and Retrieval) format used by the Multi-Mission Image Processing Laboratory (MIPL). FITS itself is a very general format capable of storing many types of data, including bitmaps, ASCII text, multidimensional matrices, and binary tables. The simplest FITS file contains a header and a single data stream, called a multidimensional binary array. This is the type of FITS image file we will be examining in this article.

File Organization

In FITS terminology, a basic FITS file contains a primary header and a single primary data array. This data structure is known collectively as a Header and Data Unit (HDU). An HDU may contain a header followed by data records, or may contain only a header. All data in a FITS file is organized into logical records 2880 bytes in length.


Basically, a FITS file is a header, normally followed by a data stream. Every FITS file begins with an ASCII header that contains one or more logical records, each 2880 bytes in size. The last logical record in the header must be padded to 2880 bytes with spaces (ASCII 32). Each logical record contains 36 card images. A card image is a logical field similar to a data field in a binary image file header. Each card image contains 80 bytes of ASCII data, which describe some aspect of the organization of the FITS image file data. Card images are padded with spaces when necessary to fill out the 80 bytes and do not have delimiters. Card images that are not needed for the storage of a particular set of data contain only spaces.

Most card images may appear in any order within the header, with a few exceptions. The SIMPLE card image must always be first, followed by BITPIX second, NAXIS third, and END last. Every card image has the following syntax:

keyword = value /comment

keyword is a 1- to 8-character, left-justified ASCII string that specifies the format and use of the data stored in the card image. If a keyword contains fewer than eight characters, it is padded with spaces. A keyword always occupies columns one through eight in a card image. Only uppercase alphanumerics, hyphens (-), and underscores (_) may be used as characters in a keyword. No lowercase characters, other punctuation, or control codes may be used. If a card image does not have a keyword (the keyword is all spaces), the card image is treated as a comment.

If the keyword has an associated value, it is followed by a two-character value indicator, an equals sign and a space ("= "). This indicator always occupies columns nine and ten in the card image, and if it is present, a value follows the keyword. If the keyword does not have an associated value, any other ASCII characters may appear in place of the value indicator.

value is an ASCII representation of the numerical or string data associated with the keyword. The value is an ASCII representation of boolean, string, integer, or real (floating-point) data.

comment is an optional field that may follow any value within a card image. A comment is separated from the value by a slash (/) or a space and a slash ( /); the latter is recommended. A comment may contain any ASCII characters.
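A minimal sketch in C of how a reader might split a card image into these three parts follows; the function name is ours, and for simplicity it ignores the complication of a slash appearing inside a quoted string value:

#include <string.h>

/* Split an 80-byte card image into keyword (columns 1-8), value
   (columns 11 onward, up to a slash), and comment (after the slash). */
void ParseCardImage(const char card[80], char keyword[9],
                    char value[71], char comment[71])
{
    memcpy(keyword, card, 8);              /* keyword: columns 1-8 */
    keyword[8] = '\0';
    value[0] = comment[0] = '\0';
    if (card[8] == '=' && card[9] == ' ')  /* value indicator "= " */
    {
        const char *slash = memchr(card + 10, '/', 70);
        size_t vlen = slash ? (size_t)(slash - (card + 10)) : 70;
        memcpy(value, card + 10, vlen);
        value[vlen] = '\0';
        if (slash)                         /* comment follows the slash */
        {
            size_t clen = (size_t)((card + 80) - (slash + 1));
            memcpy(comment, slash + 1, clen);
            comment[clen] = '\0';
        }
    }
}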


The data in a card image is stored in specific columns. As noted above, the keyword always occupies columns one through eight in a card image, and the value indicator, if present, always occupies columns nine and ten. A boolean value always occupies column 30, and a complex-integer value always occupies columns 31 through 50. Columns that do not contain data are filled with spaces.

Character strings are contained within single quotes. If a string contains a single quote, the quote is represented by two consecutive single quotes ('O'Reilly' becomes 'O''Reilly'). All strings contain only 7-bit ASCII values and must be at least eight characters in length, padded with spaces if necessary. Strings may contain a maximum of 68 characters. Strings may begin with leading spaces, but trailing spaces are considered padding.

All boolean, integer, and floating-point values are represented by their ASCII string equivalents. Boolean variables are represented by the value T or F and are always found in column 30 of the card image. Integer and floating-point values are located in columns 11 through 30 and are right-justified with spaces, if necessary. Complex integers and complex floating-point values are located in columns 31 through 50 and are also right-justified when necessary. All letters used in exponential forms are uppercase. Examples of valid values are shown below:

Name                                          Size in Bits  Example
ASCII character                               8             'Saturn'
Integer, unsigned, one byte                   8             127
Integer, unsigned, two bytes                  16            32767
Integer, unsigned, four bytes                 32            1451342181
Single-precision real, fixed-point notation   32            3.14159
Single-precision real, exponential notation   32            0.314159E+01
Double-precision real, exponential notation   64            0.3141592653525D+01

File Details

This section describes FITS headers and image data.

Keywords

There are many keywords that may be included in FITS headers, and it is unlikely that any FITS reader will understand them all (unrecognized keywords are treated as comments by FITS readers). There are five keywords that are required in every FITS file: SIMPLE, BITPIX, NAXIS, NAXISn, and END. (EXTEND is also a required keyword if extensions are present in the file.) These mandatory keywords are described below:

SIMPLE

The SIMPLE keyword always appears first in any FITS header. The value of this keyword is a boolean value indicating the conformance level of the file to the FITS specification. If the file conforms to the FITS standard, this value is T. If the value is F, then the file differs in some way from the requirements specified in the FITS standard. BITPIX

The BITPIX card image contains an integer value that specifies the number of bits used to represent each data value. For image data, this is the number of bits per pixel.

BITPIX Value  Data
8             Character or unsigned binary integer
16            16-bit two's complement binary integer
32            32-bit two's complement binary integer
-32           32-bit floating point, single precision
-64           64-bit floating point, double precision

NAXIS

The NAXIS card image contains an integer value in the range of 0 to 999, indicating the number of axes in the data array. Conventional bitmaps have an NAXIS value of 2. A value of 0 signifies that no data follows the header, although an extension may be present.


NAXISn

The NAXISn card image indicates the length of each axis in BITPIX units. No NAXISn card images are present if the NAXIS value is 0. The value field of this indexed keyword contains a non-negative integer, representing the number of positions along axis n of an ordinary data array. The NAXISn card image must be present for all values n = 1, ..., NAXIS. A value of 0 for any of the NAXISn card images signifies that no data follows the header in the HDU. If NAXIS is equal to 0, there should not be any NAXISn keywords.

EXTEND

The EXTEND card image may be included if there are extensions in the FITS file. If there are no extensions, there are no EXTEND card images.

END

The END keyword indicates the end of the header and is always the last card image in a header. END has no value. The card image contains spaces in columns 9 through 80 and is padded out with spaces so that the length of the header is a multiple of 2880 bytes.

Sample Header

The header of a basic FITS image file might appear as follows (the first two lines are for positional information only and are not included in the FITS file):

12345678
12345678901234567890123456789012345678901234567890123456789012345678901234567890
SIMPLE  =                    T
BITPIX  =                    8 / 8 bits per pixel
NAXIS   =                    2 / Table is a two-dimensional matrix
NAXIS1  =                  168 / Width of table row in bytes
NAXIS2  =                    5 / Number of rows in table
DATE    = '09/17/93'
ORIGIN  = 'O''Reilly & Associates' / Publisher
AUTHOR  = 'James D. Murray' / Creator
REFERENC= 'Graphics File Formats' / Where referenced
COMMENT = 'Sample FITS header'
END

For a description of all other valid FITS header keywords, refer to the FITS specification.


FITS Image Data

Immediately following the header is the binary image data. This data is stored in 8-bit bytes and is currently never compressed. At the time of this writing, an extension to the FITS standard has been proposed to the FITS community, so future revisions to the FITS standard may incorporate data compression. The presence or absence of a primary data array is indicated by the values of the NAXIS and NAXISn keywords in the primary header. Data in a FITS file may be stored as bytes, 16- or 32-bit words, or 32- or 64-bit floating-point values that conform to the ANSI/IEEE 754 standard. Fill (ASCII 00h) is added to the data to pad it out to end on a 2880-byte boundary.

The number of bits of image data, not including the padding added to the end of the image data, may be calculated from the BITPIX, NAXIS, and NAXISn card image values:

NumberOfBits = BITPIX * (NAXIS1 * NAXIS2 * ... * NAXIS[NAXIS])
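A minimal sketch in C of this calculation, extended to round the result up to the 2880-byte logical record size, is shown below; the function name is ours:

/* Compute the number of bytes occupied by a FITS primary data array,
   including the fill added to reach a 2880-byte record boundary.
   BITPIX values of -32 and -64 denote IEEE floating point, so the
   absolute value gives the number of bits per data value. */
long FitsDataBytes(int bitpix, int naxis, const long naxisn[])
{
    long bits = (bitpix < 0) ? -bitpix : bitpix;
    long i, records;

    for (i = 0; i < naxis; i++)
        bits *= naxisn[i];
    records = (bits / 8 + 2879) / 2880;   /* round up to whole records */
    return records * 2880;
}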

For Further Information

The specification for FITS is contained in the NOST document included on the CD-ROM that accompanies this book:

Implementation of the Flexible Image Transport System (FITS), Draft Implementation Standard NOST 100-0.3b, December 1991.

A tutorial and historical guide to FITS is included in the following document, also on the CD-ROM:

A User's Guide for FITS, January 1993.

Both of these documents are also available from the NASA/OSSA Office of Standards and Technology (NOST) FITS Support Office:

NASA/OSSA Office of Standards and Technology
Code 633.2
Goddard Space Flight Center
Greenbelt, MD 20771
Voice: 301-441-4189
Voice: 301-513-1634
Internet: [email protected]
          [email protected]


The FITS standard is also described in the following references, known collectively as the "Four FITS Papers":

Wells, D. C., Greisen, E. W., and Harten, R. H., "FITS: a flexible image transport system," Astronomy and Astrophysics Supplement Series, Volume 44, 1981, pp. 363-370.

Greisen, E. W. and Harten, R. H., "An extension of FITS for small arrays of data," Astronomy and Astrophysics Supplement Series, Volume 44, 1981, pp. 371-374.

Grosbol, P., Harten, R. H., Greisen, E. W., and Wells, D. C., "Generalized extensions and blocking factors for FITS," Astronomy and Astrophysics Supplement Series, Volume 73, 1988, pp. 359-364.

Harten, R. H., Grosbol, P., Greisen, E. W., and Wells, D. C., "The FITS tables extension," Astronomy and Astrophysics Supplement Series, Volume 73, 1988, pp. 365-372.

Updated information on FITS, including new software applications, frequently appears on the Usenet newsgroups sci.astro.fits and sci.data.formats. Additional software and information on FITS may also be obtained from the following FTP sites:

fits.cv.nrao.edu:FITS
ames.arc.nasa.gov:pub/SPACE/SOFTWARE
hypatia.gsfc.nasa.gov:pub/software

FITS is also one of the primary responsibilities of the Working Group on Astronomical Software (WGAS) of the American Astronomical Society. The North American FITS Committee (Dr. Robert J. Hanisch at the Space Telescope Science Institute is the chairman) is appointed under the auspices of the WGAS. The WGAS also has a list server; for information on the WGAS mail exploder, send a mail message to:

[email protected]

The IMDISP program displays FITS images on MS-DOS machines and is maintained, enhanced, and distributed free of charge by Archibald Warnock, Ron Baalke, and Mike Martin. It is included on the CD-ROM. (See Appendix A, What's On the CD-ROM?, for information about obtaining IMDISP via FTP as well.) The NSSDC Coordinated Request and User Support Office (CRUSO) will provide IMDISP on floppy for a nominal fee. Contact them at:


Voice: 301-286-6695

[email protected]

The FITSIO package contains a collection of subroutines for reading and writing data in the FITS format. This library supports most machines, including Sun, VAX/VMS, Amiga, and the IBM PC and mainframes. It is available via FTP from:

tetra.gsfc.nasa.gov:pub/fitsio


FLI

Overview

The FLI file format (sometimes called Flic) is one of the most popular animation formats found in the MS-DOS and Windows environments today. FLI is used widely in animation programs, computer games, and CAD applications requiring 3D manipulation of vector drawings. Flic, in common with most animation formats, does not support either audio or video data, but instead stores only sequences of still image data.

FLI is popular because of its simple design. It is easy to implement FLI readers and writers in software-only applications. FLI also enables quick animation playback and does not require special hardware to encode or decode its data. FLI is best suited for computer-generated or hand-drawn animation sequences, such as those created using animation and CAD programs. Such images achieve the best compression ratios when stored using the FLI format.


Natural, real-world images may also be animated by FLI, but such images usually contain a fair amount of noise that degrades the ability of the FLI encoding algorithms to compress the data and may therefore affect the speed of the animation playback. Also, the fewer the colors in the animation, the better the compression ratio will typically be.

There are two types of FLI animation files. The original FLI format has the extension .FLI, has a maximum display resolution of 320x200 pixels, and is capable of supporting only 64 colors. This format was created for use by the Autodesk Animator application. The newer FLI format has the extension .FLC, has a maximum display resolution of 64Kx64K pixels, and supports up to 256 colors. The data compression scheme used by .FLC files is also more efficient than the scheme used by .FLI files. Applications such as the IBM Multimedia Tool Series, Microsoft Video for Windows, and Autodesk Animator Pro all support .FLC files.

Any application capable of reading the newer .FLC files should be able to read and play back the older .FLI files as well. However, most newer FLI file writers may only have the capability of creating .FLC files. There is really no reason to create .FLI files, unless the animations you are producing must run under software that reads only the .FLI format.

File Organization

FLI animations are sequences of still images called frames. Each frame contains a slice of the animation data. The speed of the animation playback is controlled by specifying the amount of delay that is to occur between each frame.

The data in each frame is always color mapped. Each pixel in a frame contains an index value into a color map defined for that frame. The colors in the map may change from frame to frame as required. And, although a FLI file is limited to displaying a maximum of 256 colors per frame, each color map entry is 24 bits in depth, resulting in a palette of more than 16 million colors from which to choose.

The FLI format also supports several types of data compression. Each frame of a FLI animation is typically compressed using an interframe delta encoding scheme. This scheme encodes only the differences between adjacent image frames and not the frames themselves. This strategy results in significantly smaller files than if each frame were independently encoded (intraframe encoding). Interframe-encoded data is also very fast to decompress and display.


The first frame of every FLI animation is compressed in its entirety, using a simple run-length encoding algorithm; because only the differences between successive frames are encoded, you have to start somewhere. If a frame is delta encoded and the resulting compressed data is larger than the original uncompressed data (quite possible with noisy, natural images), the frame may be stored uncompressed.

File Details

The header of a FLI file is 128 bytes in length. The first nine fields (22 bytes) are the same for both .FLI and .FLC files. The last ten fields (106 bytes) contain valid data only in .FLC files and are set to 00h in .FLI files. The FLI file header has the following format:

typedef struct _FlicHeader
{
    DWORD FileSize;        /* Total size of file */
    WORD  FileId;          /* File format indicator */
    WORD  NumberOfFrames;  /* Total number of frames */
    WORD  Width;           /* Screen width in pixels */
    WORD  Height;          /* Screen height in pixels */
    WORD  PixelDepth;      /* Number of bits per pixel */
    WORD  Flags;           /* Set to 03h */
    DWORD FrameDelay;      /* Time delay between frames */
    WORD  Reserved1;       /* Not used (set to 00h) */
    /* The following fields are set to 00h in a .FLI file */
    DWORD DateCreated;     /* Time/Date the file was created */
    DWORD CreatorSN;       /* Serial number of creator program */
    DWORD LastUpdated;     /* Time/Date the file last changed */
    DWORD UpdaterSN;       /* Serial number of updater program */
    WORD  XAspect;         /* X-axis of display aspect ratio */
    WORD  YAspect;         /* Y-axis of display aspect ratio */
    BYTE  Reserved2[38];   /* Not used (set to 00h) */
    DWORD Frame1Offset;    /* Offset of first frame */
    DWORD Frame2Offset;    /* Offset of second frame */
    BYTE  Reserved3[40];   /* Not used (set to 00h) */
} FLICHEADER;


FileSize contains the total size of the FLI file in bytes.

FileId contains a value identifying the type of Flic file. A value of AF11h indicates an .FLI file, and a value of AF12h indicates an .FLC file.

NumberOfFrames contains the total number of frames of animation data. A .FLC file may contain a maximum of 4000 frames; this count does not include the ring frame (an extra frame at the end of the file holding the delta back to the first frame, which allows the animation to loop smoothly).

Width and Height specify the size of the animation in pixels.

PixelDepth indicates the number of bits per pixel; the value of this field is always 08h.

Flags is always set to 03h, as an indication that the file was properly updated.

FrameDelay indicates the amount of time delay between frames and is used to control the speed of playback. For .FLI files, this value is interpreted in units of 1/70 of a second. For .FLC files, this value is in units of 1/1000 of a second.

Reserved1 is not used and is set to 00h.

DateCreated is an MS-DOS date stamp (the number of seconds occurring since midnight, January 1, 1970) of the date the FLI file was created.

CreatorSN contains the serial number of the Animator Pro application program that created the FLI file. If the file was created by an application using the FlicLib development library, the value of this field will be 46h 4Ch 49h 42h ("FLIB").

LastUpdated is an MS-DOS date stamp indicating the last time the FLI file was modified (also in number of seconds since midnight, January 1, 1970).

UpdaterSN contains the serial number of the program that last modified the Flic file.

XAspect and YAspect contain the aspect ratio of the display used to create the animation. For a display with a resolution of 320x200, the aspect ratio is 6:5, and these fields contain the values 6 and 5, respectively. For all other resolutions, the aspect ratio is typically 1:1.

Reserved2 is not used and is set to 00h.

Frame1Offset and Frame2Offset contain the offsets of the first and second frames, respectively, of the animation from the beginning of the file. The first frame offset is used to identify the beginning of the animation. The second offset is used as the starting point when the animation loops back to the beginning of the animation from the ring frame.

Reserved3 is not used and is set to 00h.

Chunks

All of the data in a FLI file is encapsulated in chunks. Each chunk is a collection of data beginning with a header and followed by the data for that chunk. Chunks may also contain subchunks of data with the same basic format. If a Flic reader encounters a chunk it does not recognize, it should simply skip over it.

Each chunk in a FLI file begins with a 16-byte header that has the following format:

typedef struct _ChunkHeader
{
    DWORD ChunkSize;       /* Total size of chunk */
    WORD  ChunkType;       /* Chunk identifier */
    WORD  NumberOfChunks;  /* Number of subchunks in this chunk */
    BYTE  Reserved[8];     /* Not used (set to 00h) */
} CHUNKHEADER;

ChunkSize is the size of the chunk in bytes. This value includes the size of the header itself and any subchunks contained within the chunk.

ChunkType is an identification value indicating the format of the chunk and the type of data it contains.

NumberOfChunks specifies the number of subchunks contained within this chunk.

Reserved is not used and is set to 00h.

As we have said, a chunk may contain subchunks. In fact, the entire FLI file itself is a single chunk that begins with the FLICHEADER structure. In .FLC files, an optional CHUNKHEADER structure may follow the FLICHEADER structure. This secondary header is called a prefix header. If present, this header contains information specific to the Animator Pro application that is not used during the playback of the animation. Other applications can safely skip over the prefix header and ignore the information it contains. Applications other than Animator Pro should never include a prefix header in any .FLC files they create.

For the prefix header, ChunkSize is the size of the entire FLI file minus the 128 bytes of the FLICHEADER, ChunkType is F100h, and NumberOfChunks contains the total number of subchunks in the file.

Following the prefix header is a series of frame chunks. Each frame chunk contains a single frame of data from the animation. For a frame chunk, ChunkSize is the total number of bytes in the frame, including the header and all subchunks; ChunkType is always F1FAh; and NumberOfChunks contains the total number of subchunks in the frame. If the NumberOfChunks value is 0, this frame is identical to the previous frame: no color map or frame data is stored, and the previous frame is repeated with the delay specified in the header.
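As an illustration of how a reader can use the chunk headers to step through a file, here is a minimal sketch in C; the function name is ours, and note that a real reader must special-case the Animator Pro prefix header (type F100h), whose ChunkSize covers the rest of the file:

#include <stdio.h>

/* Walk the top-level chunks that follow the 128-byte file header,
   reporting frame chunks (type F1FAh) and skipping anything else. */
void WalkChunks(FILE *fp, long fileSize)
{
    unsigned char hdr[16];
    long offset = 128;

    while (offset < fileSize && fseek(fp, offset, SEEK_SET) == 0
           && fread(hdr, 1, 16, fp) == 16)
    {
        unsigned long size = hdr[0] | (hdr[1] << 8)
                           | ((unsigned long)hdr[2] << 16)
                           | ((unsigned long)hdr[3] << 24);
        unsigned int type = hdr[4] | (hdr[5] << 8);

        if (type == 0xF1FA)
            printf("frame chunk, %lu bytes\n", size);
        if (size == 0)                 /* guard against corrupt files */
            break;
        offset += (long)size;
    }
}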

The following table lists all the subchunks that may be found in a frame chunk:

ChunkType Value  Chunk Name  Chunk Data Description
04h              COLOR_256   256-level color palette (.FLC files only)
07h              DELTA_FLC   Delta-compressed frame data (.FLC files only)
0Bh              COLOR_64    64-level color palette (.FLI files only)
0Ch              DELTA_FLI   Delta-compressed frame data (.FLI files only)
0Dh              BLACK       Black frame data
0Fh              BYTE_RUN    RLE-compressed frame data
10h              FLI_COPY    Uncompressed frame data
12h              PSTAMP      Postage stamp image (.FLC files only)

The following shows the general internal arrangement of a FLI file:

FLI header
Prefix header (optional)
Frame 1 (RLE compressed)
    PSTAMP subchunk (optional)
    COLOR_256 subchunk (256 colors)
    BYTE_RUN subchunk
Frame 2 (Delta compressed)
    COLOR_256 subchunk (colors different from previous map)
    DELTA_FLC subchunk
Frame 3 (Uncompressed)
    COLOR_256 subchunk (colors different from previous map)
    FLI_COPY subchunk
Frame 4 (Black)
    BLACK subchunk
...
Frame n (Delta compressed)
    COLOR_256 subchunk (colors different from previous map)
    DELTA_FLC subchunk

Each frame chunk contains at least two subchunks: a color map and the data for the frame. The frame data may be stored in one of several different compressed or uncompressed formats. The first frame of a Flic animation may also contain an additional postage stamp subchunk. Following is an explanation of each subchunk.

DELTA_FLI chunk

The DELTA_FLI chunk contains a single frame of data, compressed using delta encoding. The data in this chunk contains the pixel-value differences between the current frame and the previous frame. Each scan line of the frame that contains pixel changes is encoded into packets, and only the values of the pixels in the line that have changed are stored. The DELTA_FLI encoding scheme is an older scheme found mostly in .FLI files, although .FLC files may also contain DELTA_FLI chunks. The format of a DELTA_FLI chunk is as follows:

typedef struct _DeltaFliChunk
{
    CHUNKHEADER Header;           /* Header for this chunk */
    WORD LinesToSkip;             /* Number of initial lines to skip */
    WORD NumberOfLines;           /* Number of encoded lines */
    struct _Line                  /* Encoded line (one per NumberOfLines) */
    {
        BYTE NumberOfPackets;     /* Number of packets in this line */
        BYTE LineSkipCount;       /* Number of lines to skip */
        struct _Packet            /* Encoded packet (one per NumberOfPackets) */
        {
            BYTE SkipCount;       /* Number of pixels to skip */
            BYTE PacketType;      /* Type of encoding used on this packet */
            BYTE PixelData[];     /* Pixel data for this packet */
        } Packet[NumberOfPackets];
    } Lines[NumberOfLines];
} DELTAFLICHUNK;

LinesToSkip contains the number of lines down from the top of the image that are unchanged from the prior frame. This value is used to find the first scan line that contains deltas. NumberOfLines indicates the number of encoded scan lines in this chunk.

NumberOfPackets indicates the number of packets used to encode this scan line. Each encoded scan line begins with this value. LineSkipCount is the number of lines to skip to locate the next encoded line.

Each packet in every encoded line contains two values. SkipCount indicates the location of the pixel deltas in this line that are encoded in this packet. PacketType specifies the type of encoding used in this packet. A positive value indicates that the next "PacketType" pixels are to be read literally from the chunk and written to the display. A negative value indicates that a single pixel value is to be read and repeated the absolute value of "PacketType" times.

For example, suppose that we have a frame with three encoded scan lines. The first is line number 25, which contains deltas at pixels 4, 22, 23, 24, and 202. The second is line number 97, which contains deltas at pixels 20 and 54 through 67. The third is line number 199, in which all 320 pixels of the line have changed to the same color. The sequence of line and packet field values is shown below:

LinesToSkip           24    Skip 24 lines to first encoded line
NumberOfLines         3     Three encoded lines in this frame

Line 1
  NumberOfPackets     3     Three encoded packets in this line
  LineSkipCount       71    Skip 71 lines to the next encoded line
  Packet 1
    SkipCount         4     Skip 4 pixels
    PacketType        1     Read one pixel literally
    PixelData         23    New value of pixel 4 is 23
  Packet 2
    SkipCount         17    Skip 17 pixels
    PacketType        -3    Read one pixel and repeat 3 times
    PixelData         65    New value of pixels 22, 23, and 24 is 65
  Packet 3
    SkipCount         176   Skip 176 pixels
    PacketType        1     Read one pixel literally
    PixelData         17    New value of pixel 202 is 17

Line 2
  NumberOfPackets     2     Two encoded packets in this line
  LineSkipCount       102   Skip 102 lines to the next encoded line
  Packet 1
    SkipCount         20    Skip 20 pixels
    PacketType        1     Read one pixel literally
    PixelData         121   New value of pixel 20 is 121
  Packet 2
    SkipCount         32    Skip 32 pixels
    PacketType        13    Read next 13 pixels literally
    PixelData         255   New value of pixels 54 through 67 is 255

Line 3
  NumberOfPackets     2     Two encoded packets in this line
  LineSkipCount       0     Last encoded line in frame
  Packet 1
    SkipCount         0     Start at first pixel in line
    PacketType        -256  Read one pixel and repeat 256 times
    PixelData         0     New value of pixels 0 through 255 is 0
  Packet 2
    SkipCount         256   Skip 256 pixels
    PacketType        -64   Read one pixel and repeat 64 times
    PixelData         0     New value of pixels 256 through 319 is 0
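The packet logic of this example can be expressed as a minimal sketch in C; the function name is ours, and no bounds checking is shown:

/* Apply one DELTA_FLI packet to a scan line: positive PacketType
   means copy that many literal pixels; negative means read one pixel
   and repeat it |PacketType| times. Returns the advanced input
   pointer; *x tracks the current pixel position in the line. */
const unsigned char *ApplyDeltaPacket(const unsigned char *in,
                                      unsigned char *line, int *x)
{
    int type;

    *x += *in++;                       /* SkipCount */
    type = (signed char)*in++;         /* PacketType */
    if (type >= 0)                     /* literal run */
        while (type--)
            line[(*x)++] = *in++;
    else                               /* replicate run */
    {
        unsigned char value = *in++;
        for (type = -type; type--; )
            line[(*x)++] = value;
    }
    return in;
}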

DELTA_FLC chunk

The DELTA_FLC chunk is a newer version of the DELTA_FLI chunk and is found in all .FLC files. This chunk is essentially the same as the DELTA_FLI chunk, with a few field modifications. The PixelData values stored in a DELTA_FLC chunk are 16 bits in size, rather than the 8-bit pixel size found in the DELTA_FLI chunk. The structure of a DELTA_FLC chunk is as follows:

typedef struct _DeltaFlcChunk
{
    CHUNKHEADER Header;       /* Header for this chunk */
    WORD NumberOfLines;       /* Number of encoded lines in chunk */
    struct _Line              /* Encoded line (one per NumberOfLines) */
    {
        WORD PacketCount;     /* Packet count, skip count, or last-byte value */
        /* Additional WORDs of data may appear in this location */
        struct _Packet        /* Encoded packet (one per packet count) */
        {
            BYTE SkipCount;   /* Number of pixels to skip */
            BYTE PacketType;  /* Type of encoding used on this packet */
            WORD PixelData[]; /* Pixel data for this packet */
        } Packet[NumberOfPackets];
    } Lines[NumberOfLines];
} DELTAFLCCHUNK;

The number of fields occurring between PacketCount and the first packet varies, depending upon the value stored in PacketCount. The two most significant bits of PacketCount determine the interpretation of the value stored in this field. If these two bits are 0, the value is the number of packets occurring in this line; packet data immediately follows this field, and no additional WORD values follow. A value of 0 in this field indicates that only the last pixel on the line has changed. If the most significant bit (bit 15) is 1 and the next bit (bit 14) is 0, the low byte of this WORD is to be stored in the last byte of the current line; a WORD field containing the number of packets in this line follows this value. If both bits 14 and 15 are set to 1, PacketCount contains a skip count to the next encoded line. PacketCount may then be followed by additional WORD values containing a packet count, skip counts, or last-byte values.
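A minimal sketch in C of this bit test follows; the names are ours:

typedef enum { PACKET_COUNT, LAST_BYTE, SKIP_LINES } LineWordKind;

/* Classify the leading WORD of a DELTA_FLC encoded line by its two
   most significant bits. */
LineWordKind ClassifyLineWord(unsigned short word)
{
    if ((word & 0x8000) == 0)
        return PACKET_COUNT;   /* bit 15 = 0: packet count */
    if ((word & 0x4000) == 0)
        return LAST_BYTE;      /* bits 15,14 = 1,0: low byte -> last pixel */
    return SKIP_LINES;         /* bits 15,14 = 1,1: line skip count */
}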


BYTE_RUN chunk

When a frame is run-length encoded, the data is stored in a BYTE_RUN chunk. Normally, only the data in the first frame of an animation is encoded using this scheme. The structure of a BYTE_RUN chunk is as follows:

typedef struct _ByteRunChunk
{
    CHUNKHEADER Header;  /* Header for this chunk */
    BYTE PixelData[];    /* RLE pixel data */
} BYTERUNCHUNK;

Each line in the frame is individually encoded into a series of one or more RLE packets. In the original .FLI format, the first byte of each encoded line was the count of the number of packets used to encode that line, with a packet maximum of 255. The .FLC format, however, allows much longer lines to be used in an animation, and more than 255 packets may be used to encode a line. Therefore, in both .FLC and .FLI files, this initial count byte is read and ignored. Instead, a FLI reader should keep track of the number of pixels decoded to determine when the end of a scan line has been reached.

The RLE scheme used in the BYTE_RUN packet is fairly simple. The first byte in each packet is a type byte that indicates how the packet data is to be interpreted. If the value of this byte is a positive number, the next byte is read and repeated "type" times. If the value is negative, it is converted to its absolute value, and the next "type" pixels are read literally from the encoded data.
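A minimal sketch in C of a BYTE_RUN line decoder follows; the function name is ours. Note that the run/literal convention here is the reverse of the DELTA_FLI packet convention:

/* Decode one run-length encoded scan line of 'width' pixels.
   Positive type byte: repeat the next byte 'type' times.
   Negative type byte: copy |type| bytes literally. */
const unsigned char *DecodeByteRunLine(const unsigned char *in,
                                       unsigned char *dst, int width)
{
    int decoded = 0;

    in++;                              /* obsolete packet-count byte */
    while (decoded < width)
    {
        int type = (signed char)*in++;
        if (type > 0)                  /* replicate run */
        {
            unsigned char value = *in++;
            while (type-- && decoded < width)
                dst[decoded++] = value;
        }
        else                           /* literal run */
        {
            for (type = -type; type-- && decoded < width; )
                dst[decoded++] = *in++;
        }
    }
    return in;
}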

FLI_COPY chunk

This chunk contains a single, uncompressed frame of data. Data is stored uncompressed only when delta or RLE encoding would result in negative compression. The structure of a FLI_COPY chunk is as follows:

typedef struct _CopyChunk
{
    CHUNKHEADER Header;  /* Header for this chunk */
    BYTE PixelData[];    /* Raw pixel data */
} COPYCHUNK;


The number of pixels in this chunk is equal to the product of the Width and Height fields (Width * Height) in the FLI file header. FLI_COPY chunks usually result when very complex or noisy images cause the compressed frames to be larger than the uncompressed originals.

PSTAMP chunk

PSTAMP is a postage stamp of a FLI animation, found in the first frame chunk of .FLC files only. This stamp may be a reduced-size copy of a frame from the animation, possibly from the title screen, that is used as an icon. The size of the stamp is usually 100x63 pixels, but will vary to match the aspect ratio of the frame. This chunk is skipped by FLI readers that do not support the use of PSTAMP chunks. The PSTAMP chunk contains a CHUNKHEADER and two subchunks:

typedef struct _PstampChunk
{
    DWORD ChunkSize;         /* Total size of chunk */
    WORD  ChunkType;         /* Chunk identifier */
    WORD  Height;            /* Height of stamp in pixels */
    WORD  Width;             /* Width of stamp in pixels */
    WORD  ColorType;         /* Color translation type */
    BYTERUNCHUNK PixelData;  /* Postage stamp data */
} PSTAMPCHUNK;

ChunkSize is the total size of the PSTAMP chunk.

ChunkType is 0Fh, 10h, or 12h.

Height and Width are the height and width of the stamp in pixels.

ColorType indicates the type of color space used by the postage stamp image. This value is always 01h, indicating a six-cube color space (see the FLI file format specification for more information on six-cube color space).

Following this header is the postage stamp data chunk. The ChunkType of this header indicates the format of the pixel data. Values are:

0Fh    Indicates run-length encoding (a BYTE_RUN chunk)
10h    Indicates uncompressed data (a FLI_COPY chunk)
12h    Indicates a six-cube color translation table



BLACK chunk

The BLACK chunk represents a single frame of data in which all pixels are set to the color index 0 (normally black) in the color map for this frame. This chunk itself contains no data and has a ChunkType of 0Dh. The BLACK chunk contains only a CHUNKHEADER:

typedef struct _BlackChunk
{
    CHUNKHEADER Header;    /* Header for this chunk */
} BLACKCHUNK;

COLOR_64 and COLOR_256 chunks

The FLI file format uses a color map to define the colors in an animation. The older .FLI format may have a maximum of 64 colors and stores its color map in a COLOR_64 chunk. A .FLC file may have up to 256 colors and stores its color map in a COLOR_256 chunk. Both of these chunks have the same format:

typedef struct _ColormapChunk
{
    CHUNKHEADER Header;             /* Header for this chunk */
    WORD NumberOfElements;          /* Number of color elements in map */
    struct _ColorElement            /* Color element (one per NumberOfElements) */
    {
        BYTE SkipCount;             /* Color index skip count */
        BYTE ColorCount;            /* Number of colors in this element */
        struct _ColorComponent      /* Color component (one per ColorCount) */
        {
            BYTE Red;               /* Red component color */
            BYTE Green;             /* Green component color */
            BYTE Blue;              /* Blue component color */
        } ColorComponents[ColorCount];
    } ColorElements[NumberOfElements];
} COLORMAPCHUNK;

The value of ChunkSize in the Header varies depending upon the number of elements in this color map. A chunk containing a color map with 256 elements is 788 bytes in size and therefore ChunkSize contains the value 788.



ChunkType contains a value of 04h for a COLOR_256 chunk or a value of 0Bh for a COLOR_64 chunk.

NumberOfChunks always contains a value of 00h, indicating that this chunk contains no subchunks.

NumberOfElements indicates the number of ColorElement structures in the COLORMAPCHUNK structure. Following this value are the actual ColorElement structures themselves. Each structure contains two fields and one or more ColorComponent structures.

SkipCount indicates the number of color elements to skip when locating the next color map element.

ColorCount indicates the number of ColorComponents structures contained within this ColorElement structure. Following the ColorCount field are the actual ColorComponents structures. Each structure is three bytes in size and contains three fields.

The Red, Green, and Blue fields of each ColorComponents structure contain the component values for this color. The range of these field values is 0 to 63 for a COLOR_64 chunk and 0 to 255 for a COLOR_256 chunk.

Normally, an image file contains only one color map. A FLI file, however, allows a color map to be defined for each frame in the animation. Storing a complete color map for each frame would normally require quite a bit of data (768 bytes per frame). FLI files, however, have the capability of storing color maps that contain only the colors that change from frame to frame. Storing only the deltas in a color map requires that not only the color values be stored, but also their locations in the map. This is accomplished by using a color index value and a skip count. Before a color map value is written, the skip count of the packet is added to the current color index. This sum is the location of the next color map value to write. The entries in the packet are then written across the same number of entries in the color map. The color index for each color map always starts with the value 0 and advances past each entry written.

For example, the first frame of a .FLC animation always contains a full, 256-element color map. This map is represented by a NumberOfElements value of 1 followed by a single ColorElements structure. This structure will contain a SkipCount value of 0, a ColorCount value of 256, and 256 ColorComponents structures defining the colors in the map. This chunk is 788 bytes in size.

Now, let's say that in the next frame the colors 2, 15, 16, 17, and 197 in the color map are different from those in the first frame. Rather than storing another 788-byte color map chunk, with 251 elements identical to the color map in the previous chunk, we will store only the values and positions of the five color components that changed in the color map. The color map chunk for the second frame will then contain a NumberOfElements value of 3, followed by three ColorElements structures:

• The first structure will have a SkipCount value of 2, a ColorCount value of 1, and one ColorComponents structure defining the new color values of element 2.

• The second structure will have a SkipCount value of 12, a ColorCount value of 3, and three ColorComponents structures defining the new color values of elements 15, 16, and 17.

• The third structure will have a SkipCount value of 179, a ColorCount value of 1, and one ColorComponents structure defining the new color value of element 197.

This chunk will be only 39 bytes in size. The sequence of fields and values for this map is the following:

NumberOfElements        3
ColorElement
    SkipCount           2
    ColorCount          1
    ColorComponent      R,G,B
ColorElement
    SkipCount           12
    ColorCount          3
    ColorComponent      R,G,B
    ColorComponent      R,G,B
    ColorComponent      R,G,B
ColorElement
    SkipCount           179
    ColorCount          1
    ColorComponent      R,G,B

As you can see, the location of changed color elements is determined by their relative position from the previous changed elements and from their absolute position in the color map. The SkipCount value of the first element is always calculated from the 0th index position, and the index advances past each element written. To change the value of element 2, we skip two places, from element 0 to element 2, and change a single component value.


To change the values of elements 15, 16, and 17, we skip 12 places from element 3 (the position following the last element written) to element 15 and change the next three component values. We then skip 179 places from element 18 to element 197 and change the final component value.

Note that if the color map for the current frame is identical to the color map of the previous frame, the color map subchunk need not appear in the current frame chunk.
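The skip-count scheme is easier to see in code. The following sketch applies one COLOR_64 or COLOR_256 chunk to an in-memory 256-entry palette. ReadWord() and ReadByte() are assumed I/O helpers, and the treatment of a ColorCount of 0 as 256 is a common convention (the count is a single byte) that should be verified against the specification:

void ApplyColormapChunk(FILE *fp, BYTE Palette[256][3])
{
    WORD elements = ReadWord(fp);       /* NumberOfElements */
    int  index = 0;                     /* color index always starts at 0 */
    while (elements--)
    {
        int skip  = ReadByte(fp);       /* SkipCount */
        int count = ReadByte(fp);       /* ColorCount (0 taken to mean 256) */
        if (count == 0)
            count = 256;
        index += skip;                  /* jump to the next changed entry */
        while (count--)
        {
            Palette[index][0] = ReadByte(fp);   /* Red */
            Palette[index][1] = ReadByte(fp);   /* Green */
            Palette[index][2] = ReadByte(fp);   /* Blue */
            index++;                    /* index ends up past the last entry written */
        }
    }
}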

For Further Information

Autodesk no longer maintains the FLI format and does not distribute information about it. For further information about FLI, see the articles by Jim Kent and John Bridges (the author of the format) that are included on the CD that accompanies this book. In addition, see the following article for information on FLI:

Kent, Jim. "The Flic File Format," Dr. Dobb's Journal, March 1993.



GEM Raster

Name: GEM Raster
Also Known As: IMG
Type: Bitmap
Colors: 16,384
Compression: RLE, uncompressed
Maximum Image Size: 64Kx64K
Multiple Images Per File: No
Numerical Format: Big-endian
Originator: Digital Research, now part of Novell
Platform: GEM, MS-DOS, Atari ST
Supporting Applications: GEM-based applications. Versions of the MS-DOS-based Ventura Publisher were distributed bound to GEM that served mainly to provide GUI services to the application. Many programs on the Atari ST.
Specification on CD: No
Code on CD: No
Images on CD: Yes
See Also: None
Usage: Primarily useful in GEM-based application environments.

Comments: A poorly documented format (in the sense that documentation is hard to come by) in wide use only on the Atari ST platform. It lacks a superior compression scheme and support for included color information. There are at least two versions in existence.

Overview

GEM Raster (also known as IMG) is the native image storage format for the Graphical Environment Manager (GEM), developed and marketed by Digital Research. GEM made its way into the market through OEM bundling deals, special runtime versions bound to products, and as the native operating environment of at least one system, the Atari ST. GEM image files have been important in the PC desktop publishing community due to the bundling deal between Digital Research and the creators of Ventura Publisher, a widely used desktop publishing application.


Although GEM was a contender in the GUI wars some years back, Digital Research's fortunes in this arena declined and the company was eventually purchased by Novell. Prior to this, however, GEM was distributed by a number of PC hardware manufacturers along with their systems, and thus enjoyed a certain currency.

GEM raster images may be color, gray scale, or black and white and are always read and written in the big-endian format. Note that several different file formats use the file extension .IMG, a fact which causes confusion in some applications designed to read only GEM raster (IMG) files.

File Organization

Like many other simple bitmap formats, GEM raster files start with a fixed-length header, followed by bitmap data.

File Details

GEM raster files use a 16- or 18-byte header in the following format:

typedef struct _GemRaster
{
    WORD Version;          /* Image File Version (Always 1h) */
    WORD HeaderLength;     /* Size of Header in WORDs */
    WORD NumberOfPlanes;   /* Number of Planes */
    WORD PatternLength;    /* Pattern Definition Length */
    WORD PixelWidth;       /* Pixel Width in Microns */
    WORD PixelHeight;      /* Pixel Height in Microns */
    WORD ScanLineWidth;    /* Image Width in Pixels */
    WORD NumberOfLines;    /* Image Height in Scan Lines */
    WORD BitImageFlag;     /* Multi-plane Gray/Color Flag */
} GEMHEAD;

Version always has a value of 1.

HeaderLength is either 8 or 9; if the value is 8, then there is no BitImageFlag field in the header.

NumberOfPlanes contains the number of bits per pixel of the image source device (a scanner, for instance). This value is typically 1.

PatternLength contains a run count value, which is usually 1. Any pattern code found in the encoded image data is repeated this number of times.



PixelHeight and PixelWidth are the pixel size in microns and are often 85 (55h), corresponding to 1/300 inch, or 300 dpi. The scale of the image may also be determined by using these pixel size values.

ScanLineWidth and NumberOfLines describe the size of the image in pixels and scan lines.

BitImageFlag indicates whether a multi-plane image is color or gray scale. If the BitImageFlag field is present in the header (indicated by a value of 9 in the HeaderLength field), and the image data contains multiple planes (indicated by a value of 2 or greater in the NumberOfPlanes field), a value of 0 indicates color image data and a value of 1 indicates gray-scale image data. If a multi-plane image has an 8-field header, then the image is displayed in gray scale from a fixed, 16-color palette by default. If the image has a 9-field header and only a single plane, the value in the BitImageFlag field is ignored.

Image data in GEM raster files is always encoded using a simple run-length encoding (RLE) scheme. Data is always encoded and decoded one byte at a time, and there are always eight pixels per byte of image data. For this reason, scan lines are always a multiple of eight pixels in width and are padded when necessary. If the image data contains two or more bits per pixel, then the image will have multiple bit planes.

There are four types of codes in the GEM raster RLE format: vertical replication codes, literal run codes, pattern codes, and encoded runs. Complicating this RLE scheme is the fact that each of these four codes is a different size, as shown below:

Vertical Replication Code    00 00 FF
Pattern Code                 00
Literal Run Code             80
Black Run Code               single byte, 81h-FFh (most significant bit set)
White Run Code               single byte, 01h-7Fh (most significant bit clear)

A vertical replication code contains the values 00h 00h FFh, followed by a one-byte count. The count is the number of times to repeat the line that is about to be decoded. A count of one indicates two identical, consecutive lines. A vertical replication code may only appear at the beginning of a scan line. If a replication code is not present at the beginning of a scan line, the line is not repeated.

Literal runs are contiguous lines of pixels that are not encoded. They are written to the encoded data stream as they appear in the bitmap. Literal runs usually appear in encoded image data because data compression had little effect on the pixel data, and it was not efficient to encode the pixels as a run. A literal run code begins with the byte value 80h and is followed by a byte that holds the count value. Following the count are a number of bytes equal to the count value that should be copied literally from the encoded data to the decoded data.

A pattern code begins with the byte 00h and is followed by a byte containing the pattern length. That length is followed by the pattern itself, which is replicated the number of times specified by the PatternLength field in the header. Pattern codes are similar to literal run codes, in that the data they contain is not actually compressed in the encoded image data.

Encoded run codes contain only runs of either black or white pixels and are by far the most numerous of all the codes in IMG RLE image data. Black and white runs are encoded as one-byte packets. Encoded run packets are never 00h or 80h in value; these values are reserved to mark the start of vertical replication codes, pattern codes, and literal run codes. If a byte is read and is not equal to 00h or 80h, the most significant bit indicates the color of the run. If the most significant bit is 1, all the pixels in the run are set to 1 (black). If the most significant bit is 0, the pixels in the run are set to 0 (white). The seven least significant bits in the encoded run are the number of bytes in the run. The run may contain 1 to 127 bytes.

If an image contains multiple planes, each plane is encoded as the next consecutive scan line of data. One scan line of a four-plane image is encoded as four scan lines of data. The order of the planes is red, green, blue, and intensity value.

The following segment of an encoded scan line:

00 00 FF 05 07 8A 02 80 04 2A 14 27 C9 00 03 AB CD EF

represents a vertical replication code of five scan lines, a run of seven white bytes (56 pixels), a run of 10 black bytes (80 pixels), a run of two white bytes (16 pixels), a literal run of four bytes, and a pattern code three bytes in length.
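A decoder for this scheme might look like the following sketch, which expands one encoded scan line of one plane into a buffer of bytes (eight pixels per byte). The function and variable names are ours, and vertical replication is assumed to be handled by the caller, which repeats the previously decoded line the indicated number of times:

#include <string.h>

BYTE *DecodeImgLine(BYTE *src, BYTE *line, int widthBytes, int patternLength)
{
    int decoded = 0;
    while (decoded < widthBytes)
    {
        BYTE code = *src++;
        if (code == 0x00)                   /* pattern code */
        {
            int len = *src++;
            int i;
            for (i = 0; i < patternLength; i++)
            {
                memcpy(line + decoded, src, len);
                decoded += len;             /* pattern repeated PatternLength times */
            }
            src += len;
        }
        else if (code == 0x80)              /* literal run */
        {
            int len = *src++;
            memcpy(line + decoded, src, len);
            src     += len;
            decoded += len;
        }
        else                                /* solid black or white run */
        {
            int len = code & 0x7F;          /* run length in bytes */
            memset(line + decoded, (code & 0x80) ? 0xFF : 0x00, len);
            decoded += len;
        }
    }
    return src;                             /* start of the next encoded line */
}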

For Further Information

The GEM raster format originated at Digital Research, which is now owned by Novell, and is currently being supported by DISCUS Distribution Services, a service organization. Note that DISCUS will provide support only if you have first purchased the GEM Programmer's Toolkit from Digital Research. Contact DISCUS at:

DISCUS Distribution Services, Inc.

8020 San Miguel Canyon Road
Salinas, CA 93907-1208
Voice: 408-663-6966

You may be able to get some information from Novell/Digital Research at:

Novell/Digital Research, Inc.
P.O. Box DRI
Monterey, CA 93942
Voice: 408-649-3896
Voice: 800-848-1498
BBS: 408-649-3896



GEM VDI

Name: GEM VDI
Also Known As: GEM Vector, VDI, .GDI
Type: Metafile
Colors: 256
Compression: Uncompressed
Maximum Image Size: 32Kx32K
Multiple Images Per File: No
Numerical Format: Big-endian
Originator: Digital Research
Platform: GEM GUI running under MS-DOS, some Ataris
Supporting Applications: GEM Artline, GEM Draw Plus, GEM Scan, others
Specification on CD: No
Code on CD: No
Images on CD: No
See Also: GEM Raster
Usage: Illustration, drawing, and desktop publishing applications, and some data interchange.

Comments: A vector format with good support at one time, associated with the GEM GUI from Digital Research. If you are thinking of supporting this format, be prepared to draw Bezier curves. It also has support for embedded bitmaps.

Overview

Although often called the GEM Vector Format, GEM VDI is actually a metafile format and is closely associated with the functioning of the GEM user interface. The GEM system provides a metafile driver which is accessed from within the GEM programming system through a documented API. Display requests to the driver result in items being written to a metafile buffer in GEM's standard metafile format. Metafile elements thus consist of calls to the GEM display system.

Supporting GEM VDI is similar to supporting many other metafile formats. Be prepared to duplicate the functionality of the host system, in this case GEM, or at least a reasonable subset of it, before you're through.



File Organization

We would like to have more information on this format. Information provided by DISCUS (see "For Further Information" below) indicates that the file consists of a header, followed by a stream of standard-format metafile items.

File Details

The structure for the GEM VDI header is shown below:

typedef struct _GemVdiHeader
{
    WORD Identifier;       /* Magic number. Always FFFFh */
    WORD LengthOfHeader;   /* Length of the header in 16-bit WORDs */
    WORD Version;          /* Format version number */
    WORD Transform;        /* Image origin */
    WORD Coords[4];        /* Size and position of image */
    WORD PageSize[2];      /* Physical page size */
    WORD Bounds[4];        /* Limits of coordinate system */
    WORD Flags;            /* Bit image opcode flag */
} GEMVDIHEADER;

Identifier is the magic number of GEM VDI image files. This value is always FFFFh.

LengthOfHeader is the size of the header described as the number of 16-bit WORDs it contains. This value is typically 0Fh.

Version is the version number of the file format. This value is calculated using the formula: 100 * major version number + minor version number.

Transform is the NDC/RC transformation mode flag. This value is 00h if the origin of the image is in the lower-left corner of the display and 02h if the origin is in the upper-left corner.

Coords are four WORDs indicating the minimum and maximum coordinate values of data in the file. These values indicate the size of the image and its position on the display and are stored as: minimum X, minimum Y, maximum X, maximum Y.

PageSize is the size of the physical printed page the image will cover in 1/10 of millimeters. This value is 00h if the page size is undefined by the application creating the image.



Bounds are four WORDs which describe the maximum extent of the coordinate system used by the image and defined by the application. These values are stored as: lower-left X, lower-left Y, upper-right X, upper-right Y.

Flags contains the bit image opcode flag. The values for this field are 00h if no bit image is included in the file and 01h if a bit image is included. Bits 2 through 15 in Flags should always be set to 0.

Standard-format metafile items consist of control, integer, and vertex parameters. This structure is described below.

Word    Value       Description
0       control[0]  Opcode
1       control[1]  Vertex count
2       control[3]  Integer parameter count
3       control[5]  Sub-opcode or zero
4       ptsin[0-n]  Input vertex list (if provided)
n+4     intin[0-m]  Input integer parameters (if provided)
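One possible C rendering of such an item is sketched below. The structure name is ours, and the assumption that each vertex in the ptsin list occupies two WORDs (an x,y pair) should be checked against the GEM Programmer's Toolkit:

typedef struct _GemMetaItem
{
    WORD Opcode;           /* control[0] */
    WORD VertexCount;      /* control[1] */
    WORD IntParamCount;    /* control[3] */
    WORD SubOpcode;        /* control[5], or zero */
    /* WORD ptsin[];  input vertex list, presumably 2 WORDs per vertex */
    /* WORD intin[];  IntParamCount input integer parameters */
} GEMMETAITEM;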

The following shows the correspondence of standard metafile items to the display commands accepted by the GEM display subsystem. Arguments appear to be documented only in the GEM Programmer's Toolkit, but you might be able to recover them through diligent application of trial and error.

v_alpha_text        Output Printer Alpha Text
v_arc               Arc GDP
v_bar               Bar GDP
v_bit_image         Output Bit Image File
v_circle            Circle GDP
v_clear_disp_list   Clear Display List
v_clrwk             Clear Workstation
v_ellarc            Elliptical Arc GDP
v_ellipse           Ellipse GDP
v_ellpie            Elliptical Pie Slice GDP
v_entercur          Enter Alpha Mode
v_exitcur           Exit Alpha Mode
v_fillarea          Fill Area
v_form_adv          Form Advance
v_justified         Justified Graphics Text GDP
v_line              Polyline
v_output_window     Output Window
v_pieslice          Pie GDP
v_pmarker           Polymarker
v_qtext             Text
v_rbox              Rounded Rectangle GDP
v_rfbox             Filled, Rounded Rectangle GDP
v_updwk             Update Workstation
vr_recfl            Fill Rectangle
vs_clip             Set Clipping Rectangle
vs_color            Set Color Representation
vsf_color           Set Fill Color Index
vsf_interior        Set Fill Interior Style
vsf_perimeter       Set Fill Perimeter Visibility
vsf_style           Set Fill Style Index
vsf_udpat           Set User-defined Fill Pattern
vsl_color           Set Polyline Color Index
vsl_ends            Set Polyline End Styles
vsl_type            Set Polyline Line Type
vsl_udsty           Set User-defined Line Style Pattern
vsl_width           Set Polyline Line Width
vsm_color           Set Polymarker Color Index
vsm_height          Set Polymarker Height
vsm_type            Set Polymarker Type
vst_alignment       Set Graphic Text Alignment
vst_color           Set Text Color Index
vst_effects         Set Graphic Text Special Effects
vst_font            Set Text Font
vst_height          Set Character Height, Absolute Mode
vst_point           Set Character Height, Points Mode
vst_rotation        Set Character Baseline Vector
vswr_mode           Set Writing Mode

There are also two non-standard metafile items:

v_opnwk             Open Workstation
v_clswk             Close Workstation



There are three metafile escape functions:

v_meta_extents      Update Metafile Extents
v_write_meta        Write Metafile Item
vm_filename         Change GEM VDI Filename

There are several inquire functions:

vq_chcells          Inquire Addressable Character Cells
vq_color            Inquire Color Representation
vq_attributes       Inquire Current Polyline Attributes
vq_extnd            Extended Inquire

There are several metafile sub-opcodes reserved for the Digital Research GEM Output application:

Physical Page Size
Coordinate Window

There are also several metafile sub-opcodes reserved for the Digital Research GEM Draw Plus application:

Group
Set No Line Style
Set Attribute Shadow On
Set Attribute Shadow Off
Start Draw Area Type Primitive
End Draw Area Type Primitive

Also associated with the GEM VDI format is a standard keyboard mapping.

For Further Information

The GEM VDI format originated at Digital Research, which is now owned by Novell, and GEM VDI is currently being supported by DISCUS Distribution Services, a service organization. Note that DISCUS will provide support only if you have first purchased the GEM Programmer's Toolkit from Digital Research. Contact DISCUS at:

DISCUS Distribution Services, Inc.

8020 San Miguel Canyon Road
Salinas, CA 93907-1208
Voice: 408-663-6966



You may still be able to get some information from Novell/Digital Research at:

Novell/Digital Research, Inc.
P.O. Box DRI

Monterey, CA 93942
Voice: 408-649-3896
Voice: 800-848-1498
BBS: 408-649-3896



GIF

Overview

GIF (Graphics Interchange Format) is a creation of CompuServe Inc., and is used to store multiple bitmap images in a single file for exchange between platforms and systems. In terms of number of files in existence, GIF is perhaps the most widely used format for storing multibit graphics and image data. Even a quick peek into the graphics file section of most BBSs and file archives seems to prove this true. Many of these are high-quality images of people, landscapes, cars, astrophotographs, and anthropometric gynoidal data (you guess what that is). Shareware libraries and BBSs are filled with megabytes of GIF images.


The vast majority of GIF files contain 16-color or 256-color near-photographic quality images. Gray-scale images, such as those produced by scanners, are also commonly stored using GIF, although monochrome graphics, such as clip art and document images, rarely are.

Although the bulk of GIF files are found in the Intel-based MS-DOS environment, GIF is not associated with any particular software application. GIF also was not created for any particular software application need, although most software applications that read and write graphical image data, such as paint programs, scanner and video software, and most image file display and conversion programs, usually support GIF. GIF was instead intended to allow the easy interchange and viewing of image data stored on local or remote computer systems.

Although CompuServe allows unrestricted use of the format, provided that you give the appropriate credit to CompuServe as the originator of the format, the compression used by GIF, Lempel-Ziv-Welch (LZW), has been the subject of much legal battle. Therefore, while reading and possessing GIF files (or any other graphics files containing LZW-compressed image data) presents no legal problems, writing software that creates GIF or other LZW files may.

File Organization

GIF is different from many other common bitmap formats in the sense that it is stream-based. It consists of a series of data packets, called blocks, along with additional protocol information. Because of this arrangement, GIF files must be read as if they are a continuous stream of data. The various blocks and sub-blocks of data defined by GIF may be found almost anywhere within the file. This uncertainty makes it difficult to encapsulate every possible arrangement of GIF data in the form of C structures.

There are a number of different data block categories, and each of the various defined blocks falls into one of these categories. In GIF terminology, a Graphics Control Extension block is a type of Graphics Control block, for instance. In like manner, Plain Text Extension blocks and the Local Image Descriptor are types of Graphic Rendering blocks. The bitmap data is an Image Data block. Comment Extension and Application Extension blocks are types of Special Purpose blocks.

Blocks, in addition to storing fields of information, can also contain sub-blocks. Each data sub-block begins with a single count byte, which can be in the range of 1 to 255, and which indicates the number of data bytes that follow the count byte. Multiple sub-blocks may occur in a contiguous grouping (count byte, data bytes, count byte, data bytes, and so on). A sequence of one or more data sub-blocks is terminated by a count byte with a value of zero.

The GIF format is capable of storing bitmap data with pixel depths of 1 to 8 bits. Images are always stored using the RGB color model and using palette data. GIF is also capable of storing multiple images per file, but this capability is rarely utilized, and the vast majority of GIF files contain only a single image. Most GIF file viewers do not, in fact, support the display of multiple-image GIF files, or may display only the first image stored in the file. For these reasons, we recommend not creating applications that rely on multiple images per file, even though the specification allows this.

The image data stored in a GIF file is always LZW compressed. (See Chapter 9, Data Compression, for a discussion of LZW and other compression methods.) This algorithm reduces strings of identical byte values into a single code word and is capable of reducing the size of typical 8-bit pixel data by 40 percent or more. The ability to store uncompressed data, or data encoded using a different compression algorithm, is not supported in the current version of the GIF format.
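Reading a sequence of sub-blocks is mechanical. The following sketch reads one contiguous grouping and hands the bytes off for decoding; ProcessBytes() is a placeholder of our own for the LZW decoder or any other consumer:

#include <stdio.h>

extern void ProcessBytes(unsigned char *data, int length);

void ReadSubBlocks(FILE *fp)
{
    unsigned char buffer[255];                  /* maximum sub-block size */
    int count;

    while ((count = fgetc(fp)) > 0)             /* a count of 0 terminates */
    {
        if (fread(buffer, 1, count, fp) != (size_t) count)
            break;                              /* truncated file */
        ProcessBytes(buffer, count);            /* e.g., feed the LZW decoder */
    }
}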

There are two revisions of the GIF specification, both of which have been widely distributed. The original revision was GIF 87a, and many images were created in this format. The current revision, GIF 89a, adds several capabilities, including the ability to store text and graphics data in the same file. If you are supporting GIF, you should include support for both the 87a and 89a revisions. It is a mistake to support only the 89a version, because many applications continue to produce only 87a version files for backward compatibility.

File Details

The "GIF87a" section below discusses features common to both versions; the "GIF89a" section describes only the features added in GIF89a.

GIF87a

Version 87a is the original GIF format introduced in May 1987 and is read by all major software applications supporting the GIF format. Figure GIF-1 illustrates the basic layout of a GIF87a file. Each file always begins with a Header and a Logical Screen Descriptor. A Global Color Table may optionally appear after the Logical Screen Descriptor. Each of these three sections is always found at the same offset from the start of the file.


Each image stored in the file contains a Local Image Descriptor, an optional Local Color Table, and a block of image data. The last field in every GIF file is a Terminator character, which indicates the end of the GIF data stream.

Figure GIF-1: GIF87a file layout

Header

The Header is six bytes in size and is used only to identify the file as type GIF. The Logical Screen Descriptor, which may be separate from the actual file header, may be thought of as a second header. We may therefore store the Logical Screen Descriptor information in the same structure as the Header:

typedef struct _GifHeader
{
    // Header
    BYTE Signature[3];     /* Header Signature (always "GIF") */
    BYTE Version[3];       /* GIF format version ("87a" or "89a") */
    // Logical Screen Descriptor
    WORD ScreenWidth;      /* Width of Display Screen in Pixels */
    WORD ScreenHeight;     /* Height of Display Screen in Pixels */
    BYTE Packed;           /* Screen and Color Map Information */
    BYTE BackgroundColor;  /* Background Color Index */
    BYTE AspectRatio;      /* Pixel Aspect Ratio */
} GIFHEAD;

Signature is three bytes in length and contains the characters GIF as an identifier. All GIF files start with these three bytes, and any file that does not should not be read by an application as a GIF image file.

Version is also three bytes in length and contains the version of the GIF file. There are currently only two versions of GIF: 87a (the original GIF format) and 89a (the new GIF format). Some GIF 87a file viewers may be able to read GIF 89a files, although the stored image data may not display correctly.
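A reader can use these two fields to validate a file before going further. This sketch, with an invented function name, checks the GIFHEAD structure shown above:

#include <string.h>

int IsValidGifHeader(const GIFHEAD *hdr)
{
    if (memcmp(hdr->Signature, "GIF", 3) != 0)
        return 0;                               /* not a GIF file */
    return memcmp(hdr->Version, "87a", 3) == 0 ||
           memcmp(hdr->Version, "89a", 3) == 0; /* known versions only */
}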

Size of the Global Color Table

Bit 3

Color Table Sort Flag

Bits 4-6

Color Resolution

Bit 7

Global Color Table Flag

The Size of the Global Color Table subfield contains the number of bits in each Global Color Table entry minus one. For example, if an image contains 8 bits per pixel, the value of this field is 7. The total number of elements in the Global Color Table is calculated by shifting the value one to the left by the value in this field:

NumberOfGlobalColorTableEntries = (1L << (SizeOfTheGlobalColorTable + 1));

The Size of the Global Color Table subfield is always set to the proper size even if there is no Global Color Table (i.e., the Global Color Table Flag subfield is set to 0).

If the Color Table Sort Flag subfield is 1, then the Global Color Table entries are sorted from the most important (most frequently occurring color in the image) to the least important. Sorting the colors in the color table aids an application in choosing the colors to use with display hardware that has fewer available colors than the image data. The Sort Flag is only valid under version 89a of GIF. Under version 87a, this field is reserved and is always set to 0.

The Color Resolution subfield is set to the number of bits in an entry of the original color palette minus one. This value equates to the maximum size of the original color palette. For example, if an image originally contained eight bits per primary color, the value of this field would be 7.

The Global Color Table Flag subfield is set to 1 if a Global Color Table is present in the GIF file, and 0 if one is not. Global Color Table data, if present, always follows the Logical Screen Descriptor header in the GIF file.

BackgroundColor in the Logical Screen Descriptor contains an index value into the Global Color Table of the color to use for the border and background of the image. The background is considered to be the area of the screen not covered by the GIF image. If there is no Global Color Table (i.e., the Global Color Table Flag subfield is set to 0), this field is unused and should be ignored.

AspectRatio contains the aspect ratio value of the pixels in the image. The aspect ratio is the width of the pixel divided by the height of the pixel. This value is in the range of 1 to 255 and is used in the following calculation:

PixelAspectRatio = (AspectRatio + 15) / 64;

If this field is 0, then no aspect ratio is specified.
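For reference, the four subfields of the Packed field described above can be extracted with a few shifts and masks (the variable names here are our own):

BYTE Packed = Header.Packed;
int SizeOfGlobalColorTable = Packed        & 0x07;   /* bits 0-2 */
int ColorTableSortFlag     = (Packed >> 3) & 0x01;   /* bit 3 */
int ColorResolution        = (Packed >> 4) & 0x07;   /* bits 4-6 */
int GlobalColorTableFlag   = (Packed >> 7) & 0x01;   /* bit 7 */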

Global Color Table

The Logical Screen Descriptor may be followed by an optional Global Color Table. This color table, if present, is the color map used to index the pixel color data contained within the image data. If a Global Color Table is not present, each image stored in the GIF file contains a Local Color Table that it uses in place of a Global Color Table. If every image in the GIF file uses its own Local Color Table, then a Global Color Table may not be present in the GIF file. If neither a Global nor a Local Color Table is present, make sure your application supplies a default color table to use. It is suggested that the first entry of a default color table be the color black and the second entry be the color white.

Global Color Table data always follows the Logical Screen Descriptor information and varies in size depending upon the number of entries in the table. The Global Color Table is a series of three-byte triples making up the elements of the color table. Each triple contains the red, green, and blue primary color values of each color table element:

typedef struct _GifColorTable
{
    BYTE Red;      /* Red Color Element */
    BYTE Green;    /* Green Color Element */
    BYTE Blue;     /* Blue Color Element */
} GIFCOLORTABLE;

The number of entries in the Global Color Table is always a power of two (2, 4, 8, 16, and so on), up to a maximum of 256 entries. The size of the Global Color Table in bytes is calculated by using bits 0, 1, and 2 in the Packed field of the Logical Screen Descriptor in the following way:

ColorTableSize = 3L * (1L << (SizeOfGlobalColorTable + 1));
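Put together with the subfield values unpacked earlier, a reader might size and load the table as in this sketch; it assumes the GIFCOLORTABLE structure packs to exactly three bytes, which is worth verifying with your compiler:

int Entries = 1 << (SizeOfGlobalColorTable + 1);   /* never more than 256 */
GIFCOLORTABLE Table[256];

if (GlobalColorTableFlag &&
    fread(Table, 3, Entries, fp) != (size_t) Entries)
{
    /* handle a truncated file */
}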

The Header, Logical Screen Descriptor, and Global Color Map data are followed by one or more sections of image data. Each image in a GIF file is stored separately, with an Image Descriptor and possibly a Local Color Table. The Image Descriptor is similar to a header and contains information only about the image data that immediately follows it. The Local Color Table contains color information specific only to that image data and may or may not be present.

Local Image Descriptor

The Local Image Descriptor appears before each section of image data and has the following structure:

typedef struct _GifImageDescriptor
{
    BYTE Separator;   /* Image Descriptor identifier */
    WORD Left;        /* X position of image on the display */
    WORD Top;         /* Y position of image on the display */
    WORD Width;       /* Width of the image in pixels */
    WORD Height;      /* Height of the image in pixels */
    BYTE Packed;      /* Image and Color Table Data Information */
} GIFIMGDESC;

Separator contains the value 2Ch and denotes the beginning of the Image Descriptor data block.



Left and Top are the coordinates in pixels of the upper-left corner of the image on the logical screen. The upper-left corner of the screen is considered to be coordinates 0,0.

Width and Height are the size of the image in pixels.

Packed contains the following five subfields of data (bit 0 is the LSB):

Bit 0       Local Color Table Flag
Bit 1       Interlace Flag
Bit 2       Sort Flag
Bits 3-4    Reserved
Bits 5-7    Size of Local Color Table Entry

The Local Color Table Flag subfield is 1 if there is a Local Color Table associated with this image. If the value of this subfield is 0, then there is no Local Color Table present, and the Global Color Table data should be used instead.

The Interlace Flag subfield is 1 if the image is interlaced and 0 if it is non-interlaced (see the description of Image Data for an explanation of interlaced image data).

The Sort Flag subfield indicates whether the entries in the color table have been sorted by their order of importance. Importance is usually decided by the frequency of occurrence of the color in the image data. A value of 1 indicates a sorted color table, while a value of 0 indicates a table with unsorted color values. The Sort Flag subfield value is valid only under version 89a of GIF. Under version 87a, this field is reserved and is always set to 0.

The Size of Local Color Table Entry subfield is the number of bits per entry in the Local Color Table. If the Local Color Table Flag subfield is set to 0, then this subfield is also set to 0.

Local Color Table

If a Local Color Table is present, it immediately follows the Local Image Descriptor and precedes the image data with which it is associated. The format of all Local Color Tables is identical to that of the Global Color Table. Each element is a series of 3-byte triples containing the red, green, and blue primary color values of each element in the Local Color Table:



typedef struct _GifColorTable
{
    BYTE Red;      /* Red Color Element */
    BYTE Green;    /* Green Color Element */
    BYTE Blue;     /* Blue Color Element */
} GIFCOLORTABLE;

The number of entries and the size in bytes of the Local Color Table are calculated in the same way as for the Global Color Table:

ColorTableSize = 3L * (1L << (SizeOfLocalColorTable + 1));
ColorTableNumberOfEntries = 1L << (SizeOfLocalColorTable + 1);

A Local Color Table only affects the image it is associated with and, if it is present, its data supersedes that of the Global Color Table. Each image may have no more than one Local Color Table.

Image data

GIF files do not compress well when stored using file archivers such as pkzip and zoo. This is because the image data found in every GIF file is always compressed using the LZW (Lempel-Ziv-Welch) encoding scheme (described in Chapter 9), the same compression algorithm used by most file archivers. Compressing a GIF file is therefore a redundant operation, which rarely results in smaller files and is usually not worth the time and effort involved in the attempt.

Normally, when LZW-encoded image data is stored in a graphics file format, it is arranged as a continuous stream of data that is read from beginning to end. The GIF format, however, stores encoded image data as a series of data sub-blocks.

Each data sub-block begins with a count byte. The value of the count byte may range from 1 to 255 and indicates the number of data bytes in the sub-block. The data bytes immediately follow the count byte. A contiguous group of data sub-blocks is terminated by a byte with a zero value. This may be viewed as either a terminator value or as a sub-block with a count byte value of zero; in either case, it indicates that no data bytes follow.

Because GIF files do not contain a contiguous stream of LZW-encoded data, each sub-block must be read and the data sent to an LZW decoder. Most sub-blocks storing image data will be 255 bytes in length, so this is an excellent maximum size to use for the buffer that will hold the encoded image data. Also, the LZW encoding process does not keep track of where each scan line begins and ends. It is therefore likely that one scan line will end and another begin in the middle of a sub-block of image data.

The format of the decoded GIF image data is fairly straightforward. Each pixel in a decoded scan line is always one byte in size and contains an index value into either a Global or Local Color Table. Although the structure of the GIF format is quite capable of storing color information directly in the image data (thus bypassing the need for a color table), the GIF specification does not specify this as a possible option. Therefore, even 1-bit image data must use 8-bit index values and a 2-entry color table.

GIF image data is always stored by scan line and by pixel. GIF does not have the capability to store image data as planes, so when GIF files are displayed using plane-oriented display adapters, quite a bit of buffering, shifting, and masking of image data must first occur before the GIF image can be displayed.

The scan lines making up the GIF bitmap image data are normally stored in consecutive order, starting with the first row and ending with the last. The GIF format also supports an alternate way to store rows of bitmap data in an interlaced order. Interlaced images are stored as alternating rows of bitmap data. If you have ever viewed a GIF file that appeared on the screen as a series of four "wipes" that jumped across the screen as the image was displayed, you were viewing an interlaced GIF file.

Figure GIF-2 compares the order of rows stored in an interlaced and non-interlaced format. In the non-interlaced format, the rows of bitmap data are stored starting with the first row and continuing sequentially to the last row. This is the typical storage format for most bitmap file formats. The interlaced format, however, stores the rows out of the normal sequence. All the even rows are stored first and all the odd rows are stored last. We can also see that each successive pass usually encodes more rows than the previous pass.

GIF uses a 4-pass interlacing scheme. The first pass starts on row 0 and reads every eighth row of bitmap data. The second pass starts on the fourth row and reads every eighth row of data. The third pass starts on the second row and reads every fourth row. The final pass begins on the first row and reads every second row. Using this scheme, all of the rows of bitmap data are read and stored.

Why interlace a GIF image? Interlacing might seem to make the reading, writing, and displaying of the image data more difficult, and of course it does. Does this arrangement somehow make the image easier to display on interlaced monitors? The answer lies in one of the original purposes of GIF.



Figure GIF-2: Arrangement of interlaced and non-interlaced scan lines

GIF was designed as an image communications protocol used for the interactive viewing of online images. A user connected to an information service via a modem could not only download a GIF image, but could also see it appear on his or her display screen as it was being downloaded. If a GIF image were stored in a non-interlaced format, the GIF image would display in a progressive fashion starting at the top of the screen and ending at the bottom. After 50 percent of the download was completed, only the top half of the GIF image would be visible. An interlaced image, however, would display starting with every eighth row, then every fourth row, then every second row, and so on. When the download of an interlaced GIF image was only 50 percent complete, the entire contents of the image could be discerned even though only half the image had been displayed. The viewer's eye and brain would simply fill in the missing half.

Interlacing presents a problem when converting a GIF image from one format to another. A scan-line table must be created to write out the scan lines in their proper, non-interlaced order. The following sample code is used to produce a scan-line table of an interlaced image:



WORD i, j;
WORD RowTable1[16];
WORD RowTable2[16];
WORD ImageHeight = 16;                      /* 16 lines in the GIF image */

for (i = 0; i < ImageHeight; i++)           /* Initialize source array */
    RowTable1[i] = i;

j = 0;
for (i = 0; i < ImageHeight; i += 8, j++)   /* Interlace Pass 1 */
    RowTable2[i] = RowTable1[j];
for (i = 4; i < ImageHeight; i += 8, j++)   /* Interlace Pass 2 */
    RowTable2[i] = RowTable1[j];
for (i = 2; i < ImageHeight; i += 4, j++)   /* Interlace Pass 3 */
    RowTable2[i] = RowTable1[j];
for (i = 1; i < ImageHeight; i += 2, j++)   /* Interlace Pass 4 */
    RowTable2[i] = RowTable1[j];

The array RowTable1[] contains the mapping of the scan lines in a non-interlaced image, which in this example are the values 0 to 15 in consecutive order. The array RowTable2[] is then initialized by the interlacing code to contain the mapping of the scan lines of the interlaced image:

RowTable1[]    RowTable2[]
0              0
1              8
2              4
3              9
4              2
5              10
6              5
7              11
8              1
9              12
10             6
11             13
12             3
13             14
14             7
15             15

We can restore the non-interlaced image by stepping through the values stored in RowTable2[]. The 0th row of the non-interlaced image is the 0th row of the interlaced image. The first row of the non-interlaced image is the eighth row of the interlaced image. The second row of the non-interlaced image is the fourth row of the interlaced image, and so on.


Trailer

The Trailer is a single byte of data that occurs as the last character in the file. This byte value is always 3Bh and indicates the end of the GIF data stream. A trailer must appear in every GIF file.

GIF89a

Version 89a is the most recent revision of the GIF image file format and was introduced in July of 1989. Although the GIF 89a format is very similar to GIF 87a, it contains several additional blocks of information not defined in the 87a specification. For this reason, GIF 89a image files may not be read and displayed properly by applications that read only GIF 87a image files. Many of these programs do not attempt to display an 89a image file, because the version number "89a" will not be recognized. Although changing the version number from "89a" to "87a" will solve this problem, the GIF image data may still not display properly, for reasons we shall soon see.

Figure GIF-3 illustrates the basic layout of a GIF 89a image file. Just as with version 87a, the 89a version also begins with a Header, a Logical Screen Descriptor, and an optional Global Color Table. Each image also contains a Local Image Descriptor, an optional Local Color Table, and a block of image data. The trailer in every GIF 89a file contains the same values found in 87a files.

Version 89a added a new feature to the GIF format called Control Extensions.

These extensions to the GIF 87a format are specialized blocks of information used to control the rendering of the graphical data stored within a GIF image file. The design of GIF 87a only allowed the display of images one at a time in a "slide show" fashion. Through the interpretation and use of Control Extension data, GIF 89a allows both textual and bitmap-based graphical data to be displayed, overlaid, and deleted as in an animated multimedia presentation. The four Control Extensions introduced by GIF 89a are the Graphics Control Extension, the Plain Text Extension, the Comment Extension, and the Application Extension, summarized here and described in greater detail in the sections below.

Graphics Control Extension blocks control how the bitmap or plain text data found in a Graphics Rendering block is displayed. Such control information includes whether the graphic is to be overlaid in a transparent or opaque fashion over another graphic, whether the graphic is to be restored or deleted, and whether user input is expected before continuing with the display of the GIF file data.



Plain Text Extension blocks allow the mixing of plain-text ASCII graphics with bitmapped image data. Many GIF images contain human-readable text that is actually part of the bitmap data itself. Using the Plain Text Extension, captions that are not actually part of the bitmapped image may be overlaid onto the image. This can be invaluable when it is necessary to display textual data over an image, but it is inconvenient to alter the bitmap to include this information. It is even possible to construct an 89a file that contains only plain-text data and no bitmap image data at all.

Application Extension blocks allow the storage of data that is understood only by the software application reading the GIF file. This data could be additional information used to help display the image data or to coordinate the way the image data is displayed with other GIF image files.

Comment Extension blocks contain human-readable ASCII text embedded in the GIF data stream that is used in a manner similar to program comments in C language code.

With only a few restrictions, any number of Control Extension blocks may appear almost anywhere in a GIF data stream following the Global Color Table. All Extension blocks begin with the Extension Introducer value 21h, which identifies the block of data as an Extension block. This value is followed by a Block Label, which identifies the type of extension information contained within the block. Block Label identification values range from 00h to FFh. The Plain Text, Application, and Comment Extension blocks may also contain one or more sub-blocks of data.

Interestingly enough, all of the Control Extension features added by 89a are optional and are not required to appear in a GIF data stream. The only other difference between 87a and 89a is that at least one of the Image Descriptor and Logical Screen Descriptor fields, which are reserved under 87a, is used under 89a. In fact, any GIF files which are written under Version 89a, but which do not use any of the 89a features, should use the version number GIF87a.

Graphics Control Extension block

The information found in a Graphics Control block is used to modify the data in the Graphical Rendering block that immediately follows it. A Graphics Control block may modify either bitmap or plain text data. It must also occur in the GIF stream before the data it modifies, and only one Graphics Control block may appear per Graphics Rendering block.



Figure GIF-3: GIF89a file layout