Film on Video: A Practical Guide to Making Video Look like Film 9781138603790, 9781138603806, 9780429468872, 1138603791


Table of contents:
Half Title
Title Page
Copyright Page
Dedication
Table of Contents
1 Introduction
Film on Video
Format of the Book
Video is Everywhere
This Book is For You
The Look of Film
The Gold Standard
A Brief History of the Filmmaking Process
Enter Video
Cinema-Quality Digital Cameras
The Digital Revolution
Digital Revolution in Big-Budget Movies
The Cinematographer
Activities
Notes
2 Frame Rate
What is Frame Rate?
Minimum Perceptible Frame Rate
Maximum Perceptible Frame Rate
Early Frame Rates Varied
Cinema Standard Frame Rate – 24 Fps
Interlaced and Progressive Frames
Video Standard Frame Rates
High Frame Rate in Cinema
HFR for Slow Motion
Time-Lapse for Fast Motion
Non-Standard Frame Rates
24 or 23.976?
Frame Rate in Animation
Artificial HFR Televisions
Summary
Activities
Frame Rate Rules Recap
Notes
3 Exposure
Exposure
An F/Stop
Fast and Slow
There is no Single ‘Exposure’ Control
Exposure Triangle
Which Part of the Exposure Triangle Should I Use to Control Exposure?
Manual Mode
Activities
Exposure Rule Recap
Notes
4 Grain
Gain and Grain
Grain
Film Stocks
Film Grain
Digital Noise
Native ISO
Low-Light Performance
Is Grain/Noise Good or Bad?
Adding Organic Film Grain in Post
Intentional and Unintentional Grain
Noise Reduction
Activities
Grain Rule Recap
Notes
5 Motion Blur
Motion Blur
Motion Blur in Animation
Shutter Speed
180° Shutter Angle
45° Shutter Angle
Activities
Motion Blur Rules Recap
6 Depth of Field
Depth of Field
Sensor Size
Aperture
Your Hands are Tied
ND Filter
Deep Focus
Activities
Depth of Field Rule Recap
Notes
7 Colour
Colour in Two Parts
Colour Part One
Colour Part Two
YUV Colour Space
Activities
Colour Rule Recap
Note
8 Camera Tools
Camera Tools
Cinematographer Craft
External Light Meter
Internal Light Meter
IRE
Histogram
Waveform
RGB Parade
Zebra
False Colour
Focus Peaking
Vectorscope
Activities
Camera Tools Rules Recap
Notes
9 Dynamic Range
Dynamic Range
Intentional Under- Or Overexposure
Contrast Ratio
Luminosity
Linear and Logarithmic
Dynamic Range of Human Vision
Dynamic Range of Celluloid Film
Dynamic Range of Video
Dynamic Range of Raw Video
HDR – High Dynamic Range
In Practice
Overexposure
Activities
Dynamic Range Rule Recap
Note on the Next Three Chapters
10 Image Plane
The Film Gate
Image Plane
Celluloid Film Plane
8mm
Super 8
16mm
Super 16
35mm – Silent Movies
35mm – Academy
35mm – Widescreen
Anamorphic 35mm
Super 35
Techniscope
VistaVision
65mm
IMAX 70mm
Digital Imaging Sensor
Digital Sensor Sizes
DSLR Sensor Size vs DSLR Video Imaging
Global and Rolling Shutter
Activities
Image Plane Rule Recap
11 Aspect Ratio
Aspect Ratios
Early Film Aspect Ratios
Widescreen Aspect Ratios
TV Aspect Ratios
Pillarbox and Letterbox
What Aspect Ratio to Shoot?
Activities
Aspect Ratio Rule Recap
Notes
12 Resolution
Resolution
Pixels
Pixel Aspect Ratio
Standard Definition – SD
High Definition – HD
Ultra High Definition – UHD
Cinema 2K, 4K, 5K, 6K
Over Scan
Which Resolution?
Resolution of Celluloid
Archiving on Celluloid
Activities
Resolution Rules Recap
Notes
13 Lenses
Lenses
Light
Field of View
Optical Compression
Focus
Three Main Categories of Lenses
Lens Mount
Prime Lenses
Zoom Lenses
Fixed Lenses
Contrazoom
Famous Focal Lengths
Specialty Lenses
Crop Factor
Activities
Lenses Rule Recap
Note
14 Codecs
The Versatility of the Celluloid Negative
Cinema Raw
Codec
Containers
Recording Formats
Digital Cinema Packages
Compression Methods
Codecs for Different Purposes
Example Workflow Pipeline
Activities
Codecs Rules Recap
Notes
15 Cinematic Post-Production
Cinematic Post-Production
Native Image Quality
Proxies
Colour Correction
LOG Footage
LUT – Colour Correction
Colour Grading
Secondary Colour Grading
LUT – Colour Grading
Film-Stock Emulation LUTs
Add Film Elements
Third-Party Film-Stock Emulators
Activities
Cinematic Post-Production Rule Recap
Notes
16 Camera Case Studies
Structure of this Chapter
Camcorders
DSLRs
Smartphone Cameras
Action Cameras
Digital Cinema Cameras
Notes
The Rules
Glossary
Index


FILM ON VIDEO

Film on Video: A Practical Guide to Making Video Look like Film is an accessible guide to making video captured on a camcorder, DSLR camera, smartphone, action camera or cinema camera look like it was shot on motion-picture celluloid film. Chapter by chapter, Jonathan Kemp introduces the reader to a key characteristic of celluloid film, explains the historical and practical reasons why it exists, before providing a simplified method for best replicating that characteristic on a digital camera. The book includes various practical exercises throughout that are designed to underline the takeaway principles of each chapter and features case studies on specific cameras including the Sony NX5 Camcorder, Canon 5D Mk IV, Canon 4000D, iPhone X, GoPro Hero 6, Blackmagic URSA Mini Pro 4.6K and Canon C200. Ideal for students studying film and media production and filmmaking newcomers who want to get up to speed quickly, this is an indispensable guide to how the numerous settings on a digital camera can be used to create footage that more closely resembles the film ‘look’. Jonathan Kemp is a Lecturer in Media Production at the University of Central Lancashire and an award-winning short film maker. His first short film Michael won an award at the ‘Unchosen’ charity film competition and screened at the Raindance Film Festival in 2015. To see his work, visit www.jonathankemp.co.uk.

FILM ON VIDEO A Practical Guide to Making Video Look like Film

Jonathan Kemp

First published 2019 by Routledge 2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN and by Routledge 52 Vanderbilt Avenue, New York, NY 10017 Routledge is an imprint of the Taylor & Francis Group, an informa business © 2019 Jonathan Kemp The right of Jonathan Kemp to be identified as author of this work has been asserted by him in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988. All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers. Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library Library of Congress Cataloging-in-Publication Data Names: Kemp, Jonathan, 1981- author. Title: Film on video : a practical guide to making video look like film / Jonathan Kemp. Description: London ; New York : Routledge, 2019. Identifiers: LCCN 2018058659| ISBN 9781138603790 (hbk. : alk. paper) | ISBN 9781138603806 (pbk. : alk. paper) | ISBN 9780429468872 (ebk.) Subjects: LCSH: Digital cinematography—Handbooks, manuals, etc. | Video recording—Handbooks, manuals, etc. | Digital video recorders—Handbooks, manuals, etc. Classification: LCC TR860 .K445 2019 | DDC 777—dc23 LC record available at https://lccn.loc.gov/2018058659 ISBN: 978-1-138-60379-0 (hbk) ISBN: 978-1-138-60380-6 (pbk) ISBN: 978-0-429-46887-2 (ebk) Typeset in Bembo by Swales & Willis Ltd, Exeter, Devon, UK

For Charlotte, Ivy, Fox and River.


1 INTRODUCTION

Film on video

Everyone . . . has in their pockets a movie studio and a distribution studio.1
(J.J. Abrams)

The prevalence of digital video cameras, particularly smartphones with an inbuilt camera and an internet connection, makes it possible for anyone to make a video and share it with the world. However, simply launching a smartphone camera app and commencing filming does not guarantee the production of something that looks like a ‘proper’ film. J.J. Abrams is correct that everyone has in their pockets a movie studio and a distribution studio, in the form of a modern smartphone, but there is more to consider when seeking to create cinematic and professional-looking moving images. There is a craftsmanship that comes from knowledge and experience that no piece of equipment, however sophisticated, can replace, particularly when the device in question is silently making decisions about exactly how a shot is captured and you have neither the understanding nor the ability to change those decisions. A professional digital cinema camera can easily create poor-quality, un-cinematic images when used by someone who does not know how to operate it; consider, then, how much more easily a camera that is primarily designed to be a phone can do the same. It is nonetheless possible to use a relatively inexpensive video camera to create cinematic-looking video that resembles a traditional movie shot on film, but only if you understand how to do so. No digital camera will automatically make video look filmic; only you as the filmmaker can make the camera produce cinematic images. This book will explore all the issues that govern the look of a digital video image and give you a strategy for getting the most cinematic images possible from the camera you have access to.

Format of the book

This book will explore the factors that contribute to the film look. Each chapter will explore one issue in theory and technicality, and will then explain how to configure your camera (where possible) to best replicate that particular attribute. Each chapter will end by offering practical exercises designed to help cement knowledge and understanding, as well as providing rules to follow. I hope the rules will help you to juggle the various factors into one manageable list that you can follow when you are next behind the camera. Some rules are inflexible; others can be used as more of a guide or a starting point. I have ordered the chapters in this book in a logical way, but due to the interrelated nature of many of the elements, sometimes you will also need to read more than one chapter to fully grasp every element in play. There are a lot of different things to grasp when grappling with this subject. If you are new to this (or even if you are not), you are undertaking a battle to fully understand everything in this book and make it relevant to you. That battle is a good thing; that is learning. Like most things, this is all very learnable for those who are interested. I look at cinematography as a list of very simple ideas, principles and techniques all strung together. It is the number of simple things involved that can make it seem complicated. If it ever gets overwhelming, approach it by looking at each of the smaller sections outlined here. Go through all the chapters in this book (in whatever order interests you the most), wrestle with all the information and complete the exercises. By doing so, you will begin to break down this complicated and expressive art form into a digestible number of bites. This book also provides a case study chapter: a look at a variety of ‘affordable’ cameras (to an academic institution or independent filmmaker) and the practical implications of trying to replicate the film look on each particular camera. It is not possible to fully replicate every aspect on every camera, but the more factors that can be applied, the closer the end result will be to a film look.

Video is everywhere

Video content is everywhere. Digital video cameras are getting smaller, cheaper and more capable by the year. Even a run-of-the-mill smartphone can now create images that have as many picture elements (pixels) as high-end cinema projectors. Cameras are built into every laptop, tablet and smartphone and into many television sets and other appliances. They are operational inside and outside many buildings in CCTV and surveillance systems. We are inundated with hundreds of channels of TV and video content and presented with more potential viewing hours online, in places such as YouTube, than we could ever watch in a lifetime. A lot of this video content does not look particularly filmic, though. That is to say, you would never mistake much of it for a ‘proper’ movie, based on the way it looks. There has been a growing trend amongst episodic drama, the big box sets such as Breaking Bad or Peaky Blinders, to reproduce a cinematic aesthetic; to consciously create content that looks like a movie, rather than how a television documentary or news programme might look. When this book refers to a movie image, a filmic look or a cinematic aesthetic, it should be understood that we are also including this new, more cinematic kind of TV programme under the same umbrella. So what is it about the typical movie image that sets it apart from other kinds of video content? What are the aesthetic conventions of a movie as opposed to a television game show? Can the video cameras available to us, independent and student filmmakers, be used to create cinematic images? What are the settings and techniques needed to create filmic images on a typical video camera? There are a number of factors that contribute to this filmic look. Some of the factors are settings within the camera and some are physical characteristics of the camera itself. Because of this, some factors can be controlled directly while others cannot. For example, the physical size of a camera’s imaging sensor (where the light from the lens is converted into a digital image) has an effect on the aesthetic quality of the image produced, and there is no way to change that without changing the camera. The size of a smartphone sensor is small by necessity; it is attached to a phone, and the trend is for phones to get smaller and thinner. Some of the other internal settings within the smartphone camera can be changed, however. Every kind of digital video camera has advantages and disadvantages when it comes to recreating a film look. Part of the purpose of this book is to equip you with the tools for making the best decisions when selecting from the options available to you in your unique situation. This book will methodically guide you through each of the various factors that contribute to this film look, exploring the underlying principle and giving practical advice for how to implement each aspect in turn. The more of these factors you can control, the closer to a film look you will be able to achieve with a video camera.

This book is for you

This book assumes a number of things about you, the reader:
•    First, that you have limited means. You are either a student at an educational facility, possibly with a selection of a few different video cameras to borrow, or an independent filmmaker, using whatever you have to hand. Maybe you have a modern DSLR – Digital Single Lens Reflex – camera (stills photography camera), a consumer camcorder or a modern smartphone that will also capture video images.
•    Second, the book assumes that you want to create video images that closely resemble the look of the big-budget movies that you love; to create video images that appear to have been created on 35mm celluloid film.
The principles outlined in this book can set you up for a career in cinematography, on the best cinema cameras available, but with an emphasis on using the more affordable tools that might be within your means to borrow or cheaply own for yourself now.

The look of film

The look of film is a delicate thing to describe. There are a number of factors that contribute to the overall aesthetic effect. Regardless of the tone and style of the film, from a big-budget blockbuster to a dialogue-heavy independent film, most ‘proper’ films share common visual qualities. Even a casual viewer can watch a regular movie and notice the difference in aesthetics when compared to a television game show or the news. There is a certain quality to the movie that the television programme just does not have.

I do not mean to imply that there is only one film look. In reality, there are a number of different film looks. You might even argue that each individual movie has its own particular look, and I would not challenge that. However, there are a number of common visual characteristics that can be identified:
•    Maybe it is the way objects move within the frame? By comparison, television programmes somehow appear too smooth?
•    Maybe it is the way the shallow depth of field can direct the viewer’s eye to specific parts of the image?
•    Maybe it is the vibrancy of the colours?
•    Maybe it is the way the organic film grain dances on the image, bringing life to every area of the frame?
If you are unsure about any of these, don’t worry. This book will fully explore these and many other things. By the time you have worked your way through this book, you will not only understand the reasons why a typical movie looks different to a television programme, but you’ll also be able to re-create those filmic qualities yourself.

The gold standard

Real celluloid film is considered by many to be the gold standard of image creation. Filmmakers such as Quentin Tarantino and Christopher Nolan are famously very vocal in their support for celluloid filmmaking as opposed to the digital alternative. There is a lively debate in the film industry about the relevance of celluloid film in an industry largely dominated by digital filmmaking. Digital camera technology has progressed to the point that we now have an acceptable digital alternative to shooting on celluloid film. Many cinematographers who were sceptical have fully converted to shooting on digital cameras.2 The reason for the acceptance of digital cinema technology is that the results are becoming comparable to celluloid and the benefits of working digitally, such as immediate degradation-free copies, outweigh the slight differences in aesthetics. Working on celluloid film is a slightly protracted process. After filming, the celluloid would be sent to a lab for processing overnight, and the footage would only be viewable the next day. This process carried an extra expense (the investment in the raw film stock, as well as the processing and shipping costs) and meant a delay in identifying any issues in the captured footage. Compare this to the ability to instantly play back a shot captured digitally, as well as the ability to shoot as much footage as you have hard drive space to store. It should be noted, however, that celluloid film is still considered to be the gold standard. People still compare the images created by digital cameras with celluloid film, not the other way around. No-one says ‘that film looks as good as the digital’. This is partly because of celluloid film’s infinite colour space and unsurpassed resolution, as well as the other technical imperfections that add texture to the moving image. By comparison, digital images can appear cold and lifeless. It is also because every cinematic film seen up until the early 2000s was shot on celluloid film and we are very used to the way it looks, even if we are unaware of it. Until there is a significant number of films created on digital cameras (enough to eclipse the century of celluloid cinema that went before), celluloid will always be the point of reference. Cast your mind forward a few hundred years into the future. Even if celluloid film has not been used for centuries in film production, cinematographers and directors will likely still be consciously or unconsciously conforming to a set of aesthetics that was established in the first hundred years of filmmaking.

A brief history of the filmmaking process

The celluloid filmstrip is the traditional and long-standing method for capturing photographic images. In colour photography, it works by means of a chemical reaction that takes place when light projected through a lens reaches the silver halide crystals within three separate emulsion layers that are fixed to the celluloid (a type of plastic). Each of the three emulsion layers is sensitive to one of the three primary colours of light – red, green and blue – making it possible to capture a full colour image on a single strip of celluloid. When the film is developed in a laboratory using additional chemicals, the reaction is made permanent and the pattern of silver halide crystals is changed into silver metal. See Plate 1 for a cross section of celluloid film. The resulting filmstrip contains a negative version of the final image: the brightest areas appear to be the darkest and vice versa. Light can then be projected through the developed negative, the image inverted, and a workprint made. The workprint can then be projected or physically edited using a film splicer. The 35mm celluloid film frame was introduced in 1892, a creation of William Dickson and Thomas Edison using film supplied by George Eastman of the Eastman Kodak Company (Kodak). This was not the only variety of celluloid film in use at the time, but by 1909 celluloid film that was 35mm wide with four perforations per frame had been accepted as the international standard gauge for motion-picture film. These early films were short and experimental in nature and often documented ‘actualities’, or a record of daily life. One notable early film is Georges Méliès’ fantastical special-effects masterpiece A Trip to the Moon, which was released in 1902 (see Martin Scorsese’s brilliant Hugo for more insight into that). A Trip to the Moon, and virtually every film that followed for 100 years, from Citizen Kane to Kes to Jurassic Park, was shot using the same 35mm film process. There have been filmmakers and films that deviated from 35mm celluloid during that time, experimenting with larger format film stocks (for example, 70mm and the IMAX system), the idea being that a larger surface area of celluloid would result in a superior quality image. While this general principle is true, economies of scale meant that the industry standard of 35mm celluloid has always been the rule, and everything else a notable exception to that rule. Camera technology did develop over that century, but it was the technology behind the celluloid film that saw the biggest development. Film stocks evolved from black and white to partial colour to full colour. They gradually became faster and faster, meaning that they became more sensitive to light, which in turn meant that filmmakers needed less cumbersome lighting equipment, leading to more versatility in the filmmaking process. Note this irony: a 35mm film camera from 100 years ago could be used with 35mm film from today to create high-quality, better than UHD (Ultra High Definition) 4K images, provided the camera is still in good working order. However, the best digital camera available today will be obsolete in 100 years, and today’s standard digital cameras will probably be obsolete in much less time than that. I note this without any particular bias towards celluloid over digital imaging; rather, I am acknowledging the irony that the older technology will outlast the newer.

Enter video

The CCD (Charge-Coupled Device) imaging sensor was created in 1969 at AT&T Bell Labs by Willard Boyle and George E. Smith in the United States. The CCD imaging sensor converts the light projected onto it by the camera lens into a digital representation of that image. That image is broken down into a number of pixels that contribute to the resolution of the resulting image. The general rule is that the more photosites a digital sensor has, the more pixels in the final image; having more pixels equals a higher-quality, higher-resolution image. When the CCD imaging sensor was created, it was used in broadcast TV productions and adopted into professional broadcast and consumer camcorders. The sensors were typically small, and the resolution was low, resulting in an inferior image when compared to what was possible using celluloid film. Camcorders recorded images to a variety of different tape systems such as DVC Pro, DV Cam and miniDV. While early digital video cameras produced a sub-standard quality of image when compared to those created on celluloid film, there was one big advantage in using the technology: it was much cheaper. Tapes were cheap to buy and could be reused. There were no processing costs, as the tapes could be logged and captured using a computer without the need to pay expensive laboratory development and digitisation fees. Independent filmmakers could use video to create feature films for much less money than if they had to shoot on film, launching the careers of a number of filmmakers who are rightly considered masters today. Lars von Trier and the Dogme 95 movement began by using both Academy Aspect Ratio 35mm film (more on that in Chapter 11 Aspect ratio) and relatively cheap, commercially available video cameras. Von Trier’s 1998 film The Idiots was shot on DV, as were a slew of other independently produced DV films from around the world. The image quality of the video was unquestionably inferior to that created by celluloid film, but the filmmakers used the inferiority of the image to their advantage. The ‘cheaper’ video look was associated with TV and video documentary and so added an element of authenticity and ‘reality’ to the film. It did not look like a movie; rather, it looked like a TV documentary. If you have not seen these films, watch the trailers for The Idiots, Chuck and Buck and The Celebration and notice this video-like quality. This aesthetic became an established look in its own right. It was in 1999 that Star Wars: The Phantom Menace used digital video for a number of shots, making it the first major Hollywood release to use digital video. Reportedly, George Lucas had been pushing the camera department at Sony to produce a camera that would shoot a high-quality digital video image that could be used interchangeably with celluloid film.3 In the documentary of the making of The Phantom Menace, Arriflex celluloid film cameras can clearly be seen in use,4 so I would assume that the technology was not deemed to be a sufficient replacement for celluloid film at that time, as the digital video cameras seem to have been used sparingly and with consideration. However, they were used, so it represents a turning point in the adoption of digital video cameras in the established movie industry.

Cinema-quality digital cameras

In 2007, Lord of the Rings director Peter Jackson created the short film Crossing the Line using a revolutionary new digital cinema camera from new camera company Red. The camera – called RED ONE – was a 4K digital cinema camera that used a sensor exactly the same size as the area of film exposed in a S35mm (Super 35mm) cinema camera. It also recorded a very high-quality uncompressed image that was far superior to the compressed video quality of other types of camera (more on this in Chapter 14 Codecs). The Red cameras that have followed, as well as a move into digital from legendary camera manufacturer Arri, have coalesced to the point where today, in 2019, the majority of big-budget feature films are shot on these expensive high-quality cinema cameras. The number of mainstream movies shot on celluloid film today is in decline to the point of scarcity. Unfortunately, these digital cinema cameras are still out of the realms of affordability for most educational institutions and independents.

The digital revolution

In 2008 Canon released their DSLR camera, the Canon EOS 5D Mark II. Because of its large photography sensor (even larger than the S35mm size of motion-picture celluloid film), this photography camera could be used to create cinematic-looking video images that appeared superior to those from other video cameras, which at this time used much smaller imaging sensors. The large sensor in the Canon 5D Mark II could be used to create images that have a very shallow depth of field, meaning that the foreground and background can be blurry and out of focus while the subject is kept in focus. This shallow depth of field brings with it an association with a certain cinema aesthetic, and so heavily contributed to these cameras being used to create a filmic look. One chapter of this book will look specifically at shallow depth of field and how to achieve it (Chapter 6 Depth of field).

FIGURE 1.1  Comparison between S35mm film and the Canon 5D digital sensor (full frame). Actual size. Source: Copyright © 2019 Sam Pratt.

FIGURE 1.2  Shallow depth of field image. Source: Copyright © 2019 Jonathan Kemp.

The sudden availability of this relatively affordable camera (around £2,500 at the time of launch, without lens) and the ease with which it could be made to create a shallow depth of field image led to an explosion of DSLR filmmaking. The price might have been out of the range of the average hobbyist, but it was within the realms of affordability for many people, including educational institutions, independent filmmakers and professional and semi-professional photographers. Coupled with the explosion of video content hosted on the internet (linked to the increase in broadband and fibre internet speeds), suddenly shallow depth of field videos were everywhere. Other photography camera manufacturers also enabled video capture on their DSLRs. Independent and student filmmakers were using DSLRs to shoot shorts and features for very little money. The digital cinema revolution had arrived. The holy trinity of affordability, cinematic aesthetics and acceptable technical quality left independent filmmakers feeling as though they could compete with the big players in the film world. This, combined with the increase in capabilities of commercially available computing and the reduction in price of post-production software (editing, colour grading and visual effects), led to independents making a big splash in the filmmaking world. One notable addition to the digital cinema revolution was the release in 2012 of the Blackmagic Cinema Camera (followed by subsequent models). The camera was built as an alternative to extremely expensive cameras such as the Red and digital Arri cameras, capturing 2.5K (better than HD – High Definition) and later 4.6K (better than 4K) images in a high-quality uncompressed format and at a fraction of the price. At launch, the Blackmagic Cinema Camera sold for £2,500 without a lens. So far the Blackmagic cameras have been popular with independent filmmakers and educational institutions, but have not been adopted large-scale into big-budget productions.

Digital revolution in big-budget movies

The adoption of cheaper video cameras by low-budget filmmakers has filtered into all areas of the filmmaking world, meaning that relatively inexpensive cameras have been, and continue to be, used in big-budget TV shows and major Hollywood movies. So far these cheaper cameras are not often used as a main camera, but are often used as a supplementary or crash camera:
•    The inexpensive ‘action cam’ (a new classification of camera with a wide field of view, designed to be small and light enough to attach to a helmet or other extremity) from the company GoPro has been used in a variety of films, including the barrel sequence in The Hobbit: The Desolation of Smaug.5
•    Sequences filmed in the New York City metro in Black Swan used a Canon 7D (a cheaper, smaller version of the Canon 5D Mark II).6
•    Certain shots in Mad Max: Fury Road used a Blackmagic Pocket Cinema Camera (a smaller version of the Blackmagic Cinema Camera).7
Two notable exceptions to this trend of only using a less expensive video camera as a supplementary camera are:
•    The 2010 season 6 finale of the Fox series House, which was shot exclusively on the Canon 5D Mark II.8
•    Hollywood veteran Steven Soderbergh used an Apple iPhone 7 Plus and a Moment Lens9 to shoot the 2018 film Unsane.10
The big-budget film world’s general reluctance to use the cheaper video cameras in place of the more expensive cinema cameras11 should not necessarily be seen as a denigration of the cheaper technology. That big-budget productions use these cameras at all is an indication that they are good enough. The difference between the expensive digital camera tools that the professional film industry uses and the cheaper tools that we have access to can be put down to three factors:
1    Ergonomics. The expensive tools are designed to be used on large-scale film productions and are set up to be worked on by a film crew. Some video cameras that we use might be designed primarily for a different purpose, e.g. the DSLR is designed primarily to be used as a stills camera, so the form is small. A smartphone is primarily designed to be used as a phone and carried in a pocket, so the form is smaller still.
2    Durability. The more expensive tools are built to be used a lot and to sustain multiple uses without damage.
3    Budget. Budget does come into this, of course. The cheaper cameras have cheaper components and smaller capacity storage devices, and are generally built to be more affordable. This results in a compromise in functionality.
The compromise involved in using the cheaper camera equipment can mostly be navigated with some thought and careful use, and should not be used as a reason not to create with these less than perfect but nonetheless adequate tools. In fact, navigating the quirks of the equipment we have access to will teach us to be even more respectful of the more purpose-designed tools as and when we ‘graduate’ to using them.

The cinematographer

If you are interested in this book, it is likely that you are on the path to becoming a cinematographer. It is important to know the role of the cinematographer and what things fall under their jurisdiction. The cinematographer is responsible for the look of the film. The cinematographer is also referred to as a director of photography, or in the video world may also be called a lighting cameraperson. The cinematographer is the head of the camera department. The camera team reports to them, including camera operators (although cinematographers do sometimes operate themselves, like Steven Soderbergh) and the camera assistants (including focus pullers and clapper loaders). They work with the lighting team to produce interesting lighting schemes and they also work with the production design team to create interesting sets to film in. In post-production, the cinematographer will also work with a colourist to manipulate the colour of the film to bring it in line with the vision for the look of the film that has been deliberately created during the filming process. This book will make reference to the cinematographer, but you should probably consider that this role refers to you.

Activities

Most of the chapters in the book give you specific instructions for activities and camera tests. These activities are probably the most important thing for you to do, to help cement the theory in this book into practical experience. Without the practical reinforcement at the end of each chapter, you are much less likely to retain this information, either in theory or in practical application. A camera test is meant to target a specific outcome, so knowing what you are testing and what you are looking for is essential. Try to create ‘lab-like’ conditions where you are only changing what needs to be changed. Usually this means the same shot compositionally, but with different camera settings.

Rule: Follow as many rules as possible to achieve a cinematic film look.

Notes

  1    www.engadget.com/2016/03/14/jj-abrams-sxsw-filmmaking-technology/. All websites cited in this book were last accessed on 14 September 2018.
  2    www.slashfilm.com/roger-deakins-digital-35mm-im-ill-film/.
  3    Star Wars: The Phantom Menace Documentary (2004). (DVD) Lucasfilm.
  4    Which inspection of the IMDB technical specifications page also confirms – www.imdb.com/title/tt0120915/technical?ref_=tt_dt_spec.
  5    www.imdb.com/title/tt1170358/technical?ref_=tt_dt_spec.
  6    Pizzello, S. (2010). Danse Macabre. American Cinematographer, Vol. 91, No. 12, pp. 30–47.
  7    Gray, S. (2015). Max Intensity. American Cinematographer, Vol. 96, No. 6, pp. 32–49.
  8    Bloom, P. (2010). Interview with Greg Yaitanes about Season Finale of House. (Podcast). Available at: http://philipbloom.net/blog/other-stuff/case-studies/greg-yaitanes-house-interview-transcription/.
  9    www.shopmoment.com/.
10    www.imdb.com/title/tt7153766/technical?ref_=tt_dt_spec.
11    Aside from Soderbergh, who I suspect was acting as something of a provocateur.

2 FRAME RATE

The rules for setting a cinematic frame rate are as follows:
Rule: For a cinematic look, always shoot at 24 frames per second.
Rule: Avoid interlaced footage when creating cinematic images.
The rest of this chapter will look into why these rules exist and offer activities to help cement your understanding of these rules.

What is frame rate?

Film and video consist of a series of still images played in quick succession. The speed at which the images (or frames) are played is known as the frame rate. Frame rate is one of the most fundamental components involved in achieving a filmic quality in your video images. To fully understand this, you also need to have read Chapter 5 Motion blur. Frame rate primarily affects the perception of motion and whether motion (an object or camera move) is perceived as smooth or ‘jumpy’.

FIGURE 2.1  Simplified camera 1 – lens, shutter, gate and film. Source: Copyright © 2019 Sam Pratt.

From the beginning of cinema in the late 1800s until the mid-2000s, movies were shot almost exclusively on celluloid film. The strip of celluloid has perforations running along both sides to enable a mechanism inside a motion-picture film camera to advance onto the next image. The traditional amount of space per image is four perforations in height, although cameras have also been configured to run a slightly smaller exposed celluloid area at a rate of three perforations per frame, and a smaller celluloid area still at two perforations in height. See Chapter 10 Image plane for more on this. Inside the camera there is a film gate that is attached to a lens and a claw mechanism, which advances the film onto the next frame once the current frame has been exposed to the light from the lens. In a digital video, television or cinema camera there is no celluloid film moving at a specific rate. Rather, there is a digital imaging sensor that electronically converts the projected image into a digital one. A small onboard vision processing unit (a dedicated microprocessor) then converts the succession of digital images into a digital video file.

FIGURE 2.2  Simplified camera 2 – lens and sensor. Source: Copyright © 2019 Sam Pratt.

The frame rate is usually displayed using the abbreviation fps – frames per second. Some digital cameras and television sets also refer to the frame rate of a video in Hertz (Hz), a measurement of cycles per second. So a frame rate of 24 fps could also be described as 24Hz (24 times per second) and 50 fps could be 50Hz (50 times per second).

Minimum perceptible frame rate

The phi phenomenon is an optical illusion in which a series of still images is perceived as one continuous movement. This illusion was first described by Max Wertheimer in Gestalt psychology in 1912. It was noted that the human brain can perceive around 10–12 individual frames per second as separate images. At speeds faster than 12 frames per second, the phi phenomenon optical illusion activates and our brains blend the images together into one continuous motion image. So the minimum potential frame rate for continuous motion is 12 fps.

Maximum perceptible frame rate

There is a general principle underlying frame rate in film and video: the more frames recorded and played back per second, the more realistic the motion in the image will look.

And so, film and video makers looking to create the most realistic viewing experiences have been trying to increase cinema and video frame rates to make motion appear more life-like. However, the human vision system (our eyes and brain) cannot perceive a difference in frame rate above 120 fps. Therefore, anything above a playback frame rate of 120 fps would be wasteful, and so 120 fps could be considered the maximum potential frame rate for cinema and video. There has been a push in recent years for higher frame rates in video games. The more powerful games consoles boast higher frame rates, which help to replicate a more natural, smooth motion that is closer to how we experience reality. So, in conclusion, potential video playback frame rates vary between 12 fps and 120 fps.

Early frame rates varied

When the standards of cinematic motion pictures were being set in the early 1900s, there was a debate about how many frames per second were necessary to create a sufficiently smooth moving image that looked natural, but that was not needlessly wasteful with the expensive celluloid film stock. Early experiments ranged from 14 fps to 26 fps, but 16 frames per second was a commonly used rate. Early silent movies, such as the Buster Keaton and Charlie Chaplin films, were shot at a variety of different slower frame rates. The cameras on some of those early films were ‘hand cranked’, meaning that the camera operator was manually turning a handle that advanced the film through the camera at a variable rate. The result was an inconsistent frame rate when viewed back at a constant speed, i.e. through a modern projector. Many of these early films look like they are slightly speeded up when viewed on modern, consistent frame rate cinema projectors, which often run at a higher frame rate than the original recording.

Cinema standard frame rate – 24 fps

Early experiments into frame rate were primarily concerned with balancing smoothness of the motion in the resulting image with the most cost-effective use of film, i.e. a balance between cost and finding a frame rate that was ‘good enough’ to provide smooth motion. When audio was introduced into the film process, it was optically printed onto the celluloid. In order to ensure that the audio quality was high enough, the usual frame rate of 16 fps was insufficient. The frame rate needed to be increased, while still keeping a mathematically even number of frames that was easily divisible, so that editors (who were physically cutting films) could divide a shot exactly in half, in quarter and so on. The magic number was established internationally in 1929 as 24 fps. This frame rate is still used in cinema today, even with video and digital cinema cameras. One thing to notice is that the traditional cinema frame rate is much slower than the maximum perceived ‘frame rate’ of our human visual system, 120 fps. Cinema’s 24 fps is slow compared with the effective frame rates of 50 fps and 60 fps in European and American television. This slow frame rate results in a layer of artificiality; an element of unreality that cinema possesses and that real life and the higher frame rates of video and television do not share. It is precisely this level of unreality inherent in the slow frame rate of 24 fps that contributes to the look of film. The slow frame rate, in combination with the relatively large amount of motion blur (see Chapter 5 Motion blur for more on this), is a huge part of creating a cinematic aesthetic.
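
One way to see why an easily divisible frame count appealed to editors who were physically cutting film is to compare how many whole-number divisors different candidate rates have. The short sketch below is purely illustrative; the comparison rates are my own choice, not figures from the book.

```python
# A rough illustration of why 24 was an editor-friendly choice: one second
# of film at 24 fps can be split into more equal whole-frame parts than
# one second at the nearby candidate rates.
def divisors(fps):
    return [d for d in range(1, fps + 1) if fps % d == 0]

for fps in (16, 24, 25, 30):
    print(fps, divisors(fps))
# 16 [1, 2, 4, 8, 16]
# 24 [1, 2, 3, 4, 6, 8, 12, 24]
# 25 [1, 5, 25]
# 30 [1, 2, 3, 5, 6, 10, 15, 30]
```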

Interlaced and progressive frames

Video cameras can capture individual frames using two different methods. The most straightforward method uses progressive frames. A progressive image is captured from the top of the image to the bottom, progressively. This is how digital cinema cameras capture single images, and it may be the way you instinctively imagine images to be captured. The other method uses interlaced frames, and this is used exclusively for broadcast television. The interlaced method was created independently by German Telefunken engineer Fritz Schröter in 1930 and in the USA by RCA engineer Randall C. Ballard in 1932. It was devised to avoid a flicker issue and conserve bandwidth while also operating at a higher frame rate (for smoother motion).

The interlaced method works by splitting one frame into two fields: field A and field B. Field A consists of every odd line of pixels from the full image, and field B consists of every even line of pixels, taken a fraction of a second later. When the two fields are played in quick succession, the resulting effect is that of a higher frame rate image. Interlacing is a clever cheat: an efficient way of doubling the perceived frame rate of a moving image (resulting in more realistic movement) while keeping the underlying frame rate the same. Interlaced frame rates are usually indicated with the letter i. For example, 50i is 50 interlaced frames per second, while a frame rate indicated as 50p would be 50 progressive frames per second.

FIGURE 2.3  Interlaced fields. Source: Copyright © 2019 Jonathan Kemp. Note: These two fields are effectively two separate frames within the same frame. Each field consists of 50% of the pixels, on alternate odd and even lines. These images appear quite faint in print because the missing lines of pixels are represented as white in print.

FIGURE 2.4  Interlaced combing artefacts. Source: Copyright © 2019 Jonathan Kemp. Note: When the two fields are combined without any software blending or deinterlacing, the result is this ‘combing’ effect, where both fields blend into each other.

When interlaced images are played back on anything other than a television set (e.g. a computer monitor or tablet), interlaced combing artefacts are apparent on the footage, as the software displays the two slightly different images at the same time. These combing artefacts can often be seen on footage created by people who are unaware of the interlacing process and have not implemented the correct procedure for correcting it. I recommend always avoiding interlaced frame rates when creating cinematic work. Combing artefacts look very unprofessional and should be avoided. It is possible to deinterlace a video on the timeline of a non-linear editing (NLE) system such as Avid Media Composer, Adobe Premiere Pro, Apple Final Cut Pro or DaVinci Resolve, but the perceived frame rate will revert to the true underlying frame rate (i.e. it will lose the smooth motion effect). It is much better to avoid interlaced frame rates to begin with when undertaking cinematic film work on video. If you intend to shoot something and make it look like a film, but you would also like it to be broadcast on television, you should still shoot progressive. It is always possible for a television station to broadcast progressive footage on an interlaced television channel (think of any popular movie you have ever watched on terrestrial TV stations).
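
For readers who want to see the mechanics, here is a minimal sketch (in Python with NumPy, using made-up arrays rather than real video frames) of weaving field A and field B into one frame, and of the crudest kind of deinterlacing by line doubling. It illustrates why combing appears when the two fields were captured a fraction of a second apart, and why deinterlacing drops you back to the true underlying frame rate; it is not a substitute for the deinterlacing tools in an NLE.

```python
import numpy as np

def weave_fields(field_a, field_b):
    """Interleave field A (lines 1, 3, 5, ...) and field B (lines 2, 4, 6, ...)
    into a single full-height frame."""
    frame = np.zeros((field_a.shape[0] * 2, field_a.shape[1]), dtype=field_a.dtype)
    frame[0::2] = field_a
    frame[1::2] = field_b
    return frame

def line_double(field):
    """Crude deinterlace: discard one field and repeat each remaining line."""
    return np.repeat(field, 2, axis=0)

# Two fields captured a fraction of a second apart: a bright vertical bar
# that has moved two pixels to the right between field A and field B.
field_a = np.zeros((4, 12), dtype=np.uint8)
field_b = np.zeros((4, 12), dtype=np.uint8)
field_a[:, 4:6] = 255
field_b[:, 6:8] = 255

combed = weave_fields(field_a, field_b)  # alternate lines disagree -> 'combing'
smooth = line_double(field_a)            # no combing, but only field A's moment survives
print(combed)
print(smooth)
```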

Video standard frame rates

With the invention of television and electronic frame rates in the mid 1930s, three different systems emerged over the next couple of decades:
1    The NTSC (National Television System Committee) system is used mainly in North America and Japan. NTSC was the original TV standard and was made to be used in North America with the 60Hz (cycles per second) A/C (alternating current) electricity system. The rate was designed to match the frequency of the electricity, so 60 images (fields) were displayed per second. The frames were interlaced to save bandwidth, so NTSC uses a base frame rate of 30 fps, with an interlaced frame rate of 60i.
2    The PAL (Phase Alternation by Line) system is used in the UK, most of Europe and Australia. PAL was designed to solve some of the colour issues that plagued the NTSC system and to work with the 50Hz (cycles per second) electricity used in Europe and other countries. The frequency of the electricity lent itself to using 50 images (fields) per second. The frames were interlaced to save bandwidth, so PAL uses a base frame rate of 25 fps, with an interlaced frame rate of 50i.
3    The SECAM (Séquentiel Couleur Avec Mémoire or Sequential Colour with Memory) system is used in France, Russia and parts of Africa, and uses the European 50Hz power, and so 25 fps or 50i.
Sometimes these frame rates are described as 25p and 50i, or 30p and 60i. By definition, the NTSC, PAL and SECAM standards refer to standard definition (SD) footage. High definition (HD) footage can have a variety of frame rates: 24p, 25p, 30p, 50p, 50i, 60i and others.
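
The figures above can be collected into a small lookup table; this is simply a restatement of the rates described in this section for quick reference, not an exhaustive list of broadcast formats.

```python
# Base frame rates and interlaced rates for the SD broadcast standards
# described above, alongside the cinema standard for comparison.
FRAME_RATE_STANDARDS = {
    "NTSC":   {"mains_hz": 60,   "base_fps": 30, "interlaced": "60i"},
    "PAL":    {"mains_hz": 50,   "base_fps": 25, "interlaced": "50i"},
    "SECAM":  {"mains_hz": 50,   "base_fps": 25, "interlaced": "50i"},
    "Cinema": {"mains_hz": None, "base_fps": 24, "interlaced": None},
}

for name, spec in FRAME_RATE_STANDARDS.items():
    print(f"{name}: base {spec['base_fps']} fps, interlaced {spec['interlaced']}")
```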

High frame rate in cinema

The higher frame rates of 50i and 60i have been used exclusively on television shows since the standards were developed. The high frame rate (HFR) look has become associated with television content (such as news programmes, game shows and daytime soap operas) and the comparatively slower 24p has remained the standard frame rate for cinema. As such, two very different aesthetics have been established in the minds of viewing audiences, whether they appreciated it or not. Hollywood film studios have tried to provide more immersive experiences for moviegoers, hence the re-introduction of 3D films that made their way back into cinemas in recent years (believe it or not, 3D films are almost as old as cinema itself. Even Alfred Hitchcock filmed and screened Dial M For Murder in 3D in 1954). One big problem with 3D films in the cinema is that fast-moving motion becomes stuttery and difficult to focus on. The relatively slow frame rate of 24 fps is adequate for normal motion, but is inadequate to properly display a lot of movement, particularly in 3D movies. Hollywood film producer and director Peter Jackson shot and theatrically released The Hobbit trilogy (2012 to 2014) in 3D using the HFR-branded process of 48 fps. This 48 fps is exactly double the frame rate of regular cinema and is very close to the UK TV and video standard frame rate of 50i. The HFR process, however, uses regular progressive frames, not interlaced frames. The Hobbit trilogy used the 48 fps to greatly improve the 3D viewing experience, particularly during the complicated shots that featured a lot of movement. The stuttering, out-of-focus artefacts were dramatically reduced. The film received a lot of criticism, though. While the HFR had smoothed out the 3D issues during the more visually impressive shots, the relatively normal shots were accused of looking too much like a television soap opera.1 The movement of the 48 fps cinematography looked too much like the 50i and 60i images from the world of TV for audiences to accept them as truly cinematic. This is a really important lesson to bear in mind: choosing the filmic frame rate of 24 fps is important when trying to create something that looks like a movie. Another Hollywood producer and director to experiment with HFR is James Cameron. When considering how to create the most realistic and believable looking worlds for his upcoming series of Avatar sequels, Cameron has decided to use HFR, but only for the sections of the movies that feature a lot of movement. Speaking to the Hollywood Reporter, Cameron said:

I think [HFR] is a tool, not a format. I think it’s something you want to weave in and out and use it when it soothes the eye, especially in 3D during panning, movements that [create] artifacts that I find very bothersome. I want to get rid of that stuff, and you can do it through high frame rates . . . I feel you still have to have a little bit of that veil of unreality that comes with 24 frames per second. This is my conclusion now. I don’t think you do it wall-to-wall, I think you do it where you need it.2

Cameron seems to have taken on board some of the criticism of the HFR as used in The Hobbit and decided to stick to the cinematic 24 fps as much as possible. As he says in the above quote, it is precisely the unreality of 24 fps that helps give cinema its unique aesthetic quality.

HFR for slow motion

We have talked about a variety of different frame rates already in this chapter. Every frame rate so far has assumed that the playback frame rate would be the same as the recorded frame rate, but that is not always the case. For a long time, camera operators have recorded at an HFR with the intention of watching the footage back at a normal frame rate. This creates a slow-motion effect. For example, footage that was recorded at 50 fps but played back at 25 fps would appear to be running at half speed. Footage shot at 100 fps but screened at 25 fps would appear to be running at a quarter of the speed. This is the principle behind slow-motion cinematography. In the past there were physical limitations on the maximum speed at which celluloid film could be pulled through a traditional HFR camera. These physical restrictions have now been removed, as there is nothing physically moving through a digital camera system. The only restrictions on the frame rate of a digital camera are the available light through the lens to achieve a proper exposure (higher frame rates need more light) and the processing speed of the camera (the microprocessor). As such, very high-speed digital cameras have been developed. Cameras such as the Phantom Flex can record 4K images at 1000 fps, which would produce an image that appears to be slowed down about 40 times. Even commercially available smartphones, such as the Samsung S9, can shoot at 960 fps for slow motion (in a burst of 0.2 seconds). The standard Apple iPhone can record 240 fps video for playback at 30 fps, resulting in video that appears to be eight times slower.

Time-lapse for fast motion The opposite of this HFR, slow-motion effect is time-lapse photography, where images are taken with seconds, minutes or hours in between, resulting in video images that appear to have been speeded up.3

Non-standard frame rates Occasionally cinematographers will shoot at non-standard frame rates to achieve specific effects. One such technique, born of playful experimentation with frame rates, has been used in certain modern action movies. During Captain America: Civil War, the cinematographer and director favoured a technique where they would film certain moments of action at a slightly slower frame rate of about 22 or 23 fps. When viewed back at the standard 24 fps, those shots seemed very slightly speeded up: not enough for the change itself to be noticeable, but enough to subtly add the perception of speed to the superhero characters.4
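The same arithmetic as the slow-motion example works in reverse for this undercranking trick. The small sketch below (my own illustration, assuming the roughly 22–23 fps figures quoted above) shows just how subtle the resulting speed-up is.

```python
def speed_up_factor(recorded_fps, playback_fps=24):
    """How much faster the action appears when slow-shot footage is played back at 24 fps."""
    return playback_fps / recorded_fps

for fps in (23, 22):
    print(f"shot at {fps} fps -> action plays about {(speed_up_factor(fps) - 1) * 100:.0f}% faster")
# shot at 23 fps -> about 4% faster
# shot at 22 fps -> about 9% faster
```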

24 or 23.976? You may notice that some cameras that will shoot 24 fps also give you the option to shoot at a frame rate of 23.976. This is so close to 24 fps that it is virtually the same thing. So what is the difference? The slight variation in frame rate was devised to solve an issue with audio falling out of sync with the NTSC video system, which technically runs at 29.97 fps and 59.94i. While most digital projects shot for cinema or broadcast in the USA are shot at 23.976, it is not necessary to do so. There is certainly no harm in shooting 23.976, as all NLE systems will work with either, although a DCP (Digital Cinema Package) runs at 24 fps. The aesthetic difference between the two frame rates is negligible. The important thing to consider is that you should keep the frame rate consistent within the same project. For the sake of simplicity I will always refer to 24 fps throughout the book.
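For those curious about where the odd number comes from: the NTSC-related rates are the round numbers multiplied by 1000/1001, so 23.976 is really 24000/1001. The short sketch below is my own back-of-the-envelope illustration rather than anything from an NLE, showing both the relationship and the roughly 3.6 seconds per hour of drift that mixing the two rates in one project would introduce.

```python
from fractions import Fraction

ntsc_factor = Fraction(1000, 1001)
rate_23976 = 24 * ntsc_factor          # 24000/1001, approximately 23.976 fps
rate_2997 = 30 * ntsc_factor           # approximately 29.97 fps
print(float(rate_23976), float(rate_2997))

# One hour of true 24 fps footage contains 86,400 frames; played at 23.976 fps
# those same frames take about 3.6 seconds longer, which is why audio drifts
# if the two rates are mixed in a single project.
frames_per_hour = 24 * 60 * 60
print(frames_per_hour / float(rate_23976) - 3600)   # ~3.6 seconds
```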

Frame rate in animation Because traditional animation is hand-drawn, animators have tended to use a lower drawing rate to minimise the number of drawings to be created. Traditionally, hand-drawn Disney animated films have been drawn at 12 drawings per second (each drawing held for two frames of the 24 fps film), the minimum potential rate for smooth motion.

Computer-animated films can in theory be rendered at any frame rate that the filmmakers desire (rendering time is the only consideration) but they are usually rendered at 24 fps like their photographic counterparts.

Artificial HFR televisions Some modern television sets proudly proclaim a feature that will (they say) improve your viewing experience by adding smooth motion to films and television. Manufacturers offer such gimmicks as 100Hz motion smoothing, which is a process of artificially adding frames between the existing frames to television and video content. This is based on the technical and somewhat logical idea that if a higher frame rate is more realistic, then why not offer a HFR for everything, even if it was shot at a slower frame rate? I can see the logic here, but personally I think it does not look good (there are usually artefacts present in the images when the motion smoothing effect is used) and I would rather see a film or video at the frame rate the creator intended it to be seen at. These features can be turned off.

Summary When looking at the possible variations of the different frame rates, it is worth noting that the 24 fps cinema frame rate is a lot slower than almost every other option. In the years after the standardisation of cinema production, higher and higher frame rates became possible. However, we still perceive 24 fps to be a cinematic frame rate purely because almost every piece of cinema we have ever watched has been shot at 24 fps. It is a slow frame rate, but we are used to seeing it in the cinema.

Activities •    Devise a shot that has camera movement and movement of a character or object within the frame. •    Take the shot at 50 or 60 fps (progressive or interlaced). •    Take the same shot at 24 fps (or 25 fps if 24 is not available).

•    Play back the two shots on the largest screen you have and notice the difference in ‘smoothness’ between the motion in the two shots. (Tip: If you shot using an interlaced frame rate, you may need to view the resulting footage on a TV, or using video editing software that can display the interlaced fields. Any professional software such as Adobe Premiere Pro, Final Cut Pro X or DaVinci Resolve will be able to do this.) •    If your TV has a motion-smoothing effect built in, turn it on and off and notice the difference this has on the motion.

Frame rate rules recap Rule: For a cinematic look, always shoot at 24 fps. Rule: Avoid interlaced footage when creating cinematic images.

Notes 1    The term ‘soap opera effect’ is an internet meme in its own right. 2    www.hollywoodreporter.com/behind-screen/james-cameron-promises-innovation-avatar-sequels-as-hesfeted-by-engineers-942305. 3    See the excellent poetic films Baraka (1992) and Samsara (2011) for beautiful time-lapse imagery. 4    Dillon, M. (2016). Heroes Divided. American Cinematographer, Vol. 97, No. 6, June, p. 46.

3 EXPOSURE

The fundamental rule for setting a correct exposure is: Rule: Always record using full manual mode. The rest of this chapter will look into why this rule exists as well as laying the foundational knowledge for the following chapters, specifically chapters 4 Grain, 5 Motion blur and 6 Depth of field.

Exposure Exposure is a fundamental concept in both cinematography and photography. In simple terms, it can be thought of as how bright or how dark an image is captured. This is probably the most important aspect of image creation to get right. The task of the person responsible for setting the exposure level (usually the cinematographer) is a simple one: to record an image that is bright enough so that the subject of the shot is not underexposed (too dark) and can be clearly seen, but is not overexposed (too bright), with detail lost in areas that appear white. It is a simple problem, but one that requires constant consideration. Consistently achieving an image that is neither under- nor overexposed requires a lot of attention to detail, a thorough understanding of the theory (covered in this book and elsewhere), a practical working knowledge of the equipment you are using and a lot of practice. It is simultaneously straightforward and complicated.

FIGURE 3.1  Underexposure – overexposure – correct exposure. Source: Copyright © 2019 Jonathan Kemp.

I am sure every professional cinematographer or camera operator could tell you stories of times when they were horrified to review footage and notice that they had recorded an image that was either too bright or too dark. It is easy to do if you are not concentrating. To make this already-tricky job even trickier, less expensive non-professional camera equipment (such as the kind you may be using) can often be less forgiving than professional cinema cameras. Images created using less expensive cameras tend to under- or overexpose more easily. See Chapter 9 Dynamic range. Discovering and using the tools that your camera may have to help you gauge your exposure levels is essential when setting exposure. See Chapter 8 Camera tools for more on this.

An f/stop An f/stop, otherwise known as a ‘stop’, is an important concept and piece of terminology in photography and cinematography. A stop is a measurement of light doubling or halving in intensity. To decrease the exposure level by one stop is to reduce the light level by half. To increase the exposure level by a stop is to double the light level. It is a simple concept, but it will keep being relevant as we go forward.

Fast and slow In photography and cinematography, reference is often made to speed, either fast or slow. The general principle is that something fast results in a brighter image, and something slow results in a darker image. For example, it can be used to describe a celluloid film stock (‘This film stock is fast’) or a lens that lets a lot of light through and is particularly good in low-light situations (‘That is a very fast lens’).

There is no single ‘exposure’ control Sometimes I hear a student say something like ‘turn the exposure down’ or ‘the exposure needs to come up’. This is not specific enough to be correct. While it is clear that the student has made an assessment that the image is too bright or dark and has thought logically that the corrective step is to reduce or increase the exposure level, there is no single way to reduce or increase the exposure level. There is no single button, switch or slider marked ‘exposure’. There are actually three different ways to control the exposure level. These three elements make up the exposure triangle.

Exposure triangle The term exposure triangle is a handy mnemonic that was created to remind you that there are three factors to consider when setting exposure levels. Triangles have three sides; likewise, there are three factors that control exposure.1 Each of these factors can have the same overall effect: to reduce or increase the exposure level by a number of stops, or by a fraction of a stop. The three parts of the exposure triangle are: 1    The ISO or gain. This describes and measures the sensitivity of the imaging plane (celluloid film or digital imaging sensor) to the light. 2    The shutter speed. This describes and measures the length of time each frame is exposed to the light. 3    The aperture or iris. This describes and measures the amount of light that is allowed to pass through the lens and onto the imaging plane of the camera (celluloid film or digital imaging sensor).

FIGURE 3.2  Exposure triangle 1. Source: Copyright © 2019 Sam Pratt.

Each of these factors can and will affect the exposure level of the created image (how bright or how dark it is) and so each of the three must be considered. They work together like an equation. For example: you can open up the aperture to let more light in through the lens, but in so doing you might need to shorten the shutter speed or reduce the ISO/gain (sensitivity) to ensure that the overall exposure level is not too bright, resulting in an overexposed image.
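Because every adjustment can be expressed in stops, the balancing act really does behave like a simple sum. The sketch below is a minimal illustration of that bookkeeping (my own, not a feature of any camera): positive stop values brighten the image, negative values darken it, and compensating changes cancel out.

```python
def net_exposure_change(iso_stops=0.0, shutter_stops=0.0, aperture_stops=0.0):
    """Total change in exposure, in stops, from adjusting the three controls."""
    return iso_stops + shutter_stops + aperture_stops

# Open the aperture by one stop, shorten the shutter by one stop: brightness unchanged.
print(net_exposure_change(aperture_stops=+1, shutter_stops=-1))   # 0.0
# Open the aperture by two stops with no compensation: two stops brighter.
print(net_exposure_change(aperture_stops=+2))                     # +2.0
```

Let us now look at the three exposure triangle elements in more detail.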

ISO Your camera may use the terms ISO or gain depending on what kind of camera it is. Both terms are rated in different ways, but both refer to the same effect. You could look at it in the following ways: •    They are both referring to how sensitive the film plane (the celluloid film or the digital imaging sensor) is to the light. •    They are both referring to how much the camera is boosting the signal information, resulting in a brighter image. The term ISO2 originated on celluloid film and is now also found on DSLRs and cinema cameras. In some American publications you may find reference to ASA rating.3 The terms ISO and ASA can be used interchangeably. Something rated in ASA will be the same in ISO rating. An ISO rating of 100 is usually a digital camera’s lowest ISO setting. ISO 100 is the darkest, least sensitive setting, which requires the most light to properly expose an image. The exposure can be increased by one f/stop by doubling the number to ISO 200. It can be increased by another full f/stop to ISO 400 and so on. Different digital cameras have different maximum possible ISO settings and these are a measurement of their sensitivity, or how much of a signal boost is being given to the image. Table 3.1 shows the increase in f/stops when doubling the ISO sensitivity. For example, doubling the ISO sensitivity from 400 to 800 results in an extra f/stop of light in the resulting image. Or, to put it another way, the doubling of the ISO sensitivity has doubled the light level of the image.

Gain Gain is an electronic term that relates to how much signal amplification is being used. Typically camcorders and other video cameras use the gain terminology, rather than ISO. A gain level of 0dB (decibels) would mean no boost to the signal level, resulting in no boost to the brightness of the resulting image. A gain setting of +6dB would be an increase in exposure by one stop. For every 6dB increase of gain, the exposure will double. Also, an increase of +3dB gain would be an increase of half an f/stop. TABLE 3.1  ISO stop chart

Table 3.2 shows the increase in f/stops when the gain is increased in +6dB increments. In this way, ISO or gain can be used to control the overall exposure level of an image.
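If you would like to generate the values in Tables 3.1 and 3.2 yourself, the sketch below does the conversion. It assumes the base points used in this chapter (ISO 100 and 0dB) and the relationships described above: the ISO doubles with every stop, and every 6dB of gain adds a stop.

```python
import math

def iso_to_stops(iso, base_iso=100):
    """Stops of extra exposure relative to the base ISO."""
    return math.log2(iso / base_iso)

def gain_to_stops(gain_db):
    """Stops of extra exposure from a gain boost (+6dB per stop)."""
    return gain_db / 6.0

for iso in (100, 200, 400, 800, 1600, 3200):
    print(f"ISO {iso:>4}: {iso_to_stops(iso):+.0f} stops")

for db in (0, 3, 6, 12, 18, 24):
    print(f"{db:+}dB gain: {gain_to_stops(db):+.1f} stops")
```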

Shutter speed Most cameras use the terminology of shutter speed when describing the duration of the exposure of each frame. Shutter speed is usually referred to in fractions of a second. For example 1/200 is one 200th of a second. Some camcorders simply display it as ‘200’. Other cameras, usually those specifically marketed as cinema cameras, have the option to refer to the shutter duration in measurements of shutter angle, such as a 180° shutter angle. For more about the shutter angle, please read Chapter 5 Motion blur. TABLE 3.2  Gain stop chart

In simple terms, the longer the shutter is open, the more light reaches the image plane (celluloid or digital imaging sensor) and the brighter the resulting image will be. Or, the other way around, the less time the shutter is opened, the less light reaches the image plane and the darker the image will be. However, in most cases, a digital camera only simulates the effect of a shutter. In reality, the smartphone or camcorder does not have a physical shutter continuously blocking and then revealing light to the imaging sensor. What happens with most digital cameras is that the photosites on the digital sensor collect light only for the shutter speed duration before discharging them. The result is comparable to a physical shutter blocking and revealing light to a strip of celluloid film.4 A camera can be set to a relatively fast shutter speed of 1/200 (fast for film and video) or it can be increased in duration to 1/100 resulting in a shutter that is open for twice as long and has also increased in exposure level by one full f/stop. It is important to bear in mind that we are dealing with fractions here, so the larger the number, the shorter the duration is. Table 3.3 shows that halving the shutter speed duration (doubling the number) results in the decrease of one f/stop of light. So, for example, an increase in shutter speed from 1/50 to 1/200 would be a decrease of two f/stops. Conversely, doubling the shutter speed duration (halving the number) results in the increase of one f/stop of light. In this way, shutter speed can be used to control the overall exposure level of an image.
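The same idea can be written out for shutter speed. A small sketch, using the denominators of the fractions (so 1/50 is written as 50), reproduces the kind of values shown in Table 3.3; the 1/50 to 1/200 example from the text comes out as a two-stop decrease.

```python
import math

def shutter_change_in_stops(from_denominator, to_denominator):
    """Change in exposure, in stops, when moving from 1/from to 1/to of a second."""
    return math.log2(from_denominator / to_denominator)

print(shutter_change_in_stops(50, 200))   # -2.0 -> two stops darker
print(shutter_change_in_stops(100, 50))   # +1.0 -> one stop brighter (shutter open twice as long)
```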

Aperture and iris

The terms aperture and iris are often used interchangeably in photography and cinematography. This is the most applicable method for controlling the exposure level for the cinematographer. Aperture or iris refers to the amount of light that is allowed to pass through the lens and reach the image plane (celluloid or digital imaging sensor). This is usually controlled by a series of small metal blades inside the lens that expand and contract to increase or decrease the light that passes through the lens. TABLE 3.3  Shutter speed stop chart

This can be thought of like the iris in our own eyes. We have an iris that expands to let more light into our eyes when in dark conditions, or contracts to let less light into our eyes in bright conditions. When it is dark you might notice the pupil seems to get larger to let more light reach the retina of the eye. Conversely, in bright sunlight the pupil gets smaller to restrict light from reaching the back of the eye. Our eyes and brain have automatic aperture or iris control built in and we don’t even notice it working. The camera uses a sequence of measurements rated in f/stops to gauge how open or closed the aperture or iris is and subsequently how much light will be let onto the imaging plane. The way f/stops are rated, the lower number represents a wider aperture, which allows more light to pass through the lens. For example, a lens that is set to f/1.4 is faster and lets in more light than a lens that is set to f/8. Table 3.4 shows the f/stop settings that result in an increase or decrease in stops. For example, changing the aperture/iris from f/1 to f/8 would be a

decrease of six f/stops; changing from f/5.6 to f/2 would be an increase of three f/stops. In professional publications such as the American Cinematographer magazine, and on expensive Cine lenses, you may find reference to t/stops, instead of f/stops. For practical purposes, these can be considered similar enough to be the same thing, while it should be noted that the t/stop is the more sophisticated and accurate measurement of light.5 In this way, aperture or iris can be used to control the overall exposure level of an image.
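For completeness, here is the equivalent sketch for the aperture. Each full stop multiplies the f/number by the square root of two, so the change in stops between two settings is twice the base-2 logarithm of their ratio; the two examples above come out as expected.

```python
import math

def aperture_change_in_stops(from_fstop, to_fstop):
    """Change in exposure, in stops, between two f/numbers (positive = brighter)."""
    return 2 * math.log2(from_fstop / to_fstop)

print(aperture_change_in_stops(1, 8))      # -6.0  -> six stops darker (f/1 to f/8)
print(aperture_change_in_stops(5.6, 2))    # ~+3.0 -> three stops brighter (f/5.6 to f/2)
```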

Which part of the exposure triangle should I use to control exposure? Each of the three elements of the exposure triangle will control the exposure level of an image. They work in balance, so if you had an image that you thought was exposed correctly, but you wanted to open up the aperture for creative reasons, you would have to shorten the shutter speed or reduce the ISO/gain by an equal number of stops to compensate. TABLE 3.4  Aperture stop chart

The question is, why would you want to favour any one particular element of the triangle over another? The answer is that as well as controlling

the exposure, each element of the exposure triangle produces a different byproduct. These byproducts can be used for creative as well as technical purposes: •    The ISO or gain also controls how much noise or grain there is on an image. See Chapter 4 Grain. •    The shutter speed also controls how much motion blur there is on an image. See Chapter 5 Motion blur. •    The aperture or iris also controls the depth of field of an image. See Chapter 6 Depth of field. Consideration must be given to the methodology you use for deciding how to combine each of these three elements of the exposure triangle. However, for the Activities at the end of this chapter, disregard the byproducts of each method of control until you come to it in the relevant chapters.

FIGURE 3.3  Exposure triangle 2. Source: Copyright © 2019 Sam Pratt.

Manual mode Take your camera and discover how to put it into full manual mode. This is hugely important. You need to have your camera in manual mode virtually all the time, certainly while you are learning to control it. I would be tempted to operate a video or cinema camera in automatic (or semi-automatic) mode only in a situation where I had to be reactive and refocus the camera a lot in a changing light environment. In short, I would only consider automatic mode if I really had to, for practical reasons.

It is very easy to spot when people have filmed something using an automatic exposure mode because the image automatically adjusts to be brighter or darker depending on what is in the shot. For example, in a shot where the camera is exposed for an interior scene, and then moves to a part of a room where there is a lot of light coming in from a window, the image will darken as the brighter part of the room comes into shot and the camera automatically reduces the exposure level to accommodate the changing lighting conditions. This does not look professional and should be avoided if at all possible. You will be able to gain full manual exposure by reading the relevant parts of the operation manual and through experimentation. Usually, you will only have manual control of all three elements of the exposure triangle when you can see a value for each one on the digital display. If you do not see a display for the f/stop (aperture/iris) or the shutter speed, or the ISO/gain then you might not have control of it. Cameras differ, so just make sure that you are in control of each element of the exposure triangle. A good method for determining whether you have control of each of the three exposure triangle elements is to take one of the elements and incrementally stop down until the image is very dark. Then reverse the process and stop up until the image is very bright and as overexposed as it will get. If one of the exposure triangle elements is still in automatic mode the camera will fight you and try to balance the exposure and keep the image exposed ‘properly’ (according to the camera and its built in light meter). If you are in full manual mode, you will be able to really mess the image up by drastically under- or overexposing the image. You will have control of the exposure though, which is the point.

Activities •    Examine your camera to see whether it uses the ISO or gain rating and then determine your camera’s minimum and maximum ISO or gain setting. •    Examine your camera to determine the minimum and maximum shutter speeds that can be achieved.

•    Examine your camera to determine the minimum and maximum f/stop that can be achieved. For those using cameras with interchangeable lenses (such as DSLRs), the aperture can and will change with each lens. Some lenses will be fast and others slower. Determine which is your fastest lens and which is the slowest. Next, using your camera, go and record video images in a variety of different lighting situations and try to expose correctly, using only your eyes and the digital screen (if the camera has a ‘live view’ mode where you can see the changes you are making on the screen). Try to film a subject, ideally a person, although an inanimate object like a toy or a bottle would be fine. It is important to identify the subject that you are trying to expose correctly, rather than just trying to expose for the full scene. Try filming: •    outdoors in the brightest area you can find; •    indoors in a dark area; •    indoors, in front of a window that is letting in a lot of light; and •    beside a lamp or other directional light source in close proximity to your subject. Then return to your optimal viewing environment (ideally a large monitor, TV or projector) in a room suitable for viewing. Review your shots and be critical of the exposure. Ask yourself: •    How well did I think I had exposed the shot while I was filming it? •    How good a job have I done on review? •    Is the subject of the shot bright enough to see clearly? •    Are any parts of the subject under- or overexposed? Look closely. •    Are any parts of the background under- or overexposed? Honestly reflect on your performance and repeat the exercise often.

Exposure rule recap

Rule: Always record using full manual mode.

Notes 1    Other fields of study use triangles as mnemonics to remind you that there are three things to consider. For example, the fire triangle of oxygen, heat and fuel. 2    ISO rating takes its name from the standards body the International Organization for Standardization – www.iso.org/home.html. 3    ASA rating takes its name from the American Standards Association. 4    An exception to this is when you take a still photograph with a DSLR camera and you feel the vibration of the mechanical shutter clicking open and closed. 5    F-stop, which is widely used, is a measurement of light going into the lens and then the light losses inherent in the absorption properties of the lens are calculated to determine the f-stop value. The tstop measurement is taken at the imaging plane, and so more accurate. Burum, S.H. (2007). American Cinematographer Manual. 9th ed. Hollywood, CA: The ASC Press, p. 59.

4 GRAIN

The rule for controlling cinematic grain is: Rule: Always try to use the native ISO of the camera or the lowest possible ISO or gain setting. Increase this as a last resort. This chapter will look into why this rule exists and offer activities to help cement your understanding of this rule.

Gain and grain In this chapter we are talking about gain and grain. The two words are just one letter apart so if read quickly they can look the same. They are related in this context, but are very different things. Gain is the degree to which a video signal is boosted. The more the signal is boosted, the more visible grain (a texture on the image) is noticeable.

Grain Notice the grainy elements on this still image from Killer of Sheep in Figure 4.1. The grain here is the random speckled texture visible on every frame of the film. This is a byproduct of the celluloid film stock.

FIGURE 4.1  Frame from Killer of Sheep 1978. Directed by Charles Burnett. The analogue grain that comes from celluloid film stock can introduce a desirable aesthetic textural quality. The grain dances over the image in a way that can bring life to the image, something that is difficult to appreciate on a still frame, but will become more obvious when you watch a grainy film. This effect also brings with it a certain nostalgia for grainy film stocks used in many older and classic films.

Film stocks In the early days of cinema, celluloid film stocks were very slow, meaning they required a lot of light to produce a well-exposed image. Film stocks have evolved to become more sensitive, requiring less light and also exhibiting less visible grain. The grain was not originally a desirable quality and the intention of film stock developers was to both increase sensitivity to light and reduce visible grain.

However, the grain associated with celluloid film has become a desirable quality for digital filmmakers, because of the nostalgia associated with it. When looking at film grain, we should take a bit of time to decode the information on the motion-picture celluloid film stocks available. Celluloid film stocks are chosen for three reasons: 1    sensitivity to light (practicality); 2    colour and contrast; and 3    film grain size and quality. Kodak is the most well-known manufacturer of film stocks in the world. Its current range of colour film stocks can be found on its website,1 and is reproduced below. Note that there are only four standard colour film stocks currently available. Let us look at a stock from the list and decode all of this information: Kodak Vision3 500T Color Negative Film 5219/7219.

Vision3 – This refers to the product line. Vision3 is Kodak’s most recent film-stock range.
500T – This refers to the film speed (ISO) and the intended shooting environment. This film stock is rated at ISO 500 (note the available ISO range is from 50 to 500; this is much slower than what is available on a modern DSLR). The T shows that the film stock is calibrated for use with tungsten light bulbs (traditionally coloured film-set lights). The other option is D for daylight shooting conditions.
Color negative film – This is a negative film, the most common kind of celluloid film, producing an image that has the brightest parts and the darkest parts of the image reversed. From this negative, a positive print is made.
5219/7219 – This refers to the catalogue number of the film stock. 5219 is for the 35mm variety (the standard movie celluloid size) and 7219 is for the 16mm variety (the cheaper variant, which was often used in journalism, due to its smaller size).

Film grain Film grain is caused by the film stock used and by the formation of metallic silver particles within the exposed film stock. The principle that higher ISOs bring more visible grain is the same too for celluloid. Film stocks rated at ISO 50 have much less visible grain than film stocks rated at ISO 500. Older film stocks were slower (less sensitive to light) and had more visible grain. This film grain was simply a part of the chemical process, but the grainy look of film has become a part of a greater cinematic language. This film grain constitutes a part of the cinematic film look that you might want to adopt into your own work. Organic film grain has a painterly quality. It adds a level of unreality that brings an artificial yet artful texture to an image. Also, the grain structure differs from frame to frame, creating a ‘dancing’ grain effect that is not apparent when only viewing a still image. This analogue film grain can be a big part of the filmic look that you are trying to replicate. Not all films shot on celluloid film have noticeable film grain, but certain films use it as a part of their individual aesthetic look. Different ‘families’ of film stocks from different manufacturers have and had different identifiable grain structure patterns, giving options to the cinematographer who was trying to settle on a particular look for the film. For example, the grain of a particular film stock by Kodak would look different to a particular film stock by Fuji.

Digital noise In contrast with the aesthetically pleasing organic film grain, boosting the signal from a digital imaging sensor will result in a grain that is often seen as unattractive. I am referring to this digital grain as noise, with a deliberately negative connotation. When the term grain is used in this book, it is referring to analogue celluloid film grain. The grain is a physical property of the photochemical process. This photochemical process is missing from a digital camera as images are created into digital images instantly. However, the digital process (all digital cameras and smartphones etc.) has a similar effect, known as digital noise.

FIGURE 4.2  Digital noise. Source: Copyright © 2019 Ethan Rogers.

Digital noise is directly related to the sensitivity of the imaging sensor. If the signal from the digital sensor is boosted, then the result is a noisy image. Most people are in agreement that this digital noise is to be avoided if at all possible. Ideally an image from a digital camera should be noise free. This means that this element of the exposure triangle is not one that can be freely used, as you should be trying to keep the ISO or gain as low as possible. You should even consider bringing in external lights to boost the light level in a scene, or moving the subject to a better-lit area before increasing the ISO or gain too much. Compare the noisy image of Figure 4.2 with Figure 4.3. Figure 4.2 has been taken with a high ISO/gain setting and Figure 4.3 has been taken with a low ISO/gain setting. The constant principle that can be observed is that the higher the ISO or gain level, the more grain or noise will be present in an image.

Native ISO

The imaging sensors of digital cameras have a ‘native ISO’, which is the ISO setting that the camera uses before it boosts the image, introducing noise. Using your camera’s native ISO is the best way to achieve a noise-free image. If you boost (or even reduce) your ISO, you will start to introduce some digital noise.

FIGURE 4.3  Noise-free image. Source: Copyright © 2019 Ethan Rogers.

Low-light performance Increases in sensor efficiency mean that digital cameras are getting better and better at shooting at higher and higher ISO or gain settings while producing less and less visible noise. Part of what makes certain digital cameras better than others is their low-light performance. This comes down to how much noise is visible in the image when shooting with a high ISO or gain setting.

Is grain/noise good or bad? There is some debate about whether grain/noise is a desirable thing or not, but I will summarise my understanding of the debate.

The digital noise that comes with boosting the image from a digital sensor can be ugly, cheap-looking and aesthetically undesirable. However, organic film grain can be desirable. Therefore:
digital noise = bad (almost always)
film grain = good (depending on personal preference)

Adding organic film grain in post While it is impossible to achieve the desirable film grain when shooting digitally, it is possible to artificially add organic film grain to an image in post-production. The strategy is: 1    Shoot digitally at the native ISO (or a low ISO) or a low gain setting, to achieve an image with no or minimal digital noise. 2    Add a commercially available high-quality scan of real organic film grain as an overlay. This is very easy to do with any NLE system. You can find free or paid-for film grain overlays online. Try searching for ‘film grain overlay’. This is a way of retaining this positive attribute of celluloid film (film grain) while using a modern digital camera.
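For those curious about what the NLE is doing when it applies a grain overlay, the sketch below is a rough, simplified illustration of an ‘overlay’ style blend applied to image data held as NumPy arrays. It is my own approximation of the idea, not the code of any particular editing package, and it assumes the frame and grain plate are float images in the 0.0–1.0 range.

```python
import numpy as np

def overlay_grain(frame, grain, opacity=0.4):
    """Blend a grain plate over a clean frame with an overlay-style blend, then mix by opacity."""
    blended = np.where(frame < 0.5,
                       2.0 * frame * grain,
                       1.0 - 2.0 * (1.0 - frame) * (1.0 - grain))
    return frame * (1.0 - opacity) + blended * opacity

# Synthetic stand-ins just to show the call; in practice the grain would be a
# high-quality scan of real film grain, loaded frame by frame.
frame = np.full((1080, 1920, 3), 0.5, dtype=np.float32)
grain = np.clip(np.random.default_rng(0).normal(0.5, 0.05, frame.shape), 0.0, 1.0).astype(np.float32)
print(overlay_grain(frame, grain).shape)
```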

Intentional and unintentional grain Grain can be thought of as either intentional or unintentional. Intentional grain would be if it were being used deliberately for aesthetic or storytelling reasons. Unintentional grain would be when grain is not being used by the filmmakers for an aesthetic reason, rather it is noticeable only during certain scenes that obviously contained less light than other scenes. During some scenes of a film or TV programme, it is apparent that the filmmakers have had to increase the ISO or gain for a certain scene or shot, as the visible noise suddenly increases. This is not desirable, but sometimes unavoidable if the ability to bring in external lights was not an option for the filmmakers due to time or budgetary constraints.

Noise reduction

Many NLE2 systems have built-in (or separately purchasable) noise-reduction applications. These work to smooth out visible grain. In my experience, some applications are better at this task than others. Noise reduction can easily start to make an image look artificial and plastic. Noise reduction can be used lightly in some situations without being overly noticeable, but it should not be used as an excuse to not capture a noise-free image in the first place.

FIGURE 4.4  Very noisy image and overly noise-reduced version.

Source: Copyright © 2019 Ethan Rogers.

Activities •    Watch any film or TV programme looking for visible grain. Is it detectable in certain kinds of films? Watch for unintentional moments of visible grain. Watch older films for comparison. Go onto imdb.com and look at the ‘technical specs’ page for a film you are interested in and see what (if any) film stocks were used.3 Also: •    Using your camera, determine how high you can set your ISO or gain before grain becomes noticeable. Create a logical ISO or gain test by capturing the same shot content in the same lighting conditions, but ranging through your ISO or gain settings from least to most sensitive. •    Critically observe in your ideal viewing conditions. (Tip: To help make the results of your experiment clearer, make specific note of the ISO or gain settings of each shot, as it will be very easy to lose track of which shot was using which setting. Maybe hold up a piece of paper with the settings written on it at the beginning of each shot.)

Grain rule recap Rule: Always try to use the native ISO of the camera or the lowest possible ISO or gain setting. Increase this as a last resort.

Notes 1    www.kodak.com/GB/en/motion/Products/Production/default.htm. 2    NLE systems such as Premiere Pro, Final Cut Pro X, Avid or Resolve. 3    You can find out which specific cameras, lenses, film stocks etc. were used on each film. For example; a look at the IMDb technical specifications page for Star Wars: The Force Awakens reveals that each of the four film stocks that Kodak currently sell were used on that film, in the 35mm as well as in 65mm variant (rarely used and for very high resolution IMAX images) – www.imdb.com/title/tt2488496/technical? ref_=tt_dt_spec.

5 MOTION BLUR

The rules for creating cinematic motion blur are: Rule: Always observe the 180° shutter-angle equivalent (1/48 at 24 fps or 1/50 at 25 fps). Rule: During moments of intense action, consider a reduced shutter-angle equivalent, such as a 45° angle (1/200). This chapter will look into what these rules mean and also offer activities to help cement your understanding of them.

Motion blur Pause any mainstream movie at a moment when there is movement in the frame, either from the camera or from an element within the image, and you will notice a significant amount of motion blur on those moving elements. This motion blur goes almost unnoticed by the casual viewer, but subconsciously we recognise that it is conforming to a film ‘look’ that we have seen in almost every cinematic film we have ever watched. Look at this image from the film Fish Tank in Figure 5.1. Notice that most of the image is in sharp focus, but the character’s arms, which are moving quickly as she dances, are blurry. The camera is in focus here, as we can see the other details sharply. What we can see on the arms is motion blur caused by the camera’s shutter speed, which is too slow to capture the movement without it blurring.

FIGURE 5.1  Frame from Fish Tank 2009. Directed by Andrea Arnold. Motion blur is caused when objects move within a frame, while light is being let into a movie camera to expose the celluloid or digital sensor. That moving object leaves a streaky trace of blurry motion (motion blur) on that frame, relative to the amount of movement within the frame, over the duration of the time the shutter was open for. The longer the shutter is open for each frame, the more motion blur is evident on each frame. Understanding and replicating this motion blur and how it contributes to the film ‘look’ is the main focus of this chapter. The movie camera captures each frame using a fixed shutter speed that produces a consistent amount of motion blur. This is the same amount of motion blur on moving objects and camera movement, from shot to shot and film to film, from the French New Wave to the Marvel Cinematic Universe.

Motion blur in animation

This filmic amount of motion blur is so indelible that it is even added to films that have not used a camera in their creation such as computer-animated films like Toy Story. In practical terms, adding motion blur to an animation modelled inside a computer is as easy as changing a few settings. However, motion blur is even added to traditional stop-motion animation. This is something that physically was not present during the taking of each of the individual still images that play together to create the illusion of movement in the stop-motion process. This motion blur was added to certain parts of the stop-motion film Wallace and Gromit: The Wrong Trousers (1993) using computer software that analyses the movement of pixels from frame to frame and artificially creates motion blur intelligently. Notice the motion blur evident in the background and foreground objects in Figure 5.2. The motion blur here is not possible to achieve with stop-motion alone and was presumably added later using post-production software. Various types of animators know how important it is to add motion blur to create traditionally filmic-looking motion in their work, so as creators of video, we should ascribe importance to it as well. In order to replicate that aspect of the film look, we need to know exactly how to control the motion blur and how much to add.

Shutter speed Shutter speed is the primary method for controlling the motion blur of your video image. In combination with frame rate, it is one of the most important factors to get right when trying to replicate the look of motion-picture film on video.

FIGURE 5.2  Frame from The Wrong Trousers 1993. Directed by Nick Park. In stills photography, changing the shutter speed from one shot to the next is a perfectly valid way of controlling the exposure of an image. In stills photography, there is no standard shutter speed. It changes depending on the context, and generally speaking motion blur is seen as a negative in stills photography. Because the still image does not move, we have time to study that one image inspecting it for its visual qualities, of which focus (or sharpness) is one. Any motion blur evident in the still image is usually considered to be a negative thing, unless used for a specific effect. The long exposure in Figure 5.3 deliberately uses a very long shutter speed to create a blurry and streaky effect. Notice how all the lights from the fairground ride have blurred together as the ride spins while the shutter remains open. This is in contrast to motion-picture film, where we are viewing a relentless series of images playing one after another. We don’t view any single image in isolation, rather we consider the motion ‘effect’ that is comprised of a series of

images, creating the ‘persistence of motion’: an illusion of motion in our minds. In actual fact, the motion blur helps to ‘smooth out’ the relatively low frame rate of 24 fps by blurring the differences between fast-moving objects.

FIGURE 5.3  Long exposure. Source: Photo by Alen Rojnić on Unsplash.

We filmmakers working in the video medium are concerned primarily with achieving a specific amount of motion blur (the ‘correct’ amount) so the shutter speed should not be used to control the exposure.

180° shutter angle Some questions to consider: Why is there a specific amount of motion blur on the images created by the movie camera? What is that amount and how is it measured? Let us first look at the difference between frame rate and shutter speed, as the two are intimately related, but separate things. Frame rate is a measurement of the number of frames (individual images) that pass through the motion-picture camera per second. We already know that we should choose a frame rate of 24 fps (frames per second) on our digital camera to replicate the look of film (see Chapter 2 Frame rate).

Shutter speed refers to the amount of time that each frame is exposed to the light from the lens. The longer the shutter speed is, the more light will reach the film frame and the brighter the resulting image will be. The shutter speed also has a big impact on the amount of motion blur within an image. For video to replicate the ‘look’ of film, it has to have exactly the same amount of motion blur as the movie camera produces, which means having the same shutter speed as the movie camera. How does the shutter work? The movie camera uses a rotating disc shutter (a circle, 360°). The disc shutter spins around and around continuously, letting light in and blocking it off in a continuous cyclical pattern. Half of the disc shutter is opaque (letting no light through) and the other half is not there (letting light through). This spinning disc shutter is referred to by its angle, as a portion of its potential maximum value in degrees. The standard movie camera shutter angle (producing the ‘correct’ amount of motion blur) is a 180° shutter angle, taking its name from the transparent (or missing) half of the spinning disc shutter. The 180° of the shutter that is opaque is physically necessary to block light from the celluloid while the camera advances the film onto the next frame. To put it another way, half of each frame is exposed, and half of each frame is in darkness while the mechanism of the movie camera advances the reel of film onto the next still image. This is a physical limitation of the movie camera, something that was developed for practical reasons. This process happens 24 times a second. Even though this time is not physically needed in the digital camera (there is no film to advance onto the next frame), it looks ‘odd’ to choose another shutter speed. This 180° shutter angle produces the amount of motion blur that we are used to seeing in movies. Anything else looks wrong.

FIGURE 5.4  180° shutter angle. Source: Copyright © 2019 Sam Pratt.

In the world of digital cameras there is no spinning disc shutter (although some higher-end camcorders can display shutter-angle equivalents); rather, the shutter speed is measured in fractions of a second. 1/24 of a second is the full duration of each frame (at 24 fps). We know that half of each frame is in darkness and half is exposed. This means that the 180° shutter equivalent on a digital camera is 1/48 of a second:

1 frame = 1/24 of a second, which is comprised of:
1/48 exposed
1/48 darkness
resulting in the ‘correct’ filmic amount of motion blur.

At 24 fps (the movie camera), the 180° shutter equivalent is 1/48.
At 25 fps (PAL video), the 180° shutter equivalent is 1/50.
At 30 fps (NTSC video), the 180° shutter equivalent is 1/60.

Digital cameras have two kinds of shutter: mechanical and electronic. DSLRs have a mechanical shutter that can be used when taking stills. The tactile ‘click’ feeling when taking a still image is the mechanical shutter working. It comprises a series of blades that physically open and close, letting light reach the sensor for an amount of time as specified by setting the shutter speed. Camcorders and DSLRs in movie mode use an electronic shutter, in which nothing physically happens. Rather, the sensor charges and discharges, passing the resulting image onto the image processor in the camera for a specific duration (the shutter speed).
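The shutter-angle arithmetic above is easy to check for yourself. The sketch below (a simple illustration of the relationship, not any camera’s menu logic) converts a shutter angle and frame rate into the equivalent shutter speed.

```python
def shutter_time_for_angle(frame_rate, shutter_angle=180):
    """Exposure time in seconds for a given frame rate and shutter angle."""
    return (shutter_angle / 360.0) / frame_rate

for fps in (24, 25, 30):
    print(f"{fps} fps at 180 degrees -> 1/{round(1 / shutter_time_for_angle(fps))} of a second")
# 24 fps -> 1/48, 25 fps -> 1/50, 30 fps -> 1/60

# The 45-degree angle discussed later works out at 1/192 at 24 fps, which a
# camera would round to its nearest available setting, such as 1/200.
print(round(1 / shutter_time_for_angle(24, 45)))   # 192
```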

FIGURE 5.5  Image taken at 1/48 shutter and 1/24 shutter. Source: Copyright © 2019 Ethan Rogers.

Because there is no moving film in the digital camera (just a sensor that charges and discharges), it is possible to use a shutter speed longer than is physically possible in the movie camera. For example, it is possible to have a 1/24 shutter speed in which the entire duration of the frame is exposed to light, but the image is twice as bright (one stop brighter) and the motion blur is twice as noticeable.

Some peculiarities: some digital cameras in the UK will not shoot at 24 fps, in which case the standard PAL frame rate of 25 fps is close enough. Alternatively, some DSLRs will shoot at a frame rate of 24 fps, but only offer a 1/50 of a second shutter, which is so close to 1/48 that the difference is not noticeable.

45° shutter angle

FIGURE 5.6  45° shutter angle. Source: Copyright © 2019 Sam Pratt.

There are memorable uses of a faster than ‘normal’ shutter speed in certain mainstream movies, but only in special circumstances. For example, in Steven Spielberg’s Saving Private Ryan, cinematographer Janusz Kaminski uses a faster than normal shutter speed during the opening scene on the beaches of Normandy. This use of a faster shutter speed has the effect of reducing the motion blur apparent in the image. This results in movement that looks less smooth than the standard 180° shutter angle. It was used to great effect in the opening scene of Saving Private Ryan, where the filmmakers were trying to make the audience empathise with the fear that the soldiers would have felt at that time. The shaky handheld camerawork and the reduced motion blur produce a jumpy, jittery image, designed to simulate the state of mind of the soldiers. In practical terms, the filmmakers used a narrower shutter angle, letting less light into each frame. This was probably a 45° shutter angle (a 45° wedge cut out of the 360° spinning shutter), which has a shutter speed equivalent of roughly 1/200 of a second. Notice how the particles of dirt from an explosion in Figure 5.7 do not have much noticeable motion blur on them. Had they been captured at a regular 180° shutter angle, these pieces of soil would be streaked almost to the point where they were hard to make out. The fast 45° shutter angle captures each frame with less motion blur, creating a more intense and stuttery moving image. This technique is often used in films and TV programmes during moments of intense action.

Activities •    Devise a shot that utilises movement, either with camera movement or movement within the frame (or both).

FIGURE 5.7  Frame from Saving Private Ryan 1998. Directed by Steven Spielberg. •    Create this shot using 24 fps and a 180° shutter-angle equivalent (1/48). •    Create the exact same shot using 24 fps and a 45° shutter-angle equivalent (1/200). (Note that the image will be significantly darker with the shorter shutter speed. This will need compensating for using the iris/aperture and/or an ND (neutral density) filter.) •    Watch both shots back on a large screen and notice the difference in effect between the normal amount of motion blur and the reduced amount of motion blur. Also: •    Watch films and TV programmes, paying attention to motion blur on movement. Pause a film at various places to notice the motion blur. It is strange how you never noticed it before, but it becomes very apparent when you pause a scene in a film at a number of points and then re-watch the scene, looking for the motion blur. •    An example of this technique can be found during one of the first scenes of zombie attack in the film 28 Weeks Later (directed by Juan Carlos Fresnadillo, 2007).

Motion blur rules recap Rule: Always observe the 180° shutter-angle equivalent (1/48 at 24 fps or 1/50 at 25 fps). Rule: During moments of intense action, consider a reduced shutter-angle equivalent, such as a 45° angle (1/200).

6 DEPTH OF FIELD

The rule for creating cinematic depth of field is: Rule: Use the aperture/iris in conjunction with ND filters to control the depth of field and exposure level of the video image. This chapter will look into what this rule means and also offer activities to help cement your understanding of it.

Depth of field Depth of field is an important concept in photography and filmmaking. The term refers to the amount of an image that is in focus. Depending on how you set the camera, you can create an image where everything is in sharp focus, or where only a very narrow band is in focus, with the foreground and background out of focus. A shot where everything is in focus is said to exhibit deep focus, see Figure 6.1. A shot where only a narrow band is in focus is said to have shallow depth of field, see Figure 6.2. Shallow depth of field is used most effectively in movies where it aids the storytelling by drawing the audience’s attention to specific elements within the frame, to make a subject stand out from the background and foreground. Rather than letting an audience look anywhere in the frame, a filmmaker can direct them towards certain details using a shallow depth of field.

FIGURE 6.1  Deep focus. Source: Copyright © 2019 Jonathan Kemp.

FIGURE 6.2  Shallow depth of field. Source: Copyright © 2019 Jonathan Kemp.

A shallow depth of field is noticeable in many cinematic films and can be used as a quick way of creating a filmic quality in video images. However, one of the most celebrated movies of all time, Citizen Kane, is praised for its deep focus. Depth of field is a tool for a filmmaker to use with consideration.1

Shallow depth of field is something that is very easy to achieve using a 35mm movie camera. When cinematographer Gregg Toland created the deep-focus images in Citizen Kane, it took considerable effort to achieve the deep-focus look. It was not a happy accident. It was created deliberately using a lot of light on set, which enabled a high f/stop value, which, in turn, produces a small aperture size. This deep-focus look was desired because it would give the audience the ability to inspect different areas of the frame, looking for details relevant to the story.

FIGURE 6.3  Frame from Citizen Kane 1941. Directed by Orson Welles. Because shallow depth of field is easy to achieve with a 35mm camera, it is apparent to some degree in possibly every cinematic film ever made. The issue we have using modern digital cameras is that we might have the opposite problem. Depending on the camera we are using (camcorders, phone cameras and action cams), it might be difficult for us to achieve a shallow depth of field. A lot of digital cameras are created (intentionally or not) to achieve a deepfocus look more naturally. So when we are working with whatever kind of camera we have access to, we need to know how to maximise the potential to achieve shallow depth of field when we require it.

Sensor size

There is a general principle that the larger the physical sensor size, the easier it is to achieve a shallow depth of field. The Super 35 celluloid image plane (the area of exposed film) is 24.89mm × 14mm. The digital sensor size of a modern professional camcorder might be 4.8mm × 3.6mm (1/3 inch), depending on the camera. The sensor size of even an entry-level Canon DSLR is 22.3mm × 12.5mm (APS-C). It is harder to achieve a shallow depth of field using the 1/3 inch sensor that might be found on a professional camcorder as opposed to the entry-level DSLR, which has a sensor more comparable with 35mm motion-picture film.

FIGURE 6.4  Sensor size comparison – 35mm, 1/2.8 inch sensor, APS-C, full frame. Actual size.

Source: Copyright © 2019 Sam Pratt.

The physical size of even the entry-level DSLR’s sensor is much closer to the Super 35 celluloid film frame and so will produce a shallow depth of field image almost as easily as a 35mm camera. The DSLR filmmaking revolution happened (around 2008–2012) because 35mm digital stills cameras, many of which had a sensor size of 36mm × 20.3mm (full frame), were being used to record video. This was actually larger than the effective area of 35mm motion-picture film (because of the vertical orientation of movie-camera film), meaning that a shallow depth of field look was even easier to achieve on a full-frame DSLR than on a movie camera. Camera manufacturers realised that there was a market for video cameras that replicated the sensor size of celluloid 35mm film. This led to cameras that are marketed as ‘large sensor’ video cameras, usually referencing their ‘S35mm sized sensor’ as a selling point. It was precisely the improved and easy access to shallow depth of field images that these large-sensor camcorders offered, and that filmmakers wanted. In my view, it is this single development that has most significantly led to the explosion of inexpensive cinematic images being produced on a large scale: the availability of large-sensor cameras. More will be said about this subject in Chapter 10 Image plane.
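To put those sensor dimensions side by side, the sketch below compares their diagonals and expresses each as a ‘crop factor’ relative to Super 35. The millimetre figures are the ones quoted in this chapter; the comparison itself is my own illustration.

```python
import math

# Width x height in millimetres, as stated in the text.
sensors = {
    "Super 35 film":                (24.89, 14.0),
    "1/3-inch camcorder":           (4.8, 3.6),
    "APS-C DSLR (video area)":      (22.3, 12.5),
    "Full-frame DSLR (video area)": (36.0, 20.3),
}

reference = math.hypot(*sensors["Super 35 film"])
for name, (width, height) in sensors.items():
    diagonal = math.hypot(width, height)
    print(f"{name:30s} diagonal {diagonal:5.1f}mm, crop factor vs Super 35: {reference / diagonal:.2f}")
```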

Aperture The sensor size of the camera you are using will have a big effect on your ability to achieve a shallow depth of field. However, it is possible to achieve a relatively shallow depth of field using a smaller sensor camera (although it takes more work). It is also possible to achieve a deep-focus look on a large-sensor camera.

The factor that controls the depth of field is the aperture/iris. The general principle is that the wider the aperture is, the shallower the depth of field will be. Conversely, the smaller the aperture is, the deeper the focus will be. It is a simple concept, shown in Figure 6.5. Just notice the f/stop numbers. Some people get confused and think that the higher number should be a wider aperture,

which is not the case. The rule is: the lower the f/stop number, the wider the aperture or iris will be. It is possible to get relatively inexpensive lenses for a DSLR that open up to something like f/1.4. This is a nice fast aperture. Possibly the fastest lenses ever made were the Carl Zeiss Planar 50mm, which opened up to f/0.7. This lens was used by NASA for space photography and was also used by Stanley Kubrick in his film Barry Lyndon. Kubrick used these very fast lenses to enable him to shoot using only candlelight.2 Notice how shallow the depth of field is in Figure 6.6. The candelabra (which is lighting the scene) is out of focus and the actors are in focus. The background is also very out of focus. The filmmakers worked to a principle of getting the eyes and the mouths in focus, as this is where the audience will be concentrating. Getting anything else in focus was not possible with this lens and aperture.

FIGURE 6.5  Depth of field chart. Source: Photographs copyright © 2019 Ethan Rogers; graphics copyright © 2018 Sam Pratt.

Your hands are tied At this point, you may be noticing a problem with using the aperture/iris to control the depth of field as well as the exposure. If you have not noticed the potential problem, I’ll try to illustrate it.

FIGURE 6.6  Frame from Barry Lyndon 1975. Directed by Stanley Kubrick. We know from previous chapters that we want our image to be noise free. That means that we want to be working at the camera’s native ISO (or close to 100 ISO) or 0dB gain. This element of the exposure triangle is as dark as it can be, only to be increased as a last resort. We know that the shutter speed is to be set at the 180° shutter angle equivalent. At 24 fps, this is 1/48 of a second. To change this would be to change the motion blur and the perception of motion in the video. The only element of the exposure triangle that we can change freely then is the aperture/iris. We have to bear in mind, though, that this also controls the depth of field. Let us say then that we are filming outside on a bright day. Our ISO/gain is set to ISO 100/0dB (for a grain-free image) and our shutter is set to 180° shutter angle (for regular cinematic motion blur). We need to stop down the aperture/iris to f/8 to get a good exposure on our subject and avoid overexposure. But we also want a shallow depth of field look, so we need to open the aperture/iris up. This, however, would drastically overexpose the image.

We cannot reduce the ISO/gain at all (it is as low as it will go) and we do not want to increase the shutter speed (because we do not want the Saving Private Ryan or action-scene staccato motion). Overexposing the image is unacceptable, so our only option is to forget about our vision for a shallow depth of field for this shot. Or is it? Is there anything else we could do?

ND filter The answer to this comes from another piece of kit found in the cinematographer’s tool bag: the neutral density (ND) filter. The ND filter can be thought of as sunglasses for the camera. The filter reduces the intensity of light by a stop or by a number of stops, but it does not alter the colour of the image, hence the ‘neutral’ part of the name. ND filters are rated as shown in Table 6.1. ND filters come in a variety of different forms and sizes. There are fixed ND filters that can be stacked together or swapped for another strength of ND. There are also variable ND filters that can be adjusted in strength. There is a general opinion that fixed ND filters give better results than variable ND filters, but many respected people prefer the versatility (and cost saving) of the variable ND filters. On a movie camera, the ND filter is usually a rectangle of glass that is placed inside a matte box at the front of the lens. The matte box is designed to reduce lens flares from the sun or other light sources shining towards the lens. The matte box can also house a number of different filters, which would typically be coloured filters or ND filters. TABLE 6.1  ND filter stop reduction chart

FIGURE 6.7  ND filter. Source: Copyright © Robert Emperley, https://en.wikipedia.org/wiki/Landscape_photography#/media/File:Neutral_density_filter_demonstration.jpg.

DSLR camera lenses often have a thread around the outside of the front that is designed to attach various filters to. ND and variable ND filters can be screwed onto the end of many lenses. Not all lens sizes are the same though, so care must be taken when selecting an ND filter that will attach to all the lenses you have, even if using filter adaptor rings. Some camcorder-style cameras have internal inbuilt ND filters. These can be selected with the flick of a switch on the side of the camera and are usually fixed rather than variable. This is a particularly useful feature and it is well worth looking for a camera that has it built in, as the cinematographer does not have to concern themselves with attaching the ND filter to the front of the lens. So, going back to our problem: our ISO/gain and shutter speed are fixed, it is a bright day and we want to achieve a shallow depth of field. We can use an ND filter, which will make the image darker. We can then open up the aperture/iris, achieving a shallow depth of field while maintaining the correct exposure.

In summary, using an ND filter as well as the aperture/iris, we can control the depth of field as well as the exposure level.
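To put a number on the example above, the gap between the exposure you have and the aperture you want can be counted in stops, and the ND filter simply has to absorb that many stops. A minimal sketch, assuming the f/8 starting point from a scenario like the one above, a desired f/2, and the common ND labelling conventions (optical density of 0.3 per stop, or a doubling filter factor per stop):

```python
import math

def stops_between(f_closed, f_open):
    """Extra stops of light let in when opening the iris
    from f_closed (e.g. f/8) to f_open (e.g. f/2)."""
    return 2 * math.log2(f_closed / f_open)

# Correct exposure was f/8; we want to shoot at f/2 for shallow focus.
extra_stops = stops_between(8, 2)          # 4 stops too much light

# ND filters are commonly labelled by optical density (0.3 per stop)
# or by filter factor (2x per stop).
density = 0.3 * extra_stops                # ND 1.2
factor = 2 ** extra_stops                  # ND16 (passes 1/16 of the light)

print(f"Open up by {extra_stops:.0f} stops -> use ND {density:.1f} (ND{factor:.0f})")
```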

Deep focus The deep-focus look used in Citizen Kane and elsewhere can be created using the same aperture control principles. To achieve a deep-focus look, the aperture needs to be very small: a larger f/stop number, which produces deeper focus. The only issue with this is that the image will become very dark. The solution is to introduce a lot of external light to the set. The sets on those deep-focus shots in Citizen Kane must have been very bright indeed.

Activities
•    Identify the sensor size of the camera or cameras that you have access to.
•    Identify how fast or slow each lens you have is (if you have more than one). Note the f/stop values.
•    Research what ND filters are available for your camera and how they attach to the lens.
•    Create a camera test where you shoot progressively through each marked f/stop on your lens (or on each lens if you have more than one). (Tip: For the purposes of the test, you can change the ISO/gain and shutter speed if you do not have access to an ND filter, but be aware that normally you would fix the ISO/gain and shutter.) You should try to achieve:
•    a shallow depth of field image; and
•    a deep-focus image.
View your shallow depth of field and deep-focus images back in your preferred viewing environment.

Depth of field rule recap

Rule: Use the aperture/iris in conjunction with ND filters to control the depth of field in the video image.

Notes 1    You have to find your own way with this, but consider not having wall-to-wall shallow depth of field, just because you know how to do it. 2    Kubrick shot Barry Lyndon in 1975 when film stock was slower than it is today. He used Eastman 100T film stock. It is perfectly possible to film in candlelight today using a digital camera (which will perform better in low light than celluloid ever did) as long as you use a reasonably fast lens and have a camera that can shoot at a higher ISO without producing too much noise.

7 COLOUR

The fundamental rule for achieving an image with realistic colours is: Rule: Always perform a custom white balance for every lighting change. The rest of this chapter will look into why we should observe this rule and look at some other colour considerations.

Colour in two parts Part one of the colour chapter will concentrate on working with colour in a practical in-camera way. These are practical concerns that you definitely need to know to make good-looking images. Part two of the colour chapter is an introduction to more of the in-depth technical considerations that will help you understand how digital cameras record colour. This content is important for understanding the forthcoming chapters, particularly Chapter 14 Codecs.

Colour part one Warm and cool – colour temperature

All light is coloured. The colour differs depending on its source. For example, indoor light that comes from lightbulbs is more orange in colour and natural daylight is more blue. Orange light can be described as warm and blue light can be described as cool. For a comparison of a warm and a cool image, please see Plate 2. There is also variation within these two broad colours of light; for example, light from candlelight is more red than orange. The colour of light is rated on what is known as the Kelvin scale: colour is described in terms of colour temperature and rated in degrees Kelvin. For a Kelvin scale image, please see Plate 3. The most common film and photography extremes in terms of colour temperature are daylight (5600 K) and tungsten (3200 K). Daylight is comparatively cool (or blue) and tungsten is comparatively warm (or orange). Memorise these two colour temperature values. These will crop up a lot.

Colour temperature of film lights Film lights come in one of these two colour temperatures. Some modern lights also have a helpful option to blend between the two primary options. Lightbulbs that you come into contact with in day-to-day life have a variety of different colour temperatures, but in the realm of film lighting, daylight and tungsten are the main two. Accuracy of the colour of light produced is a particular selling point of specialist film lights, which, in part, contributes to their often increased cost.

White is white Our eyes are pretty good at automatically adjusting to the colour of light, to the point where you might never have noticed these variations. Celluloid film stocks cannot automatically adjust to differences in light colour, so these differences can be very noticeable on film footage. Video cameras can automatically adjust how they perceive colour, but it is not a good idea to let the camera make that decision for you. In manual mode you are in control of where the camera sets its white value on the Kelvin scale. A white piece of paper might appear to be white, but it will reflect the colour of the light source that illuminates it. The camera is designed to replicate the colours faithfully, so that colour bias will be noticeable. In order to have a neutral film or video image, the film stock or video camera needs to be able to compensate for differences in lighting colour temperature and offset the colour. This offsetting essentially shifts the colour of the image in the opposite direction on the Kelvin scale. For example:
•    In warm tungsten lighting conditions (a scene lit with tungsten bulbs), the image should be shifted towards the blue end of the spectrum in order to keep white objects appearing white.
•    In cool daylight lighting conditions (outside on a cloudy day), the image should be shifted towards the orange end of the spectrum in order to keep white objects appearing white.

Daylight and tungsten celluloid In traditional celluloid photography and cinematography, film stocks are balanced to counteract the differences in light colour temperature, with only two variations: daylight (5600 K) or tungsten (3200 K). With celluloid cinematography, everything must be made to match whichever of these two film stock options is being used. These are represented on the film stock labels with the letters D (daylight) or T (tungsten). The cinematographer must consider the colour temperature of all the lights that they are using. If the lighting is intended to be neutral in colour, then all sources of light must conform to the same colour temperature.

White balance

While there are only two colour balance options for celluloid cinematography, the video camera has much greater variation. Video cameras usually have a number of ways to calibrate the colour sensitivity of the camera to any consistent lighting condition. If the colour temperatures of the lights within a scene are significantly different, then the difference will be apparent, because the camera can only be set for one specific colour value on the Kelvin scale.

Automatic white balance (AWB) Sometimes displayed as AWB, the automatic white balance option is to be avoided. White balance can change during a shot, which is really not acceptable. Stick to one of the other manual modes where you are controlling the colour balance.

Custom white balance The custom white balance option involves taking a white piece of paper or 18% grey card, placing the paper or card in the place where you will be filming, filling the frame by either zooming or moving closer and setting the white balance using the paper or card as a reference. Different cameras have different procedures for doing this, but most involve pressing a button at the point when the screen only contains the paper or card. This needs to be performed in the desired lighting conditions, and with every lighting change to remain accurate. White balancing a camera outside and then moving indoors would be no good. The lighting situation will have completely changed. Depending on the camera, this procedure can be tricky for some students. When using a camcorder, I have noticed some students getting confused about whether they have set the white balance or not. The colour seems to have changed, so they think that they have set the white balance. Sometimes what has happened is that they have only changed the camera from automatic mode into manual mode. They are in manual mode, but set at whatever the last person set it to, or maybe the factory default colour temperature. Be sure that you fully understand the procedure for custom white balance, as this is the most useful option in a lot of cases.
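Under the hood, a custom white balance amounts to measuring the red, green and blue values coming off the reference card and scaling the three channels so that the card comes out neutral. The sketch below is a simplified illustration of that idea only; real cameras work on raw sensor data and apply more sophisticated processing, and the sample values here are invented.

```python
import numpy as np

def white_balance_gains(card_rgb):
    """Per-channel gains that make the reference card neutral grey.
    card_rgb -- average (R, G, B) measured off the white/grey card."""
    card = np.asarray(card_rgb, dtype=float)
    return card.mean() / card              # scale each channel towards the mean

def apply_white_balance(image, gains):
    """Apply the gains to an RGB image (H x W x 3, values 0-255)."""
    balanced = image.astype(float) * gains
    return np.clip(balanced, 0, 255).astype(np.uint8)

# A grey card shot under warm tungsten light reads too red and not blue enough.
card_under_tungsten = (200, 170, 120)      # invented sample values
gains = white_balance_gains(card_under_tungsten)
print("Channel gains (R, G, B):", np.round(gains, 2))

# Applying the same gains to the whole frame shifts it back towards neutral.
frame = np.full((2, 2, 3), card_under_tungsten, dtype=np.uint8)
print(apply_white_balance(frame, gains))
```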

White balance preset Some cameras (DSLRs) have presets for general lighting conditions. For example, indoor light, outdoor light on a cloudy day, direct sunlight etc. They usually have an indication of where each preset falls on the Kelvin scale also. Usually the most appropriate preset for your lighting condition is good enough, but let your eye be your guide.

Kelvin scale Some cameras let you dial in the precise colour temperature you want, by variations of 100 degrees Kelvin. If your camera has this option, it is worth spending some time cycling through from one extreme to the other and observing the differences. Your eye is your guide here, so I would recommend that you experiment with this option, but consider relying on the custom white balance until you are more experienced with setting white balance.

Coloured gels There are a number of colour-correction filters that can be used to alter the colour of film lights.

CTO – colour temperature orange The CTO filter is accurately designed to convert daylight-coloured light (5600 K) into tungsten-coloured light (3200 K). This can be useful to unify the colour temperature of lights used if you are working with a mixed collection of lights.

CTB – colour temperature blue The CTB filter is accurately designed to convert tungsten-coloured light (3200 K) into daylight-coloured light (5600 K). This can be useful to unify the colour temperature of lights used if you are working with a mixed collection of lights. Each of these filters will also reduce the intensity of the lights affected, making the lights appear darker.
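Gel strengths are often quoted in mireds (micro reciprocal degrees), where the mired value is 1,000,000 divided by the colour temperature in kelvin. The sketch below simply works through that arithmetic for the 5600 K to 3200 K conversion described above; it is an illustration, not a gel manufacturer's specification.

```python
def mired(kelvin):
    """Convert a colour temperature in kelvin to mireds."""
    return 1_000_000 / kelvin

def mired_shift(from_k, to_k):
    """Mired shift a gel must provide to move from one temperature to another."""
    return mired(to_k) - mired(from_k)

# A full CTO converts daylight (5600 K) to tungsten (3200 K).
print(f"Full CTO shift: {mired_shift(5600, 3200):+.0f} mireds")
# A full CTB goes the other way, tungsten to daylight.
print(f"Full CTB shift: {mired_shift(3200, 5600):+.0f} mireds")
```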

Creative colour balance Most of the time your preoccupation will be to capture a neutrally coloured image; however, there may be times when you do not want to remain neutral. In the example from Plate 4, a still taken from the film Fargo (1996), there is a deliberate disparity between the light colour temperatures of the indoor and outdoor light. This is a conscious choice on the part of the filmmakers, who wish to use colour temperature as a storytelling tool for the visual effect it creates. They are telling us that it is cold outside and warm inside. The cold and the snow are an integral part of the story and this is being reinforced here. This can be done ‘in camera’ using gels and different coloured lights etc., but this kind of work is also increasingly done in post-production, during the colour-grading phase.

Colour part two Primary colours of light White light is composed of every colour. When broken down using a prism, white light can be seen to contain every colour of the rainbow. These colours can be created using only the three primary colours: red, green and blue. Different amounts of each work together to create all the colours in between. This is called an additive colour system: •    When red light and blue light are combined, the result is magenta light. •    When green light and red light are combined, the result is yellow light. •    When blue light and green light are combined, the result is cyan light.

•    When red, green and blue light are combined, the result is white light.

RGB colour space All video images are made up of the three primary colours of light: red, green and blue (RGB). This is referred to as the RGB colour space. This means that the colours in this colour space are comprised of only these three colours. On any given video frame, there are three complete images: one for the red channel, one for the green channel and one for the blue channel. See Plate 5 for an example of a three channel RGB image. When combined, these images create a full colour image.

Luma and chroma A way of quantifying the properties of a digital image is to identify, record and manipulate the luminosity and chrominance values of a video image. These are often abbreviated to luma and chroma. Luma describes the brightness values of an image: the black and white part of an image. Chroma describes the colour values of an image. This way of quantifying an image is relevant to the next colour space.

YUV colour space The YUV colour space is another popular video colour space.1 This method records the luma and chroma information separately. This is in contrast to the RGB colour space, in which the brightness information is carried within each of the three colour channels. Here, the Y represents the luma channel and the U and V represent the two colour-difference channels. A YUV image is broken up as shown in Plate 6. When combined, the channels create a full colour image.

This approach to colour space originated in the world of TV broadcasting. When colour televisions were introduced commercially, replacing the older black and white televisions, a system was needed that would still provide a black and white image for the older sets, but with the additional option of being able to display colour on the new colour television sets. Using the YUV colour space, the older black and white sets would only register the luma (Y), ignoring the U and V, but the colour sets could interpret the Y as well as the U and V, providing a full colour image. YUV also has a place in modern digital filmmaking. A camera may take the RGB image from the imaging sensor (or sensors) and convert it to a YUV colour space to create the video file. The reason that digital cameras use the YUV colour space to create their image is that it provides a more efficient way of compressing (saving space) the colour within a video image. This system is used by almost all digital video cameras, so it is important to understand. See Chapter 14 Codecs for more about this.
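As an illustration of how luma and chroma are separated, the sketch below converts an RGB value into Y, U and V components using the widely published BT.601-style weightings; actual cameras and codecs vary in the exact coefficients and ranges they use.

```python
def rgb_to_yuv(r, g, b):
    """Convert normalised RGB (0-1) to luma (Y) and two colour-difference
    channels (U, V) using BT.601-style weightings."""
    y = 0.299 * r + 0.587 * g + 0.114 * b    # brightness: green weighted heaviest
    u = 0.492 * (b - y)                      # blue minus luma
    v = 0.877 * (r - y)                      # red minus luma
    return y, u, v

# A pure grey pixel has zero chroma: U and V both come out as 0.
print(rgb_to_yuv(0.5, 0.5, 0.5))
# A saturated red pixel has low luma and strong colour-difference values.
print(rgb_to_yuv(1.0, 0.0, 0.0))
```

Notice that the green channel contributes most to the luma, which ties in with our eyes being most sensitive to green.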

Contrast Contrast is the difference between the brightest and the darkest parts of an image. This is often expressed as a ratio, for example 10:1 or 100:1. A low-contrast image will look flat and grey when compared with a high-contrast image (see Figure 7.1a). The high-contrast image of Figure 7.1b has a bigger difference between the brightest and darkest parts, in a way that stands out and is pleasing to the eye.

FIGURE 7.1  Low-contrast image, high-contrast image. Source: Copyright © 2019 Jonathan Kemp.

Hue The word hue refers to the specific colour value used.

Saturation The word saturation refers to the amount of that hue that is being used. For example, a highly saturated image will look very vibrant and colourful, whereas a less saturated image will look greyer. A completely desaturated image is a black and white image.

Sharpness Sharpness is a tool that improves the appearance of the definition of an image. A soft or out-of-focus image can be improved with a little sharpening. The sharpening tool effectively draws a little black line around every identifiable edge. The effect can be very useful if used appropriately, but it can be overused, making for very harsh-looking images. Sharpening can be applied in camera or in post-production, but I would recommend turning the sharpening off in the camera and adding it in post-production if needed. The logic is that the sharpening effect cannot be undone once it has been recorded onto the image, whereas in post-production it can be adjusted or even removed if not needed.
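Most sharpening, in camera or in post, is some variant of an unsharp mask: a blurred copy of the image is subtracted from the original and the difference (the edges) is added back, exaggerated. The sketch below is a minimal illustration of that idea using SciPy's Gaussian blur; the radius and amount values are arbitrary examples, not settings from any particular camera.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=1.5, amount=0.5):
    """Sharpen a greyscale image (2D array, values 0-255) with an unsharp mask.
    radius -- blur radius in pixels; amount -- strength of the effect."""
    img = image.astype(float)
    blurred = gaussian_filter(img, sigma=radius)
    detail = img - blurred                 # high-frequency 'edge' information
    sharpened = img + amount * detail      # add the edges back, exaggerated
    return np.clip(sharpened, 0, 255).astype(np.uint8)

# A soft horizontal brightness ramp becomes more pronounced after sharpening.
soft_edge = np.tile(np.linspace(60, 200, 16), (16, 1))
print(unsharp_mask(soft_edge, radius=2, amount=1.0)[0])
```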

Visible colour spectrum See Plate 7 for the CIE visible colour spectrum. In 1931 the International Commission on Illumination (CIE) defined the CIE 1931 colour space. This colour space is a visual representation of the limits of human colour perception: our native human colour space, if you will. Notice that there is a large amount of green in this colour space. This shows that our eyes are more receptive to the colour green, probably because the majority of the natural world is green and we are designed to live in natural surroundings. See Figure 7.2 for a representation of where the Kelvin scale sits in the CIE colour space.

Colour gamut Cameras cannot observe or record the full range of colours that we can see. Their colour reproduction is limited compared to our ocular system.

FIGURE 7.2  Planckian locus. Source: https://en.wikipedia.org/wiki/Color_temperature#/media/File:PlanckianLocus.png.

The range of colours that a particular system or technology can reproduce is called its colour gamut. The colour gamut of film and video exists within the CIE visible colour spectrum. The colour gamut of film and video differs from system to system. Cameras, screens and monitors all have specific colour gamuts.

REC709 REC709 is the standard colour gamut for standard definition (SD) and high definition (HD) video content. Every TV channel, YouTube video, DVD and HD Blu-ray uses the REC709 gamut. The range of colours possible in the REC709 system is smaller than the range of colours visible in celluloid film.

REC2020 REC2020 is a larger colour gamut than REC709. The REC2020 standard is the new gamut for Ultra High Definition (UHD) and beyond. The standard can reproduce a greater breadth of colours than REC709. REC2020 is comparable to the range of colours available with celluloid film. See Plate 8 for a REC709 and REC2020 comparison.

Activities
•    When it is approaching dusk, close your curtains and let your eyes get used to the colour of the light from an indoor light or lights. Then open the curtains and notice how blue it looks outside. Go outside and notice how orange the light inside the room looks when you are used to the colour of the light outside.
•    If your camera has the option to manually change by degrees on the Kelvin scale, spend some time cycling through from one extreme to the other. Decide what value looks best for a particular shot. Remember you are trying to make white look white. Then use the custom white balance. Bring the footage back to your optimal viewing environment and review. Which do you like the look of more? The custom is not automatically better than what you decided on.
•    Make sure you really know how to perform a custom white balance on your camera. Locate the manual and give yourself a period of time where you can experiment. Be sure you are not simply switching from automatic white balance to custom white balance. Make sure you are also setting the custom white balance yourself.

Colour rule recap Rule: Always perform a custom white balance for every lighting change.

Note 1    YUV is also often referred to as YCrCb. Cr = Chroma difference red. Cb = Chroma difference blue.

8 CAMERA TOOLS

The rules for using the camera (and, in some cases, monitor) tools to achieve an accurate image are as follows: Rule: Always use exposure tools to help you expose a shot. These may be waveform, histogram, zebra or false colour. Rule: Always use focus peaking to help set focus for a shot. Rule: Never assume that the exposure and focus are correct just because they look ‘okay’ on a small monitor. This chapter will explain what the relevant tools are, as well as looking at how they work and how to read them.

Camera tools Why do we need to use camera tools? In short, because it is very easy to get things wrong when relying on our own judgement, particularly when interacting with small, shiny LCD screens. Cinematographers have always used every tool at their disposal to be as sure as possible that the shot they were taking was properly exposed and in focus. No good cinematographer has ever been too proud or in too much of a hurry to consult their tools. In the celluloid world, cinematographers have external light meters for the exposure and measuring tape for the focus.

In the digital world, we have more options. The digital tools are often a simpler and a more accurate way of accomplishing the same thing. Different cameras have different tools available, so knowing how to interpret each is vital. Digital cameras use small LCD screens to compose and set exposure for shots. Using small screens, which are often reflective in daylight, is far from perfect. It is extremely difficult to use your eyes alone to set exposure and focus. Using the various camera tools is essential.

Cinematographer craft If you have ever been on a film set, or watched a behind-the-scenes documentary from a movie, you may have noticed somebody walking around the set with a small device that has what looks like a white ping-pong ball attached to it. This person will usually be the cinematographer and they are taking incident light readings. The incident light reading is telling the cinematographer how bright any particular point of the film set is, so that they can set the exposure level of the camera appropriately. This is made more difficult for the celluloid film cinematographer, because this light meter reading is their only method of gauging how bright or dark something is. They cannot look through a viewfinder or an LCD screen and turn on any digital tools to help them gauge exposure. When shooting on celluloid, the only exposure tools the cinematographer has are their light meter and previous experience. It is only when the rushes (or dailies in the USA) have been processed and a workprint made and viewed that the cinematographer would get to see how accurately they actually exposed the shot. Nerve wracking! This is a real skill and should demonstrate exactly why celluloid cinematographers are held in such high regard. There is such a lot of money associated with shooting celluloid film because of the cost of the film stock as well as the processing and development. This cost means that usually only serious productions will use it, therefore actually exposing and taking shots on celluloid is only for a select few people: the cinematographers. Traditionally, to get to the level of being a cinematographer someone would have had to pay their dues and learn their craft, up from the bottom of a camera department as a camera trainee, through various levels of assistants, operator and then, after a significant number of years, they might reach the top role of cinematographer.

The good news for you is that you can skip over these years of training and start exposing shots and learning your craft right now, with whatever cameras you can get your hands on. A healthy amount of respect is necessary for the traditional cinematography masters, but you can join their ranks quicker thanks to digital video cameras. Specifically, it is the camera tools that let you be sure about exposure and focus and these are your biggest allies in producing consistently accurate images.

External light meter The light meter is often not used on a digital shoot, although some cinematographers still prefer to use one when shooting digitally. The light meter is a small handheld device, with what looks like half a white table-tennis ball on the top. The light meter is two things at once:
•    It is an instrument for measuring the intensity of light at a certain point. For example, how bright it might be in a shadowy part of an image, against how bright it might be at a much brighter part of an image.
•    It is also a calculator. If the cinematographer has inputted certain information, such as ISO and shutter speed, the light meter will also calculate what f/stop the aperture or iris should be.
Using the light meter, the cinematographer can assess the dynamic range of the shot. By checking the various parts of the frame, the cinematographer can tell the difference in stops between the brightest and darkest parts of the image. They will also be able to gauge light levels to allow a part of the image to disappear into shadow, should they so desire. Using a light meter is an old-school skill that seems to be disappearing in favour of other digital tools.

Internal light meter All digital cameras (that have automatic exposure modes) come with internal light meters. The camera uses its internal light meter when in automatic mode to try to keep a shot exposed properly. Internal light meters might be measuring light at a specific point, or a group of points, or maybe measuring an average light level across the whole image.

However, we should be manually controlling our exposure so the internal light meters in our cameras should go unused.

IRE The IRE is a unit of measurement in video signals. The name comes from the Institute of Radio Engineers, who developed the scale. The IRE scale runs from 0 to 100:
•    0 = a completely black video image. Total underexposure.
•    100 = a completely white video image. Total overexposure.
Everything that a digital camera captures should sit within these IRE values. Sometimes values that are below or above the IRE ‘legal’ limits are referred to as ‘clipped’. The word clipped refers to the fact that any information outside of the limits is clipped off and lost forever.1

Histogram A histogram2 is an exposure tool that is most often found on DSLRs. The histogram is a representation of the darkest, middle and brightest parts of the image. The darkest parts of an image are represented on the left side of the histogram; the brightest parts of an image are represented on the right side. Note also that Figure 8.1 includes a luma histogram, which shows the overall brightness level and is likely to be the only histogram available on a camera or monitor, as well as separate histograms for the red, green and blue channels.

FIGURE 8.1  Well-exposed histogram, underexposed histogram, overexposed histogram. Source: Created by author; histogram software Color Finesse.

When exposing using a histogram, you should adjust the exposure until the tonal range of the image is contained within the histogram, not piling up against either side. If the body of the histogram is ramping up the left side then the image is underexposed. If the body of the histogram is ramping up the right side then the image is overexposed.

When exposing an image using the histogram as a guide, you should be looking to contain the complete histogram graphic within the display, evenly spread and with none of the histogram touching either side. See Figure 8.1 for a well-exposed image according to the histogram. If you cannot adjust the exposure so that it does not touch one side or the other, then it is likely that the dynamic range of the camera is not wide enough to capture the full tonal range of the scene.
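For those curious about what the camera is doing, a histogram is simply a count of how many pixels fall into each brightness band, and the tell-tale pile-ups at either end can be detected automatically. The sketch below is an illustration only; the simulated frame, band count and warning threshold are invented for the example.

```python
import numpy as np

def luma_histogram(frame, bands=32):
    """Count pixels per brightness band for an 8-bit greyscale frame."""
    counts, _ = np.histogram(frame, bins=bands, range=(0, 255))
    return counts

def exposure_warning(counts, threshold=0.05):
    """Warn if a large share of pixels sits in the darkest or brightest band."""
    total = counts.sum()
    if counts[0] / total > threshold:
        return "possible underexposure: pixels piling up on the left"
    if counts[-1] / total > threshold:
        return "possible overexposure: pixels piling up on the right"
    return "tonal range contained within the histogram"

# Simulate an overexposed frame: lots of values clipped at 255.
frame = np.clip(np.random.normal(loc=220, scale=40, size=(480, 640)), 0, 255)
print(exposure_warning(luma_histogram(frame)))
```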

Waveform The waveform display is an exposure tool that is found on some camcorders and on some external monitors. The waveform can be interpreted from left to right. The left side of the image being recorded corresponds to the left side of the waveform. The right side of the image being recorded corresponds to the right side of the waveform. The middle of the image corresponds to the middle of the waveform. Notice how the brighter leaf in the centre of Figure 8.2 is represented in the accompanying waveform in the peak in the middle. The waveform displays the brightness level in a value of 0–100: •    100 is at the top of the waveform and is completely overexposed (white). •    0 is at the bottom on the waveform and is completely underexposed (black).

FIGURE 8.2  Waveform. Source: Photo by Emre Gencer on Unsplash.

The waveform in Figure 8.2 displays the signal for the red, green and blue channels of the image all together. Some waveforms only display the luma for the image overall.

The waveform monitor is the most accurate and useful exposure tool as it shows clearly what is happening tonally in every part of the image. This is the professional choice.

RGB parade The RGB parade is an exposure and colour-balance tool that is found on some camcorders and on some external monitors. It can be interpreted in a very similar way to the waveform, except here the RGB colour channels are extracted and placed next to each other. See Plate 9 for an example of an RGB parade and the video image that is represented by it. This can be useful not only to gauge the exposure of each channel, but also to help identify colour imbalance. If one channel is much larger than the others, it could be that the white balance needs changing. It could also be that there is a lot of that colour in the frame. For example, a shot of grass would be much higher in the green channel than the others. A completely white background would display equal levels of each of the three channels. Hence the term white balance.

Zebra The zebra function is an exposure tool that is found on camcorders. This function places a strobing striped black and white pattern (like a zebra) onto certain parts of the image. The zebra pattern gives an indication of exposure levels, and is present only on the LCD monitor (or external monitor if connected) and does not record onto the video image. This function causes a lot of confusion for students who do not know how it works. Some people think that seeing the zebra on certain parts of the image is a good thing, others think it is a bad thing. In practice, it can be a good or a bad thing! How you set up the zebra determines how you interpret the information it gives you. Consider the way the waveform interpreted the brightest and darkest parts of the image using the IRE scale 0–100. The zebra function works the same way: •    100+ = overexposed. Completely white. •    80–99 = bright, but not overexposed. •    70–79 = a good exposure level. Not too dark, not close to overexposure.

•    1–69 = underexposed. •    0 = completely black.

FIGURE 8.3  Zebra – well exposed, overexposed. Source: Copyright © 2019 Jonathan Kemp. Note: With the zebra set to 70% IRE, seeing them on the brightest parts of the most important element of frame (usually a face) should be interpreted as a good exposure level. Not too bright, but enough level to create a well-exposed image. With the zebra set to 100% IRE, wherever you see them is an indication of overexposure.

When setting up a shot, you will be primarily concerned with exposing for one aspect of the shot: your subject. The subject is usually a person, but could be anything: a flower, a building etc. To keep things simple, I suggest you think of using the zebra in one of two ways:
1    Zebra = bad. Set the zebra to be sensitive to 90–100. Then whenever you see the zebra on the image, you will know that the image is close to overexposing, or actually is overexposing, on the zebra part of the image.
2    Zebra = good. Set the zebra to be sensitive to 70–80. Then whenever you see the zebra on the image you will know that the image is at a good exposure level. It is not too dark and not too bright. This is only applicable to the zebra part of the image, so adjust the exposure to ensure that the zebra is only on the brightest parts of the subject. With a person, the brightest parts are usually on the forehead or another place closest to the light source.
In Figure 8.3 the zebra was set to 70–80 and is visible on the brightest parts of the subject.
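The zebra logic itself is nothing more than a threshold test on each pixel's IRE value. The sketch below illustrates the two set-ups described above; the tiny sample 'frame' of IRE values is invented for the example.

```python
import numpy as np

def zebra_mask(ire_frame, low, high):
    """Return True where a pixel's IRE value falls inside the zebra band."""
    return (ire_frame >= low) & (ire_frame <= high)

# A tiny 'frame' of IRE values: shadows, a well-lit face, a blown-out window.
ire = np.array([[12, 35, 74],
                [78, 92, 100],
                [55, 70, 98]])

good = zebra_mask(ire, 70, 80)    # 'zebra = good': marks a healthy exposure level
bad = zebra_mask(ire, 90, 100)    # 'zebra = bad': marks near-clipped and clipped areas

print("Good-exposure zebras:\n", good)
print("Overexposure zebras:\n", bad)
```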

False colour The false-colour tool is a visual indication of relative luminance levels within an image. Using the IRE scale of 0 (clipped black) to 100 (clipped bright), we can see how bright each part of an image is by interpreting the unnatural but (for the purposes of identification) very clear colours. The false colours and the IRE scale/key indicating which colour represents which IRE level make this a very accurate way to identify relative brightness in a shot. This is useful if you want to make sure a face is brighter than the background, for example, or to help you light one side of a face to be twice as bright as the other side. See Plate 10 for an example of a false-colour display. Here we can see that the brightest parts of the character’s face are around the 84–93 IRE mark and the darker side of her face is around 54–58 IRE. The brightest parts of the frame are reflections, which are 93–100 IRE in places. Some field monitors have the false-colour indications built in and this is a very useful tool for helping control lighting levels where it is an option.

Focus peaking Focus peaking is a focus aid that is found on camcorders and cinema cameras and some DSLRs. The peaking function highlights areas of the image that are in focus with a coloured outline. Usually the settings allow for a red, blue or white outline. See Plate 11 for an example of a focus-peaking display. The appropriate colour should be selected depending on the colour content of the image. The focus-peaking colour should stand out from the background so it can be identified easily. Focus peaking works better on surfaces that have hard lines (like a bearded face) but not so well on smooth surfaces (like a ball). The peaking also works better in well-lit environments, and not so well in dark environments. This is an extremely useful tool. If you have a camera that has focus peaking available, it is advisable to use it for every shot.

Vectorscope The vectorscope is a way to check the recorded colours in an image. It is found on some external monitors. It only displays the colours that are recorded in an image: which colours are present and the saturation level of each colour. See Plate 12 for an example of a vectorscope and the video image that it represents. The circle represents the colour wheel. The hue changes following the colour-wheel pattern around the circular scope: red, magenta, blue, cyan, green, yellow. The distance from the centre represents the saturation of the colour. The centre point is no saturation, so no colour to the image. The midway point, where the boxes are, is the broadcast-legal saturation level. See Plate 13 for an example of a vectorscope and the SMPTE colour-bar test pattern.

Activities Get to know the camera or cameras that you will be using. Explore each of the tools outlined in this chapter and learn how to apply them all.

Camera tools rules recap

Rule: Always use exposure tools to help you expose a shot. These may be waveform, histogram, zebra or false colour. Rule: Always use focus peaking to help set focus for a shot. Rule: Never assume that the exposure and focus are correct just because they look ‘okay’ on a small monitor.

Notes 1    In colour grading, people sometimes refer to ‘crushing the blacks’, which means reducing the black level of an image until some detail has been lost. They do this because it makes for a high-contrast image if the image also has bright areas. 2    Histograms also exist in other fields. In a more general application, a histogram can be a representation of numerical data, usually in a bar graph.

9 DYNAMIC RANGE

Some good guides for achieving the best dynamic range on your camera are: Rule: Consider the dynamic range of the camera you are working on. Make allowances for low dynamic range cameras. Rule: Consider the use of lighting and reflectors to even up lighting levels between internal and external scenes. Rule: Develop a personal understanding of what is an acceptable level of overexposure, and in what circumstance. The rest of this chapter will look into what dynamic range is, and how to make the most of your camera’s dynamic range.

Dynamic range The term dynamic range describes the ratio of brightness and darkness levels that a camera can capture at the same time. Cameras with a low dynamic range cannot accurately capture a large difference in light levels. The result is that areas of an image are either underexposed or overexposed. Cameras with a high dynamic range (HDR) can accurately capture a large range of brightness levels without under- or overexposing parts of the image.

FIGURE 9.1  Low dynamic range image – blown-out highlights. Source: Copyright © 2019 Ethan Rogers.

For a long time, video has typically had a very low dynamic range, meaning that images tended to under- or overexpose easily, as with Figure 9.1. Notice how the windows and everything outside the room is very overexposed. Modern video cameras are making significant improvements in their dynamic range. Celluloid film always had a high dynamic range, meaning brighter areas do not under- or overexpose as easily on film. Classical cinematographers also worked hard, controlling the light levels on a film set to ensure that no light source went beyond the scope of the native dynamic range of the celluloid. Celluloid film has roughly 13 stops of dynamic range, so a diligent cinematographer working with celluloid will control the light levels to ensure all light in a scene is within this range.

Intentional under- or overexposure An exception to this might be if a cinematographer wanted to let a portion of a frame disappear into the silky blackness of underexposure for effect. As in the example of Figure 9.2, David Fincher and cinematographer Harris Savides use underexposure to make certain scenes feel shadowy and uncomfortable in Zodiac. Notice the lack of visible detail in the darkest parts of this image; the side of the

character’s face and his shoulders. Also notice that the brightest part of this image, the strip-lights, is not overexposed. Alternatively, the cinematographer may want part of the frame to overexpose and let the whiteness bloom out over the image for effect. In Figure 9.3, Stanley Kubrick and cinematographer John Alcott deliberately let the windows of the church overexpose in this shot from Barry Lyndon, perhaps to make the wedding that is taking place seem all the more gloomy when compared to the sunshine that is outside. Notice how the bright parts of the windows affect the hair and the faces of the people who are stood directly in front of them.

FIGURE 9.2  Frame from Zodiac 2007. Directed by David Fincher.

FIGURE 9.3  Frame from Barry Lyndon 1975. Directed by Stanley Kubrick. Normally though, it will be the intention of the cinematographer to stay within the confines of the dynamic range of the camera, ensuring that nothing is under- or overexposed. Another name for dynamic range is exposure latitude. TABLE 9.1  Contrast ratio and stop difference chart

Contrast ratio In film and video images, the word contrast is used to describe the difference between the brighter and the darker parts of a frame. The term contrast ratio is used to put a measurable numerical value to that difference. A contrast ratio of 10:1 (10 to 1) means that the brightest level in an image is ten times brighter than the darkest level in the image. The contrast ratio is measured on a linear scale, meaning that each unit of measurement is equal. This is opposed to the exponential or logarithmic ‘stop’ difference, which doubles or halves each time. Using Table 9.1 or a similar chart, you can follow or even memorise the relationship between the contrast ratio and the unit measurable with a light meter: the stop. If you wanted a shot of someone’s face to be eight times darker on one side than the other, you could identify that you would like an 8:1 contrast ratio on that face. Using the lights at your disposal, you would seek to ensure that there was a three-stop difference between the two sides, possibly using a light meter or the false-colour tool.
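The relationship behind Table 9.1 is simply that every extra stop doubles the light: the ratio is 2 to the power of the number of stops, and the number of stops is log2 of the ratio. The sketch below prints a few rows of such a chart and works through the 8:1 face example; it is an illustration, not a reproduction of the table itself.

```python
import math

def stops_to_ratio(stops):
    """Contrast ratio corresponding to a given stop difference."""
    return 2 ** stops

def ratio_to_stops(ratio):
    """Stop difference corresponding to a given contrast ratio."""
    return math.log2(ratio)

# A few rows of a contrast ratio / stop chart.
for stops in range(1, 8):
    print(f"{stops} stop(s) = {stops_to_ratio(stops):.0f}:1")

# The example from the text: one side of a face eight times darker than the other.
print(f"8:1 contrast ratio = {ratio_to_stops(8):.0f} stops between the two sides")
```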

Luminosity Luminosity (luma) is a word to describe the light level.

Linear and logarithmic It might seem logical to measure light evenly and linearly, e.g. if you double the light level in a scene, the image is twice as bright, but that is not the way our eyes or the cameras we use interpret light. Our eyes respond roughly logarithmically: the brighter the light becomes, the less sensitive we are to further increases.

Dynamic range of human vision The dynamic range of our human ocular system is very wide. We have a maximum of about 30 stops or a contrast ratio of 1,000,000,000:1. It is a complicated system though, and we cannot see the full extent of that range at one time. For example, our eyes adjust to darkness over time and we can see better in the dark after about 20 minutes or so. At any one time, we can see about 10 stops of dynamic range, which is a contrast ratio of 1000:1. However, an estimate of our typical dynamic range is about 20 stops, or a contrast ratio of roughly 1,000,000:1.

Dynamic range of celluloid film

The dynamic range of celluloid film is very high. There can be a 13-stop difference in dynamic range, which is good, but significantly less than human vision.

Dynamic range of video The dynamic range of video is typically very poor: somewhere between 5 and 7 stops for most kinds of video camera that do not specify a wide dynamic range.

Dynamic range of raw video The new kind of digital cinema camera (of which the Blackmagic range of cameras is probably the most affordable) has been designed to replicate a film-like dynamic range on video. Cameras such as the Blackmagic URSA Mini Pro have 15 stops of dynamic range. This even surpasses the dynamic range of celluloid film.

FIGURE 9.4  20 stops of human eye with 13 stops celluloid film, 5–7 stops video and 15 stops raw video. Source: Copyright © 2019 Sam Pratt.

HDR – high dynamic range HDR is a new kind of computational photography that takes multiple images of the same moment at different exposure settings. There are at least two images per frame, identical apart from their exposures. One is exposed to capture the detail in the darker areas, and the other is exposed to capture the detail in the brightest areas. These two images are then ‘stitched’ together using software so that the appearance is of a very wide dynamic range. HDR stills cameras are common. Smartphones have HDR functionality for taking stills (not video). Actual HDR video cameras are currently only in the consumer bracket. This is because HDR video is better than standard video (5–7 stops) but not as wide as digital cinema cameras that record LOG gamma and raw footage.

FIGURE 9.5  HDR image in three parts. Source: Copyright © 2019 Jonathan Kemp.

Note: Notice that information from each of the three exposures has been combined into one single HDR exposure. For example, the sky detail that has been lost in the high and middle exposures has been combined with the darkest shadow details to create a much fuller picture than any of the originals alone.

HDR is more of a consumer standard (like HDR10+ and REC2020), which is designed to keep a lot more of the dynamic range of an original file and display it on a consumer television, as opposed to formatting for a REC709 display (standard HD footage) and simply losing some of the information in the brighter areas.
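The stitching step can be thought of as a weighted average in which each exposure contributes most where its pixels are neither crushed nor clipped. The sketch below is a deliberately simplified, single-channel illustration of that idea, not how any particular camera or smartphone actually implements HDR; the bracketed sample values are invented.

```python
import numpy as np

def merge_exposures(exposures):
    """Merge normalised (0-1) greyscale exposures of the same scene.
    Each pixel is weighted by how close it is to mid-grey in that exposure,
    so clipped highlights and crushed shadows contribute least."""
    stack = np.stack([e.astype(float) for e in exposures])
    weights = 1.0 - np.abs(stack - 0.5) * 2        # 1 at mid-grey, 0 at pure black/white
    weights = np.clip(weights, 1e-3, None)          # avoid dividing by zero
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)

# Three bracketed 'frames' of the same row of pixels: dark, middle, bright exposure.
dark   = np.array([0.02, 0.10, 0.45, 0.70])
middle = np.array([0.10, 0.40, 0.90, 1.00])        # window already clipping
bright = np.array([0.35, 0.80, 1.00, 1.00])        # shadows readable, highlights gone

print(np.round(merge_exposures([dark, middle, bright]), 2))
```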

In practice As a general rule, it is a good idea to keep exposure levels similar, so do not place the person you are filming directly in front of a window on a bright day, as the difference in light levels between the person indoors and the scenery outside is likely to be too great for the camera to handle. The result will be either that the person inside is underexposed and outside is properly exposed, or that the person is properly exposed and outside is overexposed.

FIGURE 9.6  Inside/outside exposure. Source: Copyright © 2019 Ethan Rogers.

If you need to shoot with the person in front of the window, you could try to minimise the difference in lighting levels by:
1    Blocking off light coming in from outside, either with black flags, or even attaching large sections of ND filter to the window.
2    Adding light to the indoor scene to bring the light level inside up to be closer to the outdoor level. You could use indoor lighting or pop-up reflectors to bounce light back onto your subject.

Overexposure In some situations, it may be impossible to achieve an image that does not contain any areas of overexposure. Some blown-out windows can be seen from time to time on films and television programmes. Professionally acceptable rules about these things are shifting and some overexposure can be seen regularly. Keep a look out for overexposure on the films and programmes that you are emulating and develop a working understanding of what level of overexposure in what circumstances is acceptable to you. Be prepared to evolve your acceptable level as your judgement develops.

Activities Research into the dynamic range of your camera. Try to film an indoor scene where you place a character against a window. This is one of the trickiest situations to expose properly for. Try the following:
1    Controlling the lighting using external lights or by blocking out light until the contrast ratio in the scene is reduced.
2    Change location or angle until the contrast ratio is reduced.
3    Consider losing exposure in a part of the image. Maybe some part of the sky overexposing will be acceptable. Use your own judgement.
4    Carefully review your decisions when you get back to your ideal viewing space. Reflect honestly about whether it was successful or not.

Dynamic range rule recap Rule: Consider the dynamic range of the camera you are working on. Make allowances for low dynamic range cameras. Rule: Consider the use of lighting and reflectors to even up lighting levels between internal and external scenes. Rule: Develop a personal understanding of what is an acceptable level of overexposure, and in what circumstance.

Note on the next three chapters The three chapters – 10 Image plane, 11 Aspect ratio and 12 Resolution – are all interrelated. This whole book is interrelated, but these three chapters in particular are hard to separate from each other. Some of the content for each of the three chapters is the same, but with a different emphasis.

10 IMAGE PLANE

The most significant thing you can do to help give your film a cinematic depth of field is: Rule: Choose a sensor that most closely resembles the dimensions of an S35 image plane, to replicate a similar cinematic depth of field. This chapter will look at some of the different sizes of celluloid and digital image planes; that is, the size of some of the most significant celluloid film stocks and digital imaging sensors.

The film gate The film gate restricts the circle of light projected through the lens onto a specific area of celluloid film. This area could be the maximum amount of space available between the film sprocket perforations, but is sometimes reduced to make room for other things, such as optically printed soundtracks. The film gate determines the image plane on the celluloid, the surface area and aspect ratio of the image.

Image plane The term image plane describes the dimensions of the light-sensitive material or the surface area, and relates to three properties. Two of these properties are technical, and the other is an aesthetic attribute:

FIGURE 10.1  Film gate and image plane with circle of light. Source: Copyright © 2019 Sam Pratt.

1    Sensitivity. Technical. The larger the surface area, the more sensitive the image plane will be to light. All things being equal, a small imaging plane will require more light than a large imaging plane. 2    Resolution. Technical. See Chapter 12 Resolution. To summarise: the greater the total surface area of the celluloid, the higher the potential resolution of the resulting image. Note: the surface area of a digital sensor is independent of the resolution. A large sensor does not necessarily indicate a high resolution. 3    Depth of field. Aesthetic. See Chapter 6 Depth of field. To summarise: the larger the image plane, the shallower the depth of field can be.

Celluloid film plane

Celluloid film stocks come in different sizes. Motion-picture film stocks were created at sizes that were designed to be cut out of the commercially plentiful 70mm stills photography celluloid. Therefore 35mm, 16mm and 8mm are (roughly) subdivisions of the 70mm celluloid negative. The image plane in each case does not extend across the full width of the celluloid. The image plane of the 35mm negative is not 35mm across, because part of the celluloid is given to the sprocket hole perforations that run along each side of the image and are a necessary part of the camera’s process for advancing the film from frame to frame. The big driving forces behind the many different film versions that are detailed in this chapter are improved resolution and the pursuit of the widescreen image (as well as cost and efficiency). It is worth paying attention to the figure in mm2 that accompanies each format. This is directly linked to the resolution of the image and how good it will look when projected on a large screen. The size of the image in mm2 does not directly relate to pixel count, but the general principle can be observed that the greater the surface area of the image, the higher the resolution of the image will be.
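For reference, the mm2 figure quoted for each format is simply the width multiplied by the height, and the diagonal follows from Pythagoras. The sketch below reproduces that arithmetic for a few of the formats covered in this chapter and adds a 'crop factor' relative to the Super 35 dimensions given later; the calculation is an illustration and the format list is only a sample.

```python
import math

def plane_stats(width_mm, height_mm):
    """Surface area (mm^2) and diagonal (mm) of an image plane."""
    area = width_mm * height_mm
    diagonal = math.hypot(width_mm, height_mm)
    return area, diagonal

# Width x height figures quoted in this chapter.
formats = {
    "Super 8":  (5.79, 4.01),
    "Super 16": (12.52, 7.41),
    "Super 35": (24.89, 14.0),
}

s35_diagonal = plane_stats(*formats["Super 35"])[1]
for name, (w, h) in formats.items():
    area, diag = plane_stats(w, h)
    crop = s35_diagonal / diag          # how much 'smaller' the format is than Super 35
    print(f"{name}: {area:.2f}mm2, {diag:.2f}mm diagonal, {crop:.1f}x crop vs Super 35")
```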

8mm The 8mm film format is a pre-video consumer home-movie format. The image plane of the 8mm film is: 4.5mm × 3.3mm (5.58mm diagonal) = 14.85mm2. At one time, 8mm home movies were the only option before consumer video cameras became available. The small image plane resulted in quite a deep-focus image and one that did not look great when projected on a large screen.

Super 8 Super 8 film is an advancement of the 8mm home-movie film format that used the same 8mm celluloid, but rearranged the size and shape of the film gate and the sprocket holes to accommodate a larger image plane (surface area) on the 8mm film stock, improving the resolution slightly. Super 8 and 8mm film had a distinctive look that can invoke a sense of nostalgia when used today. As well as the deep focus, the images often have a softness to the focus, caused by older vintage lenses, which were typically less sharp than today’s lenses. They are usually very grainy, caused by the reduced sensitivity to light and the need to enlarge the picture a lot to view it at a decent size. There is also often a scratchy and weathered quality to Super 8 movies that are viewed today, caused by wear and tear and multiple projections over the years. The image plane of the Super 8 film is: 5.79mm × 4.01mm (7.04mm diagonal) = 23.21mm2.

16mm The 16mm film format is a low-budget professional celluloid film option, used as a cheaper and physically smaller alternative to 35mm film. The format was often used for journalism and documentary due to the smaller size of the film stock. The 16mm film format was also used on low-budget and student films, again for cost-saving reasons.

FIGURE 10.2  Film stock size comparison. Actual size. Source: Copyright © 2019 Sam Pratt.

The image plane of the 16mm film is: 10.26mm × 7.49mm (12.7mm diagonal) = 76.85mm2.

Super 16 Super 16 film is an advancement of the 16mm professional film format that used the same 16mm celluloid, but removed the sprocket perforations on one side and adjusted the film gate to accommodate a larger image plane, resulting in an image with a greater resolution. The image plane of the Super 16 film is: 12.52mm × 7.41mm (14.54mm diagonal) = 92.77mm2. The Super 16 format is not often used in mainstream feature films, but has been used a number of times. The film This is England used the Super 16 format, possibly because of the reduced cost of the format. The Hurt Locker used Super 16, probably because of the association with war cinematography and the documentary shooting style. Also the 16mm film stock runs through the camera at a rate of 36 feet per minute (at 24 fps) and the comparative 35mm film stock runs through the camera at a rate of 90 feet per minute. Using the 16mm film would mean that longer takes could be achieved with less frequent breaks in filming while the camera assistants changed the film magazine.

35mm – silent movies Before movies featured sound, the full available area of the film stock was used to capture the image. The image plane of the silent 35mm film is: 24.89mm × 18.67mm (31.11mm diagonal) = 464.70mm2.

35mm – Academy When sound was introduced and recorded onto the film frame on the optical soundtrack, the same basic aspect ratio (screen shape) was retained by altering the film gate to reduce the size of the recorded image. The 35mm film format is the original standard for motion-picture films with sound.

The image plane of the Academy standard 35mm film is: 22mm × 16mm (27.2mm diagonal) = 352mm2.

35mm – widescreen Televisions became commercially available in the 1950s and, in an effort to attract people out of their homes and back into the cinema, film-production companies wanted to offer extra spectacle to cinema-going audiences: big, widescreen images. One simple method for creating widescreen images is to matte or crop off the top and bottom of the regular 35mm image plane. This results in a widescreen image, but the resolution is negatively affected as the surface area of the imaging plane is reduced by about 40%. This reduction in resolution was noticeable when projected at large sizes. The image plane of the widescreen 35mm film without the crop is: 21.95mm × 18.6mm (28.77mm diagonal) = 408.27mm2. The image plane of the widescreen 35mm film after the crop is: 21.95mm × 11.86mm (24.94mm diagonal) = 260.32mm2. To combat the loss in resolution, but still achieve a widescreen image using the same readily available 35mm film stock, more sophisticated methods were developed.

Anamorphic 35mm Anamorphic is a generic term that describes a process of distorting (or squashing) an image to fit a particular imaging plane. The anamorphic process is twofold. The first stage (filming) uses anamorphic lenses (as opposed to regular spherical lenses) to squash the image horizontally, to make a widescreen image fit a standard, non-widescreen image plane. The second stage (projection, or a digital scan for use in an NLE system) un-squashes the image, to re-create the widescreen image. The net benefit is an increase in resolution. To put it another way, the anamorphic process uses the full four-perforation celluloid area while creating a widescreen image. It negates the celluloid and resolution wastage that was present in the standard 35mm widescreen. Anamorphic 35mm has about a 40% increase in resolution over the standard 35mm widescreen.

The image plane of the Panavision 35 anamorphic 35mm film is: 21.95mm × 18.6mm (28.77mm diagonal) = 408.27mm². The process is known more formally as anamorphoscope, but is often referred to simply as ‘scope’. Figure 10.3 features an anamorphic image and the final, corrected (de-squeezed) image.

Super 35
Super 35 (S35) is an advancement of the 35mm motion-picture film format that used the same 35mm celluloid, but removed an area of the film that was reserved for an optically printed soundtrack and adjusted the film gate to accommodate a wider image. This resulted in an image with a slightly wider aspect ratio (screen shape) and a greater resolution (although not as high as anamorphic 35). Super 35 is not a distribution standard (i.e. not for cinema projection) but is only a capture medium. The extra resolution is retained and the image is printed back onto a regular distribution celluloid (with optically printed soundtrack) or digitally scanned for editing inside an NLE system.

FIGURE 10.3  Anamorphic image. The above image is the anamorphic image taken directly from a camera using an anamorphic system. The image below is the same image that has been ‘de-squeezed’ in post-production. Source: Copyright © 2018 Mac Nixon. Cinematography by Andy Toovey.

Super 35 narrowed the resolution gap between 35mm widescreen and anamorphic 35mm. It improved the resolution, but without the need for specialist lenses. The image plane of the Super 35 film is: 24.89mm × 14mm (28.55mm diagonal) = 348.46mm².

Techniscope
Techniscope is a lower budget widescreen 35mm format. The process is associated with low-budget and student films. The process is similar to the standard widescreen format, but instead of simply wasting the area above and below the widescreen image on the four-perf frame, Techniscope reduces the height of the image plane to two perforations. This is similar to the post-cropped 35mm widescreen format (although resulting in an even wider image) but is 50% more efficient with the film stock. Half as much celluloid runs through the camera per second, so the process is half as expensive. The film gate and the shuttle claw need to be permanently altered to make a camera shoot in Techniscope two-perf. This process was used on budget-conscious classics such as The Good, the Bad and the Ugly (1966), American Graffiti (1973) and, more recently, American Hustle (2013). The image plane of the Techniscope film is: 22mm × 9.47mm (24mm diagonal) = 208.34mm². This is by far the lowest resolution of all the 35mm formats.

VistaVision
VistaVision is a film format created in the 1950s by Paramount Pictures. The system used regular 35mm film, but orientated horizontally, instead of vertically. The result is a much greater image plane and resolution. The film stock is the same readily available 35mm, but the resolution is almost 50% higher than even the anamorphic, and the grain structure appears much smaller when compared to 35mm at a regular orientation. The images are also natively widescreen so no cropping is needed. It was a high quality but expensive solution. Some classic films were shot using the VistaVision format, including White Christmas (1954) and Alfred Hitchcock classics Vertigo (1958) and North by Northwest (1959). The format was made obsolete in favour of anamorphic and Super 35, but developed a niche as a special-effects format. Because of the very high resolution, VistaVision was used to capture assets that would be used in special-effects shot composites. The VistaVision process was used in a special-effects capacity on a number of big-budget effects-heavy movies, such as the original Star Wars trilogy (1977–83), Gladiator (2000) and Scott Pilgrim vs the World (2010).

The image plane of the VistaVision film is: 37.39mm × 25.32mm (45.15mm diagonal) = 946.71mm².

65mm
The 65mm film is a very high-quality alternative to 35mm film. There are far fewer films that have used the larger image size, although there have been a number of different variations on the 70mm format. Technically, the film that runs through a camera is 65mm wide, and the finished presentation is projected from 70mm film, with the extra 5mm being dedicated to an audio soundtrack. As such, the terms 65mm and 70mm are used interchangeably. The image plane of the 65mm film is: 52.63mm × 23.01mm (57.44mm diagonal) = 1211.02mm².

IMAX 70mm
IMAX 70mm is the most famous of the 70mm cinema formats and is the largest in cinema history. IMAX film runs horizontally through the camera and projector, similar to VistaVision, only larger. The IMAX frame spans a massive 15 perforations of the horizontally running film. The image plane of the IMAX film is: 70.41mm × 52.63mm (87.90mm diagonal) = 3705.68mm². This is why IMAX film can be projected on the enormous IMAX screens. It is important to note the difference between true projected 15/70mm IMAX and the much more common digital IMAX screens that are in most large cities in the UK. Digital IMAX is simply a larger than normal screen size with a 2K or 4K digital projection. There are currently only three true 15/70mm IMAX theatres in the UK.

Digital imaging sensor
The size of a digital imaging sensor is not an indication of the resolution. A very small sensor from a smartphone can have more pixels than a full-frame DSLR sensor (comparable to VistaVision size). For example, a smartphone might have a 4K camera and a small sensor (8mm size or less), but a full-frame DSLR (VistaVision size) might only produce HD images (which contain only a quarter of the pixels of a 4K image). The two are independent in the digital world. Digital imaging sensors use an array of millions of light sensors (photosites) that create an electrical current when exposed to light. The strength of the current is directly related to the brightness of the light. The sensor and the camera’s imaging system then convert the information from the photosites into an image comprised of individual picture elements: pixels. Digital imaging sensors can discharge this current very quickly, meaning that the sensor can be read to capture another frame very rapidly. As such, digital imaging sensors can capture much higher frame rates than celluloid cameras, which are limited by mechanical factors, such as the speed that film can be passed through the camera body.
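To make the independence of sensor size and pixel count concrete, here is a small, hypothetical comparison; the resolutions are the standard 4K UHD and HD pixel counts, and the two cameras themselves are imaginary examples:

# Pixel count comes from the photosite grid, not the physical sensor size.
phone_4k_pixels = 3840 * 2160        # a tiny smartphone sensor shooting 4K UHD
full_frame_hd_pixels = 1920 * 1080   # a full-frame (VistaVision-sized) sensor shooting only HD

print(phone_4k_pixels, full_frame_hd_pixels)   # 8294400 vs 2073600 (four times as many)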

3CCD
A 3-chip charge-coupled device (3CCD) is a very common kind of digital imaging sensor for camcorders. It consists of three separate sensors (charge-coupled devices), each one recording a separate colour channel: red, green and blue. Before the light from the lens reaches the sensors, it is split into three by a beam splitter, and each beam is sent to a separate red, green or blue digital imaging sensor.

CMOS sensor
The complementary metal-oxide-semiconductor (CMOS) sensor is a single digital imaging sensor that captures all three colour channels at once. There is no beam splitting with a CMOS sensor. These are the kind of digital imaging sensors used on single-chip devices like a DSLR or a smartphone. The CMOS sensor captures the three channels of light (RGB) using a Bayer pattern (or Bayer filter).

FIGURE 10.4  Beam-splitter diagram. Source: Copyright © 2019 Sam Pratt.

FIGURE 10.5  CMOS sensor (artist’s impression). Sensor and film stock are presented as smaller than actual size. Source: Copyright © 2019 Sam Pratt.

Bayer pattern
The Bayer filter uses a pattern of alternate red, green and blue sensitive photosites to record the different colour channels. Notice that there are twice as many green photosites as either the red or blue: green = 50% – red = 25% – blue = 25%. This is because our human visual system is more sensitive to the colour green (probably because of the abundance of grass and vegetation etc.), so more photosites are dedicated to that colour. See Plate 14 for a representation of a Bayer filter.
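A tiny sketch of one repeating Bayer tile (the common RGGB arrangement) shows where the 50/25/25 split comes from; this is an illustration, not a model of any particular sensor:

bayer_tile = [["G", "R"],
              ["B", "G"]]   # one 2 × 2 repeating block of the filter

counts = {}
for row in bayer_tile:
    for photosite in row:
        counts[photosite] = counts.get(photosite, 0) + 1

total = sum(counts.values())
for colour in ("G", "R", "B"):
    print(colour, counts[colour] / total)   # G 0.5, R 0.25, B 0.25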

Digital sensor sizes
Small sensor
Most traditional video cameras, including a lot of modern camcorders and all smartphone and action cameras, feature imaging sensors that are smaller than S35 film. A common digital imaging sensor size is 1/3”. That’s less than one-third of an inch measured diagonally from one lower corner to the opposite upper corner. The imaging plane of a 1/3” sensor is: 4.8mm × 3.6mm (6mm diagonal) = 17.28mm². The surface area of the digital sensor has no bearing on the resolution of the image produced. The 1/3” sensor could have 3840 × 2160 pixels producing a 4K image. It could also be a lower resolution than that. However, the small imaging plane size does affect the sensitivity and the depth of field possible. The sensitivity will be much less than a larger sensor, meaning (all things being equal) more light will be needed from outside the camera, or more ISO/gain will need to be added to the image to expose properly. The depth of field will also be very deep, meaning everything will be in focus. This is good from a practical point of view, but frustrating for someone wanting to use shallow depth of field in their storytelling.

Micro Four Thirds
Some DSLRs (stills cameras) have a Micro Four Thirds-sized sensor, MFT for short. This is a larger sensor than the ones found in smartphones, but not quite as large as a S35-sized sensor. The imaging plane of a MFT sensor is: 17.3mm × 13.0mm (21.6mm diagonal) = 225mm². Again, this has no bearing on the potential resolution of the images produced, but does affect the sensitivity and the depth of field of the resulting image.

FIGURE 10.6  Digital sensor size comparison. Actual size. Source: Copyright © 2019 Sam Pratt.

The MFT sensor does not achieve a shallow depth of field quite as easily as a S35-sized sensor does, although it is possible to achieve.

APS-C
The APS-C sensor is the standard DSLR imaging sensor (not full-frame cameras). There are two variations on APS-C sizes, but both are comparable to S35. The imaging plane of an APS-C (Canon) sensor is: 22.2mm × 14.8mm (26.7mm diagonal) = 329mm². The imaging plane of an APS-C (Nikon, Sony, Pentax) sensor is: 23.6mm × 15.7mm (28.3mm diagonal) = 370mm². This image plane is very close in surface area to S35 and so mimics the look of the depth of field well and is also more sensitive to light than smaller sensors because it is larger.

S35-sized sensor

As a response to DSLR filmmaking, which began accidentally, camera manufacturers have begun to offer cameras with a sensor that very closely matches the dimensions and surface area of an S35 camera to those wishing to create filmic video images. This is primarily to replicate a film-like depth of field. The imaging plane of an S35 sensor is: 24.9mm × 14mm (28.56mm diagonal) = 348.6mm². Cameras that have a digital S35 sensor are exactly what we are looking for in our quest to create filmic images. They are designed to make video look like film, in so far as the depth of field is concerned.

Full-frame sensor
Full-frame sensors are found on the camera that started the DSLR filmmaking revolution: the Canon 5D. This is primarily a stills (photography) camera that has a sensor the same size as the film inside a 35mm stills camera, which runs horizontally. As such, the image plane is larger than that of the motion-picture camera, which runs the film vertically. This makes achieving a shallow depth of field even easier on a full-frame camera than on S35 film (or equivalent). This ease of access to shallow depth of field images was the reason for the DSLR revolution. Suddenly, the filmic quality that was very hard to replicate on a smaller sensor camera was very easy to pull off with the full-frame camera. As such, the shallow depth of field look was everywhere. It felt like every video was filmed on a DSLR and used shallow depth of field to the extreme. Indie filmmakers really overused that trick. The imaging plane of a full-frame sensor is: 36mm × 24mm (43.3mm diagonal) = 864mm².

VV sensor
The VistaVision image plane has made a comeback in recent years with the digital camera manufacturer Red producing their Monstro 8K VV that has a VistaVision-sized imaging sensor. This is opposed to the majority of their cameras that have a S35-sized sensor. The original VistaVision camera used ordinary 35mm film but oriented horizontally. This makes a VV sensor virtually the same size as a full-frame sensor.

The imaging plane of the Red VV sensor is: 40.96mm × 21.60mm (46.3mm diagonal) = 884.74mm².
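Putting the sensor dimensions from this chapter side by side makes the comparison easier to see. The sketch below works out each sensor's surface area and its diagonal relative to full frame (the 'crop factor', a common shorthand that this chapter does not otherwise use); the results are only as accurate as the rounded dimensions quoted above:

import math

sensors = {                      # width, height in mm, as quoted in this chapter
    "1/3-inch":   (4.8, 3.6),
    "MFT":        (17.3, 13.0),
    "APS-C":      (23.6, 15.7),
    "S35":        (24.9, 14.0),
    "Full frame": (36.0, 24.0),
    "Red VV":     (40.96, 21.60),
}

full_frame_diag = math.hypot(36.0, 24.0)
for name, (w, h) in sensors.items():
    diag = math.hypot(w, h)
    # surface area, then how many times smaller the diagonal is than full frame
    print(name, round(w * h, 1), "mm²,", round(full_frame_diag / diag, 2), "x crop")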

DSLR sensor size vs DSLR video imaging
When considering DSLR (stills camera) sensors, there are a couple of things to bear in mind. These are based on the idea that a digital stills camera is designed foremost to take still images. The ability to record video is a secondary consideration. Some issues arise out of using a DSLR to make video images, which it is not primarily designed for:
1    Aspect ratio. Still images are typically squarer than video images. A typical still image aspect ratio might be 3:2, whereas all HD and beyond video content is an aspect ratio of 16:9. Therefore, the video image area on the sensor will be smaller than the larger stills area. This is a consideration when looking at the size and surface area of a sensor (see the sketch after this list).
2    Aliasing. Stills cameras usually take much higher resolution stills images than they do video. They were built for high-resolution photographs, but a smaller resolution video. For example, the Canon 5D Mk4 will take stills at a resolution of 6720 × 4480 (more than 6K), but will record video at a resolution of 4096 × 2160 (Cinema 4K). The camera effectively has a micro computer inside, which converts the images into a video file. The camera has to resize the larger images to meet the requirements of the smaller video size. This can result in some unattractive artefacts: aliasing or moiré. The camera reduces the size of the image by skipping lines of pixels. This process is mostly unnoticeable, but certain tight patterns of straight lines can cause noticeable artefacts. Brickwork and other tight patterns can cause the effect. Sometimes repositioning the camera can alter or remove aliasing artefacts. Care should be taken with this however, as it is easy to miss when shooting using only a small screen for guidance.
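The first point is easy to quantify. Assuming, purely as an example, a full-frame 36 × 24mm stills sensor that uses its full width for video (many but not all DSLRs do), the 16:9 video crop uses noticeably less of the sensor than a 3:2 still does:

sensor_w, sensor_h = 36.0, 24.0      # a 3:2 full-frame stills area (example values)
video_h = sensor_w * 9 / 16          # height of a 16:9 crop at the full sensor width

stills_area = sensor_w * sensor_h
video_area = sensor_w * video_h
print(round(video_area, 1), "of", round(stills_area, 1), "mm²,",
      round(video_area / stills_area, 2))   # 729.0 of 864.0 mm², 0.84 of the stills area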

Global and rolling shutter
The film camera, as well as more expensive dedicated digital cinema cameras, uses a global shutter; that is, a shutter that opens and closes almost instantly. DSLRs usually have a mechanical shutter for taking stills. That ‘click’ that you hear (and feel) is the mechanical shutter physically opening and shutting. In video mode however, the camera cannot click away at 24 times per second. It would be noisy and vibrate the camera, not to mention wearing out the shutter very rapidly. Instead, the DSLR uses an electronic shutter in video mode. In electronic shutter mode, the physical shutter remains open and the sensor simulates the shutter speed by discharging and taking the image from the sensor at the frame rate and shutter speed you have selected. It does not take the full image at the same time though; it reads from the top of the sensor to the bottom, row by row. This process is very rapid, but not quick enough to produce flawless video images. The top of the image is older than the bottom by a fraction of a second, and this can result in rolling shutter artefacts, or the jelly effect. If you move the camera from left to right quickly enough, straight objects will appear bendy, as the bottom of the camera image is slightly behind the top of the image. This can be accounted for when filming by reducing the number of shots that will make the rolling shutter noticeable. There are also rolling shutter repair plugins available in NLE software such as Adobe Premiere Pro.
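You can get a rough feel for how big the jelly effect will be with a back-of-the-envelope calculation. Both numbers below are illustrative assumptions rather than the specification of any real camera: a sensor that takes around 1/30 of a second to read from top to bottom, and a pan fast enough to move the image 1,000 pixels per second:

readout_time = 1 / 30     # seconds for the rolling readout, top row to bottom row (assumed)
pan_speed = 1000          # horizontal image movement in pixels per second (assumed)

skew = pan_speed * readout_time
print(round(skew))        # about 33 pixels of lean on vertical lines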

Activities
•    Research the cameras you have access to. Most camera manufacturers will have a PDF manual that you can freely download. Determine the sensor size.
•    Experiment with the different cameras you have access to and try to achieve a shallow depth of field (remember what you have learned about aperture control in shallow depth of field).
•    Review the footage in your ideal viewing conditions.
•    Try to replicate aliasing on your camera or cameras. Film a brick wall square on. Move forwards and backwards, closer to the wall and further away. Review and pay attention to any moving patterns in the brickwork.
•    If using a DSLR, replicate the rolling shutter effect. Do this either by gently swinging the camera from side to side or by shooting lampposts from the passenger side of a moving car.

Image plane rule recap Rule: Choose a sensor that most closely resembles the dimensions of an S35 image plane, to replicate a similar cinematic depth of field.

11 ASPECT RATIO

In order to compose your shots in a way that makes the most effective use of the available space, you should consider the following rule: Rule: Consider the stylistic or creative use of aspect ratios when composing your image, using onscreen guides to help compose shots for your chosen aspect ratio. All of the different aspect ratios are available to you as a digital filmmaker, but they will require you to compose your shots with your chosen aspect ratio in mind, and the intention of cropping or masking out the unneeded areas with black bars, during post-production. This chapter will explore some of the different aspect ratios available and explain why they exist.

Aspect ratios
The world of aspect ratios is rich and fascinating. An exploration of this area can take you through the full history of filmmaking processes and formats. A lot of the information in this chapter can be read in tandem with information from the previous chapter, 10 Image plane, and the next chapter, 12 Resolution. Aspect ratios describe the relationship between the width of an image and the height. They could also be described as the screen shape. Aspect ratios can be long and wide, but they can also be less wide and more square-looking.

FIGURE 11.1  Aspect ratios – 1.33:1, 1.37:1, 2.66:1, 2.39:1, 1.85:1, 4:3, 16:9.

Sources: Film images: frame from The Passion of Joan of Arc 1928. Directed by Carl Theodor Dreyer; frame from Modern Times 1936. Directed by Charlie Chaplin; frame from White Christmas 1954. Directed by Michael Curtiz; frame from Unforgiven 1992. Directed by Clint Eastwood; frame from I’m Alan Partridge 1997. Directed by Dominic Brigstocke; frame from Breaking Bad 2013. Directed by Michael Slovis. Graphics copyright © 2018 Sam Pratt.

They can be expressed as a ratio such as 4:3, 16:9 or 1.33:1, 2.39:1, and sometimes as a single decimal number, such as 1.33 or 2.39. A 1:1 aspect ratio is a square. It consists of one equal measurement across and one equal measurement up and down. Not many films have been released using a 1:1 aspect ratio, but there have been some. The aspect ratio is completely independent of the resolution (although you can determine the aspect ratio of a film by knowing the number of pixels horizontally and vertically). A 1:1 aspect ratio could be 10 × 10 pixels or 1,000 × 1,000 pixels and it would still be the same aspect ratio. Resolution is not inherent in the aspect ratio; it is only the screen shape.
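Because the ratio is just width divided by height, the same arithmetic works at any pixel count, which is a quick way to see why resolution and aspect ratio are independent:

for width, height in [(10, 10), (1000, 1000), (1920, 1080), (3840, 2160)]:
    print(width, "x", height, "->", round(width / height, 2))
# 10 x 10 and 1000 x 1000 both give 1.0; 1920 x 1080 and 3840 x 2160 both give 1.78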

Early film aspect ratios
Film aspect ratios (or screen shapes) developed first out of practical concerns, such as the shape of the available image plane of the film stock, and then moved on to more aesthetic considerations, such as how to improve quality (greater resolution and smaller visible grain) while providing a wider, more immersive viewing experience. The different aspect ratios exist because throughout film history filmmakers and technicians have experimented and developed the film form. Today, film types have settled on widely adopted standards, but we still live with the legacy of the variations that developed during the century in which cinema was developing.

1.33:1 – silent films
The 1.33:1 aspect ratio is the first aspect ratio. It is the ratio of width by height of the image plane of the 35mm film. This uses the maximum available celluloid shooting at four perforations high. This was primarily used for silent films.

1.37:1 – the Academy ratio
With the introduction of recorded sound into cinema, space needed to be given on the celluloid for an optically printed soundtrack. The top and bottom of the image was masked off to preserve a similar aspect ratio to the silent film aspect ratio 1.33:1. The two aspect ratios are so similar that they are often considered the same thing. The American Academy of Motion Picture Arts and Sciences1 standardised the film format and aspect ratio in 1932 and named it the Academy ratio.

Widescreen aspect ratios
When television was introduced in the 1950s, it was natural to keep the 1.37:1 screen shape that film had developed. Television changed the habits of cinemagoing audiences, who instead of visiting their local theatre, opted to stay at home and watch television. Film studios wanted to offer something extra to audiences, and so put time and money into developing widescreen film formats, which would provide extra spectacle and encourage audiences to visit the cinema to experience something they could not get at home.

2.66:1 – Cinerama
Originally created as a means of training anti-aircraft personnel during the Second World War, Cinerama was developed in the early 1950s by American inventor Fred Waller. Cinerama was a novel cinema capture and presentation system designed to thrill audiences by projecting large enveloping images onto a huge 146° curved screen, which mimicked the radius of the human eye. Waller wanted to provide an experience that approximated human peripheral vision. Simply projecting one image onto a larger screen would have meant a degradation in the resolution of the image and the presence of enlarged film grain. Instead, the Cinerama system used three interconnected cameras, each shooting vertical 35mm celluloid in frame-precise synchronicity. On distribution, the system used three projectors to create an immersive viewing experience. The Cinerama process was initially turned down by the major film studios due to the complicated setup and the extra cost, so Cinerama Productions Corp. set up production of its own content, focusing mainly on documentary and travelogue productions. This Is Cinerama (1952) was the first of these films, designed to showcase the immersive potential of the new format. In 1962, MGM produced How the West Was Won, co-directed by John Ford, which was intended to showcase the potential of the Cinerama process in dramatic narrative cinema. The process had limitations as every shot was effectively a wide shot, meaning the usual cinematic vocabulary of close ups and mid-shots was not possible to attain. Apparently, John Ford was frustrated by the production process, but audiences enjoyed the film and it was a success. However, the difficulty in presenting the film on its original Cinerama curved screen proved too expensive and cumbersome to warrant the production of other studio films using the three-camera Cinerama process.2 How the West Was Won was reformatted to the even wider 2.89:1 aspect ratio for the non-Cinerama release. Though the Cinerama process was soon replaced by cheaper alternatives (such as anamorphic Cinemascope), the widescreen concept that had proved so popular was carried forward by the major film studios, which each pioneered a different, cheaper widescreen format of their own. The legacy of Cinerama is that it started the race to widescreen, which is evident in virtually every film and television programme made today.

2.39:1 – anamorphic
Cinerama had whetted everyone’s appetites for widescreen movies, but anamorphic Cinemascope was the format that truly popularised widescreen filmmaking. The anamorphic system was introduced in the 1950s as a cost-efficient way of achieving a widescreen image that is still of a sufficiently high resolution when projected on a large screen. The anamorphic process was successfully packaged under the name Cinemascope3 (or ‘scope’) but was later replaced by the Panavision system and others. Billy Liar (1963) is a British social realist film that used the Cinemascope anamorphic film process. This was perhaps an odd choice of film format for a realist film due to the anamorphic’s reputation for slightly distorting and stylising the image. The anamorphic process is designed to use the readily available 35mm celluloid film stock at the regular vertical orientation, but making use of the full height of the four-perforation celluloid area. The technique uses special anamorphic lenses that squash the image horizontally. The resulting celluloid looks distorted, but is designed to be un-squashed on projection. The lens system4 used a 2:1 optical compression, which meant that the resulting aspect ratio when unsqueezed (and taking into consideration space left for the optically printed soundtrack) was 2.39:1. The aspect ratio 2.39:1 is often referred to as 2.40:1 or 2.35:1. Typically, these all refer to the same 2.39:1; 2.40:1 is a less accurate representation of the 2.39:1 ratio, and the 2.35:1 was an older SMPTE standard from the 1970s. Anamorphic lenses are becoming popular amongst digital cinematographers and some DSLRs now have anamorphic shooting modes.

1.85:1 – VistaVision
VistaVision was created to provide high-quality images that still had a high resolution when presented in a widescreen format. Engineers at Paramount Pictures decided to use the readily available 35mm film stock but create cameras that ran the film through horizontally, rather than vertically, which was the usual orientation for 35mm film. This resulted in a much higher resolution with much smaller visible grain when compared to a similar aspect ratio shot using a single piece of celluloid and cropped to the widescreen shape. The surface area was much greater. The VistaVision format was very popular with filmmakers such as Alfred Hitchcock, who shot North by Northwest and Vertigo using the system. The format used a lot of celluloid film stock however, and was ultimately made redundant by the more cost-effective anamorphic film process.

TV aspect ratios
While we have only scratched the surface of the different celluloid film aspect ratios, there are only really two aspect ratios in the video domain. They are 4:3 (sometimes written as 4 × 3) and 16:9.

4:3
The original television aspect ratio is 4:3 and this was developed to be the same as the 1.37:1 Academy aspect ratio. The 4:3 television aspect ratio is a measurement of four equal parts across and three equal parts up and down. This is the same shape as the Academy ratio. SD television programmes were filmed in the 4:3 aspect ratio.

16:9
When the new HD standards were being designed in the late 1980s, the aspect ratio of 1.78:1 or 16:9 was suggested as a compromise between the two most common aspect ratio extremes: the 4:3 aspect ratios of the past (film and television) and the wider 2.39:1 cinema aspect ratio. Content from either aspect ratio would take up a similar amount of screen space when formatted for the 16:9 frame. Television and video aspect ratios and resolutions have been standardised and show no signs of changing anytime soon. Therefore 16:9 is the aspect ratio of the present and the future. All HD, 4K UHD and even 8K UHD screens and cameras use this aspect ratio. Breaking Bad (2008–2013) is a television show that brought a cinematic production value to the small screen. As much as the programme looks like a film, it uses the television 16:9 (1.78:1) aspect ratio because it was created for television distribution, both transmitted and on DVD and Blu-ray disc. Breaking Bad was shot on Super 35 film, but was formatted to the 16:9 shape because it was not created to be projected in a cinema.

FIGURE 11.2  16:9 with 4:3 and 2.39:1 aspect ratio overlays. Source: Copyright © 2019 Jonathan Kemp.

Every digital camera you use now (unless it is very old) will shoot a 16:9 image as standard and some more cinematic cameras will also have the option to shoot at a slightly wider aspect ratio, to give added flexibility to scale or reframe an image in post-production.

Pillarbox and letterbox
When shooting with a digital camera (unless using anamorphic lenses and shooting mode), the only way we will be able to utilise aspect ratios that differ from the native 16:9 ratio of digital video will be to matte or crop parts of the image to change the screen shape. This is very easy to do in an NLE system and will not lead to any perceptible loss in resolution as the image is not being stretched or scaled up. The matte or crop is simply removing part of the image. A 4:3 image placed in the 16:9 frame uses ‘pillarbox’ black bars to obscure the sides of the image. A 2.39:1 image placed in the 16:9 frame uses ‘letterbox’ black bars to obscure the top and bottom of the image. As easy as it is to add a matte to an image in post-production, it will not work well if you did not compose the shots for the intended aspect ratio. It is vital that you have decided on an aspect ratio before shooting and that you use a guide to help with composition. Some cameras have aspect ratio guides to help you compose your shot. I have in the past attached bits of paper to cameras that didn’t have an aspect ratio guide. With a visual aspect ratio guide, you should arrange the elements of the shot so that they are contained within the marked area. The image that is recorded will still be a full 16:9 image with no visible guides, but all the important elements of the shot will be contained within the marked aspect ratio area.
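The matte sizes themselves are simple arithmetic. As a sketch, for a 1920 × 1080 (16:9) frame:

frame_w, frame_h = 1920, 1080

# Letterbox: a 2.39:1 image at full width, with bars top and bottom.
scope_h = frame_w / 2.39
print(round(scope_h), round((frame_h - scope_h) / 2))   # image ~803px tall, bars ~138px each

# Pillarbox: a 4:3 image at full height, with bars left and right.
four_three_w = frame_h * 4 / 3
print(round(four_three_w), round((frame_w - four_three_w) / 2))   # image 1440px wide, bars 240px each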

What aspect ratio to shoot?
I consider that there are four main aspect ratio options for us to use as standard. You can, of course, use any aspect ratio you would like, based on a pre-existing aspect ratio or one of your own creation. I prefer to use the standard aspect ratios though.

4:3

The 4:3 (Academy) aspect ratio can be used for a nostalgic look. I imagine it was an easy decision for the filmmakers of the Oscar-winning The Artist (2011) as they decided on the Academy ratio for their period film. Some filmmakers choose a 4:3 aspect ratio for films set in modern day. British filmmaker Andrea Arnold has shot the majority of her critically acclaimed films in the 4:3 aspect ratio because she thinks that it results in a film that is more personal, less concerned with the scenery and more focused on the person in the frame: I absolutely loved using it [4:3] in ‘Fish Tank,’ and we tested a lot on ‘Wuthering Heights.’ We shot some video, and then some film. We tried different stocks and things. With some of that original stuff, we had we shot it straight without a matte and projected it, and it was just the film in Academy ratio. I didn’t think it was something that would come up, but when it did, I was like, ‘Oh my God, that looks so beautiful.’ I loved it all over again, even though I knew it would be a provocative choice. It’s a film with a lot of landscapes, so everyone expects you to use a wide screen. I’ve thought about it quite a lot since, and I think why I like it is because my films are mostly about one person. I’m following that one person and I’m keen on that one person. It’s a very respectful and beautiful frame for one person. It gives them a lot of space. You can frame one person in a 4 × 3, and it gives them a lot of – I don’t know – humanity? I’m not sure. I like it as well because it’s the whole negative, and you’re not cutting anything off. Mostly what everyone’s doing it cutting off the top and the bottom, and I love that we don’t do that. We’re using the 35mm film negative and you get more information. It gives you more headroom, and you get quite a lot of sky. Those moors are very green, and if I shot a landscape, I figured the sky would be changing all the time. But that’s not the real reason, I suppose, it’s more of a justification. You know what I mean? You try and justify what you do, but sometimes you just love it and it’s hard to understand why.5

16:9
The 16:9 frame is used as a standard for television programmes. As such it can be used very successfully for documentary. This aspect ratio seems to not be used very often for cinematic films, probably because of the televisual association. It was used well in Hidden (2005), a film that uses television and video as a story component.

1.85:1 – normal widescreen
The 1.85:1 is a widescreen aspect ratio that is often used in cinema. Notice that The Selfish Giant (2013) has a slight letterbox when formatted for the 16:9 screen shape.

2.39:1 – Cinemascope widescreen

The 2.39:1 aspect ratio is the widest of the most commonly used aspect ratios. The widescreen shape works well with dramatic landscapes and conveys expansive scenery well. This aspect ratio has been used a lot in recent cinema history, because of the popularity of anamorphic lens systems and so for many people it creates an instantly cinematic impression. Clint Eastwood and cinematographer Jack N. Green used Panavision’s anamorphic process to create beautiful 2.39:1 landscape imagery in Unforgiven (1992).

Activities
Explore the available options for using aspect ratio markers to help you compose your shots, in an aspect ratio different to the 16:9 standard. Also:
•    Watch films and television programmes paying attention to the aspect ratio.
•    Go onto the IMDB ‘Technical Specs’ page of your favourite films and notice the aspect ratios.
•    Go to the website film-grab.com6 and browse the catalogue of films by aspect ratio.
•    Discover whether your camera has aspect ratio markers. If so, use them.
•    Context and story are important considerations when deciding on the aspect ratio, so consider what is the most appropriate screen shape to tell your story.
•    Consider the implicit and explicit implications of an aspect ratio. Consider the focus of the story.

Aspect ratio rule recap Rule: Consider the stylistic or creative use of aspect ratios when composing your image, using onscreen guides to help compose shots for your chosen aspect ratio.

Notes 1    The Academy of Motion Picture Arts and Sciences host the Academy Awards, otherwise known as the Oscars. 2    However, the Cinerama brand continued, including the sci-fi classic 2001: A Space Odyssey (1968), which was presented in 70mm film on a Cinerama curved screen.

3    Cinemascope was considered by some to be the poor man’s Cinerama. 4    Films that are shot using an anamorphic system today can be unsqueezed in an NLE system and then used as normal footage. 5    Andrea Arnold – http://awisemanlives.tumblr.com/post/33463045402/andrea-arnold-on-shooting-in-43. 6    https://film-grab.com/aspect-ratio/.

12 RESOLUTION

The fundamental rules that you should consider to achieve the highest quality images are: Rule: Use the highest resolution possible that you have the capacity to store and work with both on production and in post-production. Rule: Consider ‘over scanning’ the image and shooting for a slightly wider composition for added flexibility in post-production. This chapter will look at some of the fundamentals of resolution and also some of the options available to us.

Resolution
The word resolution refers to the potential definition of an image (in this context at least). The quest throughout all of film history has been to achieve the maximum definition of the image. Another way to think about resolution could be with a question: ‘How large can I project or print this image and it still look sharp and well defined?’ We are going to break with the usual format and start with the digital technology and look at how it compares with the celluloid at the end of the chapter.

Pixels

Digital images are broken down into individual picture elements, or pixels. The photosites on a digital imaging sensor are capable of creating pixels that can be any colour within a particular colourspace, using a combination of red, green and blue elements. The number of photosites that an imaging sensor (in a camera) uses to create the individual pixels is a finite measurement of the resolution capabilities of the device. The more pixels that are recorded, the higher the resolution of that image will be. All things being equal, the larger, higher resolution image is usually the better option, in terms of the flexibility that a large image will give an editor. A large image can be resized to fit a smaller screen without a loss of quality, but a small image will become blurry and pixelated if it is increased in size. The pixel count of an image is usually expressed as two numbers, such as 1920 × 1080. This means that the image has 1920 individual pixels across and 1080 pixels up and down: 1920 × 1080 = 2,073,600

So a 1920 × 1080 pixel image contains 2,073,600 pixels, or 2.073 megapixels.

Pixel aspect ratio
Just as a film or video image has an aspect ratio (or screen shape), pixels also have an aspect ratio. In most cases, pixel aspect ratios are 1:1, or square. Some cameras produce images that have rectangular-shaped pixels, such as 1.458:1.

FIGURE 12.1  Rectangular pixel aspect ratio 1.458:1. Source: Copyright © 2019 Sam Pratt.

So why do some cameras produce square pixels and others produce non-square pixels? The reason comes down to that familiar equation: quality vs cost. This also tends to be a legacy issue, and so does not affect video created in HD or beyond. Rectangular pixels were used primarily as a way to horizontally stretch a standard definition 4:3 aspect ratio video image to fit a 16:9 screen without the need for extra pixels. This would accommodate the wider image without increasing the number of pixels needed, meaning file sizes stayed the same.

Standard definition – SD
SD images are the old standard of television broadcasting and video. Non-HD television channels are broadcast in SD, as are VHS cassette tapes and DVDs. There is a lot of variation in the SD category, some of it depending on where in the world you are.

PAL – Europe

The SD resolution in PAL territories is 720 × 576 pixels, which produces a 0.414 megapixel image. For older programmes that used a 4:3 aspect ratio the pixel aspect ratio is 1.094:1, which is almost square.

FIGURE 12.2  SD frame size comparison. Source: Copyright © 2019 Sam Pratt.

For newer SD programmes produced in widescreen 16:9, the pixel aspect ratio is 1.458:1, which is more rectangular. This rectangular pixel aspect ratio stretched the 4:3 image into a 16:9 shape without using any more pixels. The SD widescreen pixel count equivalent using square pixels is 1,024 × 576. You can see that the 720 horizontal pixels have been effectively stretched out by the rectangular pixels over the space normally occupied by 1,024 square pixels. It can be complicated, but, essentially, the rectangular pixels constitute a method for creating a widescreen 16:9 image out of an almost square 4:3 image, without using any more horizontal pixels. It was a fudge, and reduced the clarity of the image slightly.

NTSC – America

The same is true in the American NTSC system, only with slightly different pixel dimensions. The SD resolution in NTSC territories is 720 × 480 pixels, which produces a 0.345 megapixel image. For older programmes that used a 4:3 aspect ratio the pixel aspect ratio is 0.909:1, which is almost square.

FIGURE 12.3  Frame size comparison – SD, HD, UHD. Source: Copyright © 2019 Sam Pratt.

For newer SD NTSC programmes produced in widescreen 16:9, the pixel aspect ratio is 1.212:1, which is more rectangular. This rectangular pixel aspect ratio stretched the 4:3 image into a 16:9 shape without using any more pixels. After being stretched to widescreen using a rectangular pixel aspect ratio of 1.212:1, the effective square pixel equivalent of an NTSC SD widescreen image is 872 × 480.

High definition – HD
There are a few different resolutions that can be classed as HD.

‘Standard’ HD
‘Standard’ HD is a bit of an anomaly that was originally a halfway house between SD and ‘Full HD’. The branding ‘HD ready’ was displayed on early HD televisions, indicating the set would accept a full HD image, but display a slightly smaller image. The ‘Standard’ HD resolution is 1280 × 720 pixels, which produces a 0.921 megapixel image. This resolution is popular for internet use, as the files are higher resolution than SD but easier to upload and stream. The pixel aspect ratio is square, which results in a standard 16:9 image. This resolution is also known and referred to as ‘720’ on cameras and websites.

HDV
High Definition Video (branded HDV) was an early forerunner of HD camcorders. It is not quite as good as full HD. HDV uses the pixel dimensions 1440 × 1080, which would produce a 4:3 image with square pixels. However, the system uses rectangular pixels of 1.33:1, which stretches the image horizontally and results in a 16:9 image.

High definition – HD
High definition is the current standard of image resolution. It is often referred to as ‘Full HD’ to differentiate it from ‘Standard HD’ (720). Most cameras, even cheaper ones, will record in HD quality now. The HD resolution is 1920 × 1080, which produces a 2.073 megapixel image. This is the resolution of HD television channels and Blu-ray discs. The pixel aspect ratio is square, which results in a standard 16:9 image. I dare say nothing is produced at a lower resolution than this nowadays, even though the majority of television channels still broadcast in SD (the non-HD ones). Programme makers simply downscale their content to fit the SD screen, while retaining the option to also produce a ‘Full HD’ version.

Ultra high definition – UHD
UHD is the emerging standard for better than HD images. There are currently two different variations of UHD: 4K and 8K.

UHD 4K
UHD is the future standard of image resolution. This is also often referred to as 4K, referencing the near four thousand lines of resolution along the width of the image. The UHD resolution is 3840 × 2160, which produces an 8.294 megapixel image. This is the resolution of some streaming content, from the likes of Netflix and the BBC, as well as 4K UHD Blu-ray discs. The pixel aspect ratio is square, which results in a standard 16:9 image. Currently, a lot of digitally shot cinematic content is finished at UHD resolution. While the trend among television manufacturers is to sell higher and higher resolution-capable television sets to customers, on a standard, large home television set, at a standard viewing distance, the benefits of UHD are often not that noticeable when compared with HD. UHD televisions often come packaged with other benefits, such as high dynamic range images (HDR10 or Dolby Vision) and a wide colour gamut (Rec. 2020), which are often more noticeable to the average viewer. These extra benefits that the UHD platform is providing may lead people to believe that the extra pixels are causing a noticeable increase in quality, whereas in reality the high dynamic range and wider colour gamut that come with the UHD may be providing the most noticeable benefits.

UHD 8K
UHD 8K is the future standard of image resolution that is beyond regular UHD. Currently, there are only a few of the most expensive cinema cameras and a couple of pioneering television sets that feature UHD 8K.

The UHD 8K resolution is 7680 × 4320, which produces a 33.177 megapixel image. The pixel aspect ratio is square, which results in a standard 16:9 image. The benefits of 8K images may never be noticeable to the consumer, even on enormous television sets. UHD 8K images may be of noticeable benefit to cinemas or other large screening venues.

Cinema 2K, 4K, 5K, 6K
The HD and UHD formats all deliberately conform to the 16:9 screen aspect ratio. There are a number of cameras that will record high resolution images that do not exactly match the standard 16:9 aspect ratio output. The Cinema 2K image (C2K) is slightly wider than HD and is the standard resolution for creating DCPs (digital cinema packages) that can be played back on digital cinema projectors. The C2K resolution is 2048 × 1080, which produces a 2.211 megapixel image. Similarly, the Cinema 4K image (C4K) is designed to be slightly wider than the UHD frame, for the same reason. The C4K resolution is 4096 × 2160, which produces an 8.847 megapixel image. The same is true of the 5K, 6K and 8K images. These cinema images have an aspect ratio of roughly 1.9:1, which is a slightly wider aspect ratio than 16:9 (1.78:1).

FIGURE 12.4  Cinema frame size comparison. Source: Copyright © 2019 Sam Pratt.

Over scan
There is also a trend for filmmakers to shoot at a higher resolution than the intended final outputted version. This process is known as over scanning the image. For example, David Fincher shot Gone Girl (2014) using a 6K Red Epic Dragon1 to be finished at 4K and UHD. He deliberately shot the movie often using locked-off tripod shots that were composed slightly wider than the intended frame size. In post-production, the images could be repositioned as well as adding simulated handheld camera shake and other motion effects that were not present in the original image.2 This technique also allowed Fincher to be able to blend performances from different takes (the camera position had not changed) and then scale in and add camera shake to make the shot seem more spontaneous and documentary like. Over scanning gives extra options to filmmakers, as described above, as well as greater room for stabilisation and for visual-effects work, and can even improve colour resolution. On a more modest level, filmmakers will often shoot a 4K interview for output at 1080 resolution. This large over scan (the 4K frame is twice the width and height of the HD frame, four times the pixel count) gives the ability to shoot wide, then digitally scale in during post-production, effectively giving the filmmaker two shot sizes for one: a mid-shot and a close up in the same take. This type of shot could also use animation keyframes to simulate a zoom from the wide to the close up in post-production.
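In linear terms, the reframing headroom is simply the ratio of capture width to delivery width. A quick sketch:

print(3840 / 1920)   # shoot UHD, deliver HD: 2.0x maximum punch-in without upscaling
print(2048 / 1920)   # shoot Cinema 2K, deliver HD: about 1.07x, a small reframing margin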

Which resolution?
Camera technology is advancing to the point where almost anyone can produce UHD images. Even a bottom-of-the-range iPhone will shoot a UHD image. The cheap availability of UHD sensors and screens means that manufacturers are trying to gain an edge over their competition by increasing pixels on cameras and television sets faster than the industry can keep up. This leads to a strange situation where we currently have the majority of television channels still broadcasting in SD, but we have relatively cheap UHD television sets, UHD Blu-rays and UHD streaming through Netflix and even the BBC iPlayer. My personal philosophy is that I would never want to shoot anything in a lower resolution than HD (1920 × 1080) anymore, even if it were to be broadcast on SD television. I like the flexibility that over scanning provides; even just shooting at Cinema 2K (2048 × 1080) gives extra reframing options in post-production without a loss in quality. Given the option, I would prefer to shoot at Cinema 4K (4096 × 2160) for output at UHD (3840 × 2160) and HD (1920 × 1080) and also output to SD (for DVD, at 720 × 576 with a pixel aspect ratio of 1.458:1). One thing to note though is that all things are not equal when it comes to cameras. I would not be happy to shoot 4K on an iPhone in favour of an HD cinema camera with a choice of lenses that recorded to a high-quality codec. Everything is a balancing act and you must navigate your way through your options, making the best decisions you can on a case-by-case basis. Not all projects are destined to be seen on a big screen; not all projects justify so much time and expense in storing and processing large files.

Resolution of celluloid
I have chosen to explore the digital film and video alternative to film before the celluloid on this occasion. It made more sense to me to understand digital image resolution before we talked about how that relates to celluloid.

The big question is: what is the comparative resolution of celluloid film? With celluloid film, resolution is directly related to the size of the film plane. The larger the film plane, the higher the potential resolution. So 65mm film is a much higher resolution than 8mm film, and looks clearer when projected onto a large screen. The 65mm film is the pinnacle of quality in cinematic images, even when compared with the best digital cameras available today. Since the majority of cinema has been shot with 35mm or S35 film, we should think about the resolution of this size of film stock. How does this potential resolution compare with the latest 4K and 8K cameras? Celluloid film stocks do not have pixels. Pixels are digital and immovable, whereas film is analogue and flexible. There are no defined numbers attached to the resolution of film. However, film can be digitally scanned and video files made from the scans. These scans are at specific resolutions, and the point at which no additional benefits can be gained from a higher resolution is the point at which the resolution could be seen to be comparable. Kodak apparently estimate that their 35mm film stock has the equivalent resolution of a 6K image,3 although there is some debate about that. Based on this assumption an IMAX image is considered to be the equivalent of a 12K image. Film-production companies with classic movies in their catalogue have been scanning their movies at 8K and downscaling them for commercial release. In 2009, Warner Bros. scanned Gone with the Wind (shot on 35mm) at 8K for the seventieth anniversary release.4 I find it ironic, then, that this 100-year-old technology is better than the most current and expensive camera technology today. Film enthusiasts with 35mm film prints and projectors have been watching better-than-UHD images in their garages for decades.

Archiving on celluloid
This part is undoubtedly out of the realms of the non-professional, but interesting nonetheless. A film company’s most precious asset is its back catalogue. The catalogue of films brings in a continued stream of revenue for the company as it continues to add to its archive with new films. The question arises: what is the best way to store a movie long term?

How can a company producing a film completely digitally today ensure that the film will be able to be copied and presented on the very latest technology in, say, 100 years’ time? Digital storage systems change rapidly and older systems become unreadable by the newer replacements. This means that to store large amounts of digital information long term, an archival strategy is needed. This strategy must migrate data (the catalogue of films in their highest possible resolution) every so often to make sure that the storage system is current and readable. Not to mention that hard drives fail regularly and seize up when not used often. This migration and management are costly. With celluloid film, the negative needs to be stored in the right climate, regarding temperature and moisture levels, but it is a very good long-term archival medium. Hence why old films such as Gone with the Wind can be rescanned all these years later and a 4K UHD Blu-ray made from them. One option that modern digital film companies have is to transfer their digital creations onto celluloid optically. This means that the movie can be preserved for a long time and brought out for rescanning when needed in the future. Use of celluloid film in film production as a capture medium may be dwindling, but it is still regarded as the best long-term storage medium for those who are serious about maintaining their legacy and catalogue of films into the coming centuries.

Activities
•    Determine the resolution options available to you with the camera that you have access to.
•    Experiment with over scanning. Even shoot 1920 × 1080 for a final delivery of 1280 × 720. This will give you the chance to experiment with over scan, making use of scale and repositioning options in post-production.

Resolution rules recap The fundamental rules that you should consider to achieve the highest resolution images are: Rule: Use the highest resolution possible that you have the capacity to store and work with both on production and in post-production.

Rule: Consider ‘over scanning’ the image and shooting for a slightly wider composition for added flexibility in post-production.

Notes 1    www.red.com/news/gone-girl-first-feature-film-shot-in-6K. 2    www.youtube.com/watch?v=2o6pjd2AU9c. 3    https://gizmodo.com/5250780/how-regular-movies-become-imax-films. 4    www.motionpictureimaging.com/2009/12/16/film-grain-now-gone-with-the-wind/.

13 LENSES

The fundamental rule for using lenses to create a cinematic image is: Rule: Always consider the correct lens choice for the purpose of the shot in question. This chapter will look into some of the fundamentals of lenses and how they work, as well as exploring some of the reasons why you might choose one over the other.

Lenses
Lenses are an essential component of any camera setup. Lenses consist of multiple layers of glass that concentrate light rays onto the camera’s imaging plane. Lenses are primarily categorised by their focal length and their speed. The focal length refers to the distance from the optical centre of the lens to the imaging plane. Focal lengths are rated in millimetres. The speed refers to how wide or narrow the iris/aperture of the lens will go, rated in f/stops. Lenses control four main aspects of the image: light reaching the image plane; field of view; optical compression; and the focus of an image.

Light
The aperture or iris is located within the lens itself. This is the f-stop value that we have looked at in previous chapters. This is one of the main exposure controllers.

FIGURE 13.1  Lens cross section. Source: Copyright © 2019 Sam Pratt.

The degree to which a lens will ‘open up’ is also determined by the lens. A lens that will open up a lot, and needs less light to be able to properly expose an image (with a low f-stop value, like f/1.2), is considered to be a ‘fast’ lens. A lens that will not open up as far (say to f/4) is a slower lens. It is generally preferable to have lenses that will open up to a nice wide aperture (low f-stop value) as this gives more flexibility and can mean that extra lighting equipment is not necessary. Fast lenses are also usually higher-priced lenses, due to the added expense of engineering the lenses to be efficient and allow more light to pass through to the image plane.

Field of view
The field of view (FOV) of a lens is a measurement of the width and height of the subject of a shot that will be captured, relative to the lens.

To put it another way: a wide FOV will capture a wide vista in a particular location, and a narrow FOV will be effectively zoomed in, capturing a much narrower part of that vista. The FOV of a lens is often expressed in degrees of a circle. The larger the number, the wider the FOV:

FIGURE 13.2  Field of view diagram. Source: Photo by Josh Felise on Unsplash.

•    100 degrees = wide FOV
•    12 degrees = narrow FOV
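For a simple (rectilinear) lens focused at infinity, the horizontal FOV can be estimated from the sensor width and the focal length with the standard formula FOV = 2 × arctan(sensor width ÷ (2 × focal length)). This is a general approximation rather than a figure from any lens manufacturer; the sketch below assumes an S35-width (24.9mm) image plane:

import math

def horizontal_fov(sensor_width_mm, focal_length_mm):
    # Standard thin-lens approximation for a rectilinear lens focused at infinity.
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

for focal in (18, 50, 135):
    print(focal, "mm ->", round(horizontal_fov(24.9, focal)), "degrees")
# roughly 69, 28 and 11 degrees respectively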

Optical compression
Different focal-length lenses provide different amounts of optical compression. This is how close objects in Z space (foreground and background) relate to each other. A large amount of optical compression would make foreground and background objects appear closer together, having the appearance of flattening the image. A small amount of optical compression would make foreground and background objects appear further apart, giving depth to an image.

Optical compression can be used for very practical reasons as well as aesthetic. An example could be a shot where somebody looks like they are close to being hit by a car. A lens with a lot of optical compression (a telephoto or long lens) could be used to compress the space between person and car and make it appear that the two objects are very close.

Focus
Lenses also control the focus of the recorded image. This is perhaps the most crucial part of the process and it is controlled by a ring on the lens.

FIGURE 13.3  Optical compression 1 (very compressed). Source: Copyright © 2019 Ethan Rogers.

FIGURE 13.4  Optical compression 2 (not very compressed). Source: Copyright © 2019 Ethan Rogers.

Photography lenses are designed to have a focus point set and usually left alone, so they can be trickier to adjust the focus of than lenses that are designed for film and video. Lenses that are more helpful to the cinematographer have a focus ring that can be accessed easily and adjusted (or ‘racked’) during a shot. In cinematography, because of the need to rack the focus sometimes multiple times during a shot, an extra piece of equipment called a follow focus is often attached to make the job of manually rotating the focus ring easier than reaching over the top of the camera and directly handling the lens.

Three main categories of lenses
There are three broad categories of lenses: wide-angle lenses; normal lenses; and telephoto lenses. A general rule of thumb is that the smaller the value in mm, the smaller the size of the lens and the wider the FOV. The larger the value in mm, the larger the lens and the narrower and more magnified the FOV.

Lenses also have an aperture built into them, which controls how much light is let through the lens, exposing the projected image onto the imaging plane.

Wide angle Wide-angle lenses have a wide FOV so they are good for using in a tight space. Wide-angle lenses also tend to warp or distort an image. This is particularly noticeable on straight lines, such as on this shot from 2001: A Space Odyssey. Wide angle lenses also appear to expand the perceived depth of an image. Z space (the third dimension) appears stretched compared with telephoto lenses.

FIGURE 13.5  Wide angle close up on face. Source: Copyright © 2019 Ethan Rogers.

FIGURE 13.6  Frame from 2001: A Space Odyssey 1968. Directed by Stanley Kubrick.
Lenses usually classified as ‘wide angle’ range from somewhere around 10mm to 30mm.

Normal Normal lenses have a much tighter FOV than their wide-angle counterparts. They are generally considered to be a good neutral focal length, which closely approximates the way we view the world. This is a good focal length for portraying people and faces.

FIGURE 13.7  Normal lens close up on face. Source: Copyright © 2019 Ethan Rogers.

Lenses usually classified as ‘normal’ range from somewhere around 35mm to 60mm.

Telephoto Telephoto lenses produce a narrow and magnified FOV. Telephoto lenses also appear to compress the perceived depth of an image (Z space), making things seem closer together. They also make faces appear flatter than the normal lens. Some filmmakers prefer this compressed look of a telephoto lens on faces.

Lens mount Interchangeable lenses have a specific mechanism or lens-mount configuration that ensures only compatible lenses can fit on a camera. Different camera manufacturers have different lens-mount systems that are designed for different sensor sizes. Some lens mounts also have electrical pins that connect and can communicate metadata (such as aperture) to the created video file. This electronic communication can also control aspects of the lens’s performance, such as aperture, focal length and focus. On compatible cameras and lenses, these functions can be done electronically. On

some lenses, aperture and focus must be operated manually, by rotating wheels on the body of the lens. Some common types of lens mount are shown in Table 13.1.

FIGURE 13.8  Telephoto lens close up on face. Source: Copyright © 2019 Ethan Rogers.

TABLE 13.1  Types of lens mount
Name | Manufacturer | Compatible cameras
PL (positive lock) | Arriflex | S35 film cameras and their digital equivalents: Arri Alexa, Blackmagic Ursa et al.
EF (electro focus) | Canon | Standard Canon EOS cine and stills cameras; also Arri Alexa, Blackmagic Ursa et al.
EF-S (small image circle) | Canon | Canon cameras with an APS-C-sized sensor: 7D, 60D et al.
MFT (Micro Four Thirds) | Olympus and Panasonic | Some Blackmagic Design cameras, Panasonic, JVC etc.
E-mount | Sony | Sony mirrorless cameras (with a shorter flange distance)
A-mount | Sony | Sony DSLRs (cameras with a mirror system)
B4 | Broadcasting Technology Association | Broadcast cameras
F | Nikon | Nikon DSLRs (cameras with a mirror system)

Prime lenses Prime lenses are lenses that only have one focal length. Changing to a different focal length using prime lenses means physically removing the lens and attaching a different prime lens with a different focal length. Performing a zoom is not possible with a prime lens as the focal length is fixed. Prime lenses also tend to be naturally faster (open up to a wider aperture) than similarly priced zoom lenses.

Zoom lenses Zoom lenses have the option to select a number of different focal lengths, depending on the specifications of the lens. Some zoom lenses can be altered from a wide angle to a normal angle. Some zoom lenses can be altered from a normal angle to a telephoto angle. The fixed (not removable) lenses found on a typical camcorder will usually range from a wide to a telephoto focal length, as is often indicated by the letters W and T on the zoom controls. Zoom lenses contain more layers of glass than a prime lens, and so there are more obstacles to light passing freely through the lens. As a result, zoom lenses tend to be slower than a prime lens in the same price range. Faster zoom lenses are possible to acquire, but are more expensive.

Fixed lenses A lot of camcorders have fixed lenses. These are usually zoom lenses that cannot be removed. Although the lens cannot be removed, it is usually fairly fast and has a good range of focal length options using the zoom.

Contrazoom

There is a cinematic trick (which is difficult to demonstrate in still images) that makes the background of a shot appear to be expanding or compressing around and behind a subject. A quick search for video examples online will bring up plenty of examples. The technique is called a contrazoom, or a dolly zoom, and is used famously in the memorable scene from Poltergeist (1982) shown in Figure 13.9. Notice that the character remains more or less the same size and position in the frame while the distance between the character and the door is increased.

FIGURE 13.9  Contrazoom – frame from Poltergeist 1982. Directed by Tobe Hooper.
As well as being a dramatic look, and an interesting trick to try yourself, this technique demonstrates the differences in FOV and optical compression between wide and telephoto lenses. The technique involves a track and dolly (a moving platform and track for the tripod and camera) and a zoom lens:
•    The start position is with the camera physically far away from the person or object in the shot, and the lens set to telephoto.
•    The end position is with the camera physically close to the person or object in the shot, and the lens set to wide.
•    Care should be taken when moving from the start to the end position that the person or object in the shot remains the same size and position in the frame.
The camera operator and the person moving the platform on the track should work together to disguise the movement of the camera. The intended effect is that the camera and subject of the shot are not moving, but only the background is moving.

Famous focal lengths The following are just a couple of examples of filmmakers who favour specific focal lengths and tend to work with them more than others.

Wide – Terry Gilliam Director Terry Gilliam is well known for his love and use of wide-angle lenses. Sometimes they are used to purposefully distort an image and make it unreal. In The Fisher King, Gilliam uses the wide-angle lens to enable close proximity to the action and still see the full extent of the characters’ bodies. This could have been achieved with a longer lens, but the camera would have had to be moved further away to accommodate the full scene. In moving the camera further away, the feeling would have changed and the scene would appear to be observed from a distance as opposed to being close to the action.

50mm lens – Yasujirô Ozu

Japanese film director Yasujirô Ozu preferred to use a 50mm lens (with an Academy 35mm-sized frame) because he liked how it felt like real life to him. He is supposed to have used the lens almost exclusively, even going to the trouble of having sets specifically built with the 50mm lens in mind. The sets would have had to be built with enough room to move the camera back far enough to accommodate wide shots. Ozu preferred to use the 50mm (normal) lens because on a 35mm camera system the 50mm approximates a look that is considered to be close to how we experience the world with our own eyes. Certainly to Ozu, the look created by the 50mm lens felt realistic, so he used it a lot.

Long lens – Ken Loach British social realist filmmaker Ken Loach prefers the camera to be at a distance from his subjects, as though the camera is someone observing a scene, far enough away to be a casual observer. He often uses a long lens to create this feeling in the audience.

Specialty lenses Fish eye A fish-eye lens is a very wide-angle lens that has a very wide FOV, and has the effect of distorting the subject. This kind of distortion, particularly on a face, is usually considered unflattering and is often used to denote a deteriorating mental state.

Macro Most lenses are designed to focus from about 30cm in front of the lens all the way to infinity (∞) in the distance. On the occasion that you would like to focus on something extremely close to the lens (on something small like an insect) then a special lens must be used. Macro lenses are designed to focus on things that are very close to the lens. It is possible to change a zoom lens into a macro lens very cheaply using a macro-lens extension ring, which fits between the lens and the camera. The

extension ring moves the lens further away from the image plane changing the magnification and enabling macro photography with non-macro lenses (do your research before buying one).

Crop factor So far I have been careful not to give too many definite values in terms of whether any specific mm lens will be wide, normal or telephoto. This is because a specific focal-length lens, let us say a 50mm lens, will give a different FOV depending on the size and aspect ratio of the imaging sensor that is capturing the image. When professional cinematographers or directors talk about specific lenses, they are usually working within the context of a S35 imaging plane and a PL-mounted lens. If you were to take that 50mm PL-mounted lens for a S35 camera and somehow attach it to a smartphone camera (with a much smaller imaging sensor), the resulting FOV would be very different. The FOV would be more akin to a telephoto lens for the smartphone. This is because the smartphone would only be capturing a small square in the middle of the circular image that the 50mm PL-mounted lens was projecting towards the phone. This difference in FOV, depending on the size of the imaging sensor, is known as the crop factor, and is a very common issue that digital cinematographers face, because of the wide array of different lens systems and sizes of imaging planes available and in common use.

FIGURE 13.10  Crop factor – S35 and MFT sensor comparison. Source: Photo by John Fornander on Unsplash.

Mathematically, the way to identify the relative focal length when you know the crop factor is to divide the focal length by the crop factor of the image size you want to compare it to. So, for example, if I want to achieve a classic S35 28mm lens look on a MFT camera, I need to divide 28mm by the crop factor (which is 2) to determine the equivalent focal length of 14mm: 28 ÷ 2 = 14

The 14mm lens on a MFT camera system will look the way a 28mm lens looks on a S35 camera, at least in terms of the FOV.
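As a minimal sketch of that arithmetic (the function name is invented for illustration, and the crop factor of 2 for MFT is taken from the example above rather than from any manufacturer's specification):

```python
def equivalent_focal_length(s35_focal_length_mm, crop_factor):
    """Focal length needed on a smaller sensor to match the FOV of a lens on S35
    (S35 is the reference here, so its own crop factor is 1)."""
    return s35_focal_length_mm / crop_factor

print(equivalent_focal_length(28, 2))  # 14.0 - the 28mm example above on MFT
print(equivalent_focal_length(50, 2))  # 25.0 - a 50mm S35 'normal' look on MFT
```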

Equivalent to 35mm or S35? There is a lot of information about crop factors and crop-factor calculators available on the internet, which can be a very useful way to engage with the subject. One thing to bear in mind though is that some base the crop factor on stills 35mm (which is larger than S35) and not motion S35. It is important that we base our crop factors on the S35 image size, since that is our frame of reference. We want to be able to reference Gilliam, Ozu and Loach et al. and not the work of stills photographers. The crop factor between 35mm stills (vertical) and S35 (horizontal) is 1.4.

S35 equivalent focal lengths Because of the aesthetic look associated with certain classic focal lengths (relating to S35 cameras and lenses), filmmakers like to simplify discussion by talking about and referencing focal lengths related to classic S35 filmmaking. Then the filmmakers need to quickly calculate what focal length they actually need to achieve a certain look on their particular camera system. For example, a filmmaker using a small-sensor camera who wanted to film using Ozu’s 50mm lens choice would need to know the crop factor of that camera, perform a quick calculation or consult a special calculator or chart like the one in Table 13.2. The filmmaker would conclude that to achieve the 50mm look they would need to divide the focal length by the relative crop factor to discover the correct equivalent. The chart tells us that on a camcorder that has a 1/2.8” sensor (nearly one-third of an inch), the focal length should be around 10.41mm. Some lens manufacturers also display 35mm and S35 focal length equivalents to help speed up that process. TABLE 13.2  Crop factor chart

 

   

This chart is an example of how to use crop factors to determine equivalent focal lengths and is based on S35 as the reference. The crop factor figures come from Abel Cine.1 I have left the decimals in for consistency.

Other crop factor considerations Because a crop sensor reduces the FOV by using only a small section of the available image, this has an effect on two other areas:
1    F-stop. In terms of depth of field (and the total amount of light gathered across the frame), the f-stop behaves as though it has been multiplied by the crop factor. For example, a S35-sized lens set to f2 on a MFT camera (crop factor 2) gives a depth of field closer to what f4 would give on a S35 camera. The per-area exposure does not change, however: a shot metered at f2 is just as bright on the smaller sensor.
2    Optical compression. Using the example of a S35 lens on a MFT camera, the optical compression is relative to the original focal length, not the calculated crop-factor focal length.

Activities Do not become overwhelmed by the potential complexity of different lenses and the added complications of crop factors. Engage with the theory, but above all try things out for yourself. The relationship between lenses and image planes (sensors) is one that takes time and experimentation to appreciate. Try the following:
•    Perform a contrazoom. This is tricky to pull off and make look smooth, particularly without a track and dolly, but the general effect is achievable on any camera with a zoom lens.
•    Experiment with framing up subjects at different focal lengths, but using the same shot size. So, for example, frame somebody up in a mid-shot, using a wide, normal and telephoto lens. By the time you get to the telephoto lens, you will be physically much further away from the person than in the shot using the wide lens. Inspect the results and decide which focal length you like best. Notice the optical compression on the focus of the shot and also on the background and foreground elements. What happens to the environment at each focal length?
•    Take a scene from a film and analyse it for focal lengths. What would this scene feel like if it were using a different strategy of focal lengths?
•    Film a scene using only one focal length. Then film the same scene using a very different focal length.
•    Reflect upon the practical differences between shooting using the two focal lengths.
•    Edit the scene and reflect upon the different feelings that the two focal lengths produced. Does one serve this scene or story more than the other?

Lenses rule recap The fundamental rule for creating a cinematic image is: Rule: Always consider the correct lens choice for the purpose of the shot in question.

Note 1    www.abelcine.com/fov-original/.

14 CODECS

The primary codecs rules to consider when creating cinematic video content are: Rule: Consider the budget, workflow and turnaround time of the video project when deciding on video codec specifics. Rule: Use the best codec available at the highest bitrate possible, within the constraints of the project. This chapter will consider the different methods that can be used to compress a video image.

The versatility of the celluloid negative A movie camera captures a series of still images onto celluloid at a rate of 24 times a second. The film is then processed and edited (to simplify the process) and a copy (print) of the edited film is projected onto a screen. This is a mechanical process and one that is easy to visualise. The frames run through the projector, one at a time. Each image is complete, containing all the information necessary for that frame. The celluloid negative contains more information than is present in the print. When the celluloid is being processed, the lab team and the cinematographer decide how much light to use (as well as the colour of the light) to create the print that will be used in the film. In this way, the celluloid negative is very versatile; changes can be made to the image later (for example, to adjust the exposure or colour balance of a shot) because it holds so much information. It contains more information than is

needed. The celluloid is also analogue, so there is no limit to the variation of colour subtleties possible, nor is there a finite resolution.

Cinema raw A raw file is a kind of digital negative. It contains more information than is needed (all the information from the image sensor) and is designed to be manipulated before it is finished. Photographers use raw files and process the images using software such as Adobe Lightroom, which deliberately takes inspiration from the traditional photography darkroom. Implied in the name Lightroom is the idea that this is a similar process to developing and printing celluloid photographs in a darkroom, but the modern digital version of this process. In the digital world, the raw file is the next best thing to the flexibility that celluloid gives over image capture. A cinema raw file is essentially a series of still images (one raw image per frame) and an audio waveform, all packaged together inside a folder. Each raw image contains all the information from the imaging sensor from the red, green and blue (RGB) channels. This is opposed to all other types of video file, where the particular blend of the colour-channel information has been decided at the white-balance stage and fixed into the video image, discarding the rest of the information from the sensor. Shooting digital raw is the best method of capturing video images that have the maximum potential to be manipulated and corrected during the post-production stages of filmmaking.

Codec Thinking of a video file simply as a digital celluloid filmstrip, or even a folder containing raw images is straightforward, but unfortunately not accurate for the vast majority of video that is created and distributed. If this straightforward method were literally the case, then video files would be prohibitively large. It would not be possible to record HD and 4K video streams onto something as small as an SD card or to stream over the internet. For video images to be made small enough to be practical, they need compressing down in size, and then decompressing on viewing. When creating a digital video file, the camera needs to disregard some of the potential information that is projected onto the imaging sensor by the lens. The

imaging system inside a video camera compresses and deletes information as the video file is being created, in order to keep the sizes small enough to be practical. It is this action of compressing the video file and then decompressing the file on viewing that provides the basis for the word codec. Codec:
1    Hardware and software inside the camera compress the video file when it is created.
2    Hardware and software then decompress the video file as it is being watched.
That hardware and software could be a television set (capable of playing a video file from a USB or over the internet), a phone or a computer. The hardware and software of the viewing device re-populate the missing components of the video image, following a specific method as prescribed by the codec. The end result is that you can view the full video file, unaware that there was ever any information missing.1
There are many different methods of doing this, each one developed by a separate company or industrial body, some more successfully than others. Some codecs are owned by the developer and some are open-source standards that are free to use. Some codecs are old and some are new. The newer codecs tend to be the better ones, and will eventually replace the older ones, although persuading a large group of different people and companies to change takes a long time. As such, there are a number of different codecs in use at the same time.
Codecs are always improving, in part because of the advances in computing power. Computers are so powerful now that mobile phones are more capable than desktop computers were 10 years ago. As computers advance, so too does the potential power of the codecs, which can squeeze files down into smaller and smaller sizes without losing much detail in the final image. Compressing video does result in a loss of quality, however, and there is no way around that. The question is finding a reasonable compromise between compression (file size) and acceptable image-viewing quality.
Some common video codecs are shown in Table 14.1. Some common audio codecs are shown in Table 14.2.
TABLE 14.1  Common video codecs

    TABLE 14.2  Common audio codecs

   

Containers Some confusion exists about what is and is not a codec. For example, I often hear students and even professionals (a film festival coordinator, for example) say something like ‘please provide a .MOV file’, or ‘I exported a .MOV so it should work’. Simply saying that a file is .MOV does not actually say very much about what codec is being used. A .MOV file is a Quicktime file type that is developed by Apple. The actual codec used in the .MOV file could be a number of different things. The .MOV file is simply a container, which houses a video stream and an audio stream. There are other container formats besides .MOV, such as .MP4 and .MXF. If this is confusing, consider that a specific method of compression can be used for a video stream, and a separate method of compressing audio will be used for an audio stream. These two things are separate even though we may think of them as the same thing because they are so intimately linked. The video and the audio play at the same time and there is a direct correlation between the video and the audio, so we tend to think of them as being the same, but they are not. The thing that connects these two separate elements is the container file. Consider too that some files might contain other information, such as metadata (information about the video creator, copyright information and a date etc.) as well as possibly subtitle information that can be turned on or off.

It is the container file that houses these different types of information streams: video, audio, metadata and subtitle. A simple rule is: if it has a file extension, then it is likely to be a container. For example .MOV or .MXF. In some instances, when dealing with a single stream (audio or video or graphic), the codec can be the same as the file extension. For example an MP3 also has the file extension .MP3. This is because the .MP3 file uses an MP3 audio codec. Some common container files are shown in Table 14.3.

Recording formats To complicate the matter further, many professional camera manufacturers have developed specific ways of recording video footage (often using the h.264 codec) but in more complex ways. I am referring to these as recording formats. These recording formats arrange the video and audio streams and other metadata into very specific arrangements of folders. If you are using a camera that uses one of these kinds of recording formats (such as AVCHD), it is strongly advised that you copy the entire folder structure of the recording card, rather than simply navigating through the folders, locating the video essence and extracting only those files. The other folders within (say) the AVCHD folder structure contain important information that you may need. It is best practice to copy the full folder structure, even if you have got away with copying only the video files in the past. TABLE 14.3  Common container files

Some common camera recording formats are shown in Table 14.4.
TABLE 14.4  Common recording formats

Digital cinema packages Digital cinema packages (DCPs) are a standardised method for distributing cinema-quality video and audio for use with projectors. DCPs use a sequence of individual JPEG 2000 image files and an audio file, housed within an .MXF container. JPEG 2000 is an advanced version of the older JPEG image format that is still widely used today. There are companies who will make a DCP for you if you wish to screen a film at certain festivals or film theatres, and it is also possible to create one using Adobe Premiere Pro or other standalone DCP creators, such as easyDCP. DCPs are usually sized to a cinema frame (C2K or C4K) but can also be output in HD or UHD (16:9).

Compression methods We are now going to look at the primary ways that cameras compress the video images into smaller file sizes.

Chroma subsampling Digital video cameras mainly use the YUV colour space, which is often then converted back to RGB inside an NLE system or colour-grading application. Our eyes have a lot more black and white receptors than colour receptors. In the backs of our eyes we have:
•    Rods – 120 million. They are receptive to black and white information (luma).
•    Cones – 6 million. They are receptive to colour information (chroma).
Our eyes are much more sensitive to luma than to chroma information, so we can actually disregard a lot of chroma (colour) information without it affecting how we perceive the image. This is called chroma subsampling. This method subsamples (under-samples) the chroma (colour). See Plate 15 for an image that has subsampled chroma. Notice how the resolution of the colour channels is much lower. The Y contains the full-resolution luma image, but U and V contain a lot less chroma information. The end result is an image that still looks complete, despite the lower resolution in the colour channels (chroma subsampling).

Chroma subsampling can be slight or it can be more aggressive. The more aggressively the chroma is subsampled, the smaller the images will be, but the compromise is less information in the colour channels. This is usually okay for a video image that is meant to be played back untouched, but if we are thinking about extensive post-production work, which may include some colour manipulation, then we might have to pay more attention to the chroma subsampling. Chroma subsampling is often displayed like this as shown in Plate 16. An area of pixels is divided up into rectangles that are four pixels long, and two pixels high. This method assumes that each pixel has independent luma information, but that chroma information could be shared between a number of other pixels.

4:4:4 The first number (4) refers to the number of pixels being considered. In this case (and in most cases) the set size is 4 × 2. The second number (4) refers to the chroma information contained in the top set of pixels. The third number (4) refers to the chroma information contained in the bottom set of pixels. The 4:4:4 ratio actually contains no chroma subsampling: all the colour information is contained in the image.

4:2:2 The first number (4) still refers to the pixel set size. The second number (2) represents the 2 chroma values in the top row. The third number (2) represents the 2 chroma values in the bottom row. The chroma information here has halved. When the luma and chroma information is combined, this results in an image that is acceptable to watch. The colour resolution is lower than the 4:4:4, but the full-resolution luma information (the black and white image) is detailed enough to fool our brains into accepting the image as complete.

4:2:0

The 4:2:0 image represents a further reduction in chroma resolution. The first number (4) still refers to the pixel set size. The second number (2) represents the 2 chroma values in the top row. The third number (0) means the bottom row has no unique chroma information of its own; it simply inherits the chroma values from the pixels above. This is a 75% reduction in chroma (colour) information compared with 4:4:4. When combined with the full-resolution luma channel, the resulting image is still acceptable, because our brains perceive colour detail far less precisely than brightness detail, but it is obviously a compromise in quality. A lot of DSLRs use this 4:2:0 chroma subsampling.
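To put rough numbers on those ratios, the sketch below estimates the uncompressed size of a single 8-bit HD frame under each subsampling scheme. It ignores the further compression a real codec applies afterwards, so treat the figures as illustrative only.

```python
WIDTH, HEIGHT = 1920, 1080
LUMA_SAMPLES = WIDTH * HEIGHT  # luma is always kept at full resolution

def frame_megabytes(chroma_per_block):
    """chroma_per_block: chroma samples kept per 4x2 block in EACH chroma channel
    (8 for 4:4:4, 4 for 4:2:2, 2 for 4:2:0)."""
    blocks = LUMA_SAMPLES / 8                       # each 4x2 block covers 8 pixels
    chroma_samples = blocks * chroma_per_block * 2  # two chroma channels (U and V)
    return (LUMA_SAMPLES + chroma_samples) / 1_000_000  # one byte per 8-bit sample

for name, kept in [("4:4:4", 8), ("4:2:2", 4), ("4:2:0", 2)]:
    print(name, round(frame_megabytes(kept), 2), "MB per frame")
# 4:4:4 ~6.22 MB, 4:2:2 ~4.15 MB, 4:2:0 ~3.11 MB
```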

Alpha channel An alpha channel is another channel of information that represents the opacity (how opaque or transparent something is) on an image. An alpha channel is usually a black and white image, where white represents transparent, and black represents opaque.

FIGURE 14.1  Alpha channel. Source: Copyright © 2019 Jonathan Kemp. Image from the short film Freedom directed by Jonathan Kemp. Cinematography by Andy Toovey. Note: The green screen has been used to create an alpha matte. The white area will be transparent, and the black area will be opaque.

4:4:4:4 You may sometimes see reference to the 4:4:4:4 chroma subsampling ratio and wonder what the extra 4 represents. The extra 4 represents that each pixel also has

its own alpha information, allowing for transparencies within the video file.

Bit depth – colour Colour bit depth relates to how many colours are possible within a video image. The greater the variety of potential colours that can be recorded, the more flexible the video file will be when it is manipulated in post-production. For example, an 8-bit image allows 256 levels per colour channel (around 16.7 million colours in total), while a 10-bit image allows 1,024 levels per channel (over a billion colours). This has the additional consideration that more colour information results in a larger file size.
TABLE 14.5  Bit depth – colour

Intraframe Intra is a prefix that means on the inside; within. For example, an intranet is a local network of computers, usually computers within a company or organisation that are connected together. The network may be connected to the outside world (the World Wide Web) but primarily they are connected together.

FIGURE 14.2  Intraframe compression. Source: Copyright © 2017 Jonathan Kemp. Note: Notice the illustrated macro blocks used by an intraframe codec to compress information within one single frame.

The intraframe compression method is a broad strategy that is used by a number of different codecs. The compression strategy looks for ways to compress and generalise information spatially, within one frame. Different codecs have different ways of actually compressing, but the intraframe strategy is to look for ways to compress an image, one frame at a time.

Interframe Inter is a prefix that means between; among. For example, the internet is a network of computers that are connected to each other. A computer in one country can connect to a computer in another country. The interframe compression method is a broad strategy that is used by a number of different codecs. The compression strategy looks for ways that it can compress and generalise information temporally, across a number of different frames.

Long GOP

Long group of pictures (GOP) is a type of interframe compression that is very common and is often referred to as Long GOP. Long GOP compression looks for elements of one frame that do not need to be repeated in subsequent frames, reusing pixels from earlier frames in later ones as much as possible.

FIGURE 14.3  Long GOP. Source: Copyright © 2019 Jonathan Kemp. Note: This is a simulation of a LongGOP series of images. The first frame (the I frame) contains the first image, and the subsequent frames (P and B frames) record only the changes.

The Long GOP contains:
•    I-Frame – an intraframe or keyframe, which contains the full image. Each Long GOP starts with an I-Frame.
•    P-Frames – predictive frames that record only the differences from earlier frames (the I-Frame or a previous P-Frame).
•    B-Frames – bi-directional frames, sandwiched between the others, that look both forwards and backwards and record only the differences from the frames around them.
This can save a lot of space if there are portions of the image where not much has changed from frame to frame. Imagine a shot where the camera is on a tripod and locked off, pointing down an empty street. There is no movement in the shot. Nothing is happening. This shot would be able to use the Long GOP method very effectively. Subsequent P- and B-Frames would record only the differences. There would be no difference in this case, so those frames would be virtually empty. If someone then walked through the frame, only the pixels in and around the area that the person occupies would need updating in each frame.
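A real Long GOP encoder predicts blocks using motion vectors rather than comparing raw pixel values, but the underlying idea can be sketched very simply. The code below is a toy illustration of storing one full frame and then only the changes; it is not a description of any actual codec.

```python
def encode_gop(frames):
    """Toy interframe idea: keep the first frame in full (the 'I-frame'),
    then for each later frame record only the pixels that differ from it."""
    i_frame = frames[0]
    deltas = []
    for frame in frames[1:]:
        deltas.append([(i, v) for i, (v, ref) in enumerate(zip(frame, i_frame)) if v != ref])
    return i_frame, deltas

# Locked-off shot of an empty street: nothing changes, so every delta is empty.
street = [[10, 10, 10, 10]] * 4
print(encode_gop(street)[1])       # [[], [], []]

# Someone walks into the third frame: only the changed pixel is recorded.
with_person = [[10, 10, 10, 10], [10, 10, 10, 10], [10, 99, 10, 10]]
print(encode_gop(with_person)[1])  # [[], [(1, 99)]]
```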

Every playback device has to rebuild these Long GOP images, which is usually fine for simple playback, but can be more strenuous for a computer and NLE system if the editor is scrubbing backwards and forwards, with possibly a number of different video streams at a time.

FIGURE 14.4  Low bitrate image. Source: Copyright © 2019 Jonathan Kemp. Image from the short film Freedom directed by Jonathan Kemp. Cinematography by Andy Toovey. Image of Rebecca Calienda. Note: Notice the ‘banding’ artifacts present on the background gradient in this highly compressed image.

This Long GOP type of codec is okay for recording and even playback, but it is usually recommended that video files are transcoded into an edit-friendly intraframe codec for editing work. This will improve performance on a computer and NLE system, and prevent it from having to constantly rebuild multiple Long GOP video streams. On occasion, you might experience a playback error, where the Long GOP has not been rebuilt correctly, and see the naked B-Frames that have not been remade. Long GOP compression is typically a very good way of compressing the size of a video image. Some people prefer to shoot using an intraframe codec though, because they have noticed subtle imperfections in the Long GOP codec image. Some of the more advanced cameras available have the option to record an intraframe (All-I) or an interframe Long GOP (IPB) version of the camera’s principal codec.

Bitrate Bitrate determines how much data the codec is allowed to use per second of video, and therefore the size of the video file. A general principle is that higher bitrates result in higher-quality video. The trade-off is in the file size, which increases along with the bitrate. Really low bitrates result in very compressed and blocky looking images. Bitrate is independent of resolution. A 1Mbps2 video file at 1920 × 1080 resolution will be the same size as a 1Mbps file at a resolution of 3840 × 2160, but the compression artefacts will be more noticeable on the larger resolution file. Certain codecs such as Apple ProRes have fixed bitrates depending on the exact variation of ProRes. For example, 1920 × 1080 Apple ProRes 422 HQ at 30 fps runs at roughly 220Mbps and Apple ProRes Proxy at roughly 45Mbps.
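Because bitrate, not resolution, sets the file size, a quick calculation tells you how big a clip will be. A minimal sketch (the 50Mbps figure is simply an assumed example of a higher-quality camera bitrate):

```python
def file_size_mb(bitrate_mbps, duration_seconds):
    """Approximate file size in megabytes: megabits per second x seconds / 8 bits per byte."""
    return bitrate_mbps * duration_seconds / 8

ten_minutes = 10 * 60
print(file_size_mb(1, ten_minutes))   # 75.0 MB   - the same whatever the resolution
print(file_size_mb(50, ten_minutes))  # 3750.0 MB - an assumed higher-quality bitrate
```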

CBR and VBR Bitrate needs to be considered when transcoding3 video files or exporting a final edit of a video in an NLE system, as there are options to consider there also: not just the bitrate, but also the method of encoding. Constant bitrate is often abbreviated to CBR. This is an encoding option that maintains a regular bitrate throughout a clip, regardless of the content of the video image. Even during a section of a video where nothing is happening, let’s say a completely black screen, the bitrate remains constant. Variable bitrate is often abbreviated to VBR. This is also an option when choosing the encoding settings for a video. The variable bitrate will analyse the video looking for portions where the bitrate can be reduced, for example where there is not much change from frame to frame. During these sections of inactivity (let’s use the completely black screen example), the bitrate will automatically drop. The bitrate will fluctuate throughout the complete video file, depending on the complexity of the image content. Typically, a VBR video clip will be smaller in size and more efficient than a CBR clip. When choosing a VBR export bitrate option there will likely be a one-pass and two-pass option. The two-pass option will take two attempts at analysing the

bitrate and varying it accordingly, resulting in a more efficient (smaller) video file. This does take twice as long however.

Codecs for different purposes Different codecs work in different ways, and so some are better at particular tasks than others. For example, some codecs are best suited to editing; others are better suited to uploading to the internet. Others still are suitable for archiving a high-quality version.

Camera codecs Camera codecs vary hugely, depending on the manufacturer’s assumptions about the needs of the user. For example, a smartphone manufacturer would assume that a typical user would be someone who is not particularly tech savvy (certainly not in regards to video production) and wanted reasonably small video files that look good. iPhone cameras now record video using the HEVC/H.265 codec, which is the most efficient codec currently available, but is not a good choice for editing as it takes up so much of a computer’s resources decompressing and rebuilding the Long GOP footage. A camera like the Blackmagic Pocket Cinema Camera has a few choices of codec, all very high quality. You could use CinemaDNG Raw or even shoot straight to the intermediate intraframe codec Apple ProRes. This camera is obviously not intended for a casual user so high quality is prioritised over smaller file sizes. Most cameras such as camcorders and DSLRs use some kind of Long GOP video codec, so transcoding to an intermediate intraframe codec would be advised for editing.

Intermediate codecs for editing Intraframe codecs, such as DNxHD, Apple ProRes and GoPro Cineform (at a variety of bitrates), are a good choice of edit-friendly codec. They use an intraframe compression method, so the files are larger than the Long GOP equivalents, but they are much simpler for a computer and NLE system to process.

Delivery codecs Depending on where you are delivering the finished film, a variety of different codec choices could be needed. For most purposes a h.264 codec might be appropriate (at an appropriate bitrate). This would be suitable for uploading to YouTube or Vimeo as well as playback on a television screen. Increasingly the HEVC h.265 codec is also becoming popular elsewhere, particularly for 4K projects. For creating a DVD, the MPEG2 codec would be used. For digital cinema projectors, a DCP file would be created, which essentially packages up the film into a series of still images (JPEG 2000 files) along with the audio and wraps them up into an .MXF container.

Archive codecs The archiving process is very important and needs consideration for work that you are proud of. You may not always have access to the NLE software and rushes that you used to create your film, so a very high-quality master version should be created that you can refer back to in future and make lower-quality versions from. For archive, the file type should be intraframe, with a good chroma subsampling ratio and at a very high bitrate. How big the resulting file will be (and how much storage space you have) is your only limiting factor.

Example workflow pipeline Table 14.6 is an example of a workflow using files from a video camera, transcoding to an intermediate editing codec and finally outputting multiple versions for delivery and archive.

Activities •    Discover the codec options for the camera or cameras that you have access to. •    Research exactly what chroma subsampling is used and whether the files are Long GOP. •    Record test footage on the best settings available and see how large the files are. •    Experiment with workflow, transcoding camera files into intermediate files.

•    Export using a variety of codecs and file types and see which settings look best and play back smoothly on a variety of devices.
TABLE 14.6  Workflow example

Codecs rules recap Rule: Consider the budget, workflow and turnaround time of the video project when deciding on video codec specifics. Rule: Use the best codec available at the highest bitrate possible, within the constraints of the project.

Notes 1    This process in and of itself is a huge subject, of which many books and articles have been written. 2    Megabits per second. 3    Transcoded video files have been changed from one codec to another, usually changing a Long GOP codec into an intraframe codec to reduce the load on the computer’s central processing unit (CPU) and NLE system.

15 CINEMATIC POST-PRODUCTION

The rule for maximising the potential of creating cinematic images in post-production is: Rule: Before filming, plan how you will treat your images in post-production, to add cinematic qualities to your images. This chapter will look at some of the various ways in which the images that you capture on video can be manipulated in post-production to make them more cinematic.

Cinematic post-production There are a lot of different ways that an image can be altered in post-production to make it appear more cinematic. Contrast can be increased, colours can be manipulated and other elements (e.g. imperfections such as film grain) can be added to make the digital video look more like it was shot on celluloid film. This chapter is unlike the other chapters of this book, which deal primarily with production issues; however, a working knowledge of what post-production processes are available is important for a cinematographer to understand. On a typical big-budget film, there will be a colourist who manipulates and crafts the look of the images, usually after the film has been fully edited. The colourist will use a software application, such as DaVinci Resolve1 to alter the colour of the footage and add other manipulations to alter the look of the film.2 A film’s cinematographer will often communicate their intentions to the colourist, who will work to realise the look that the cinematographer had in mind

when they shot the footage. It is important that you have experimented with colour grading and image manipulation, whether or not you intend to colour grade the footage that you shoot.

Native image quality The most flexible kinds of video image files are those that are at a high resolution and have the least compression possible, with the largest bitrate available. I think of these as big, fat and juicy video files that will give the maximum potential for post-production manipulation. These large files take up more room on a hard drive, but have more potential to create the best images in post-production. See Chapter 14 Codecs, for more information on this.

Proxies Proxy video files are usually smaller, more manageable copies of the original video files from the camera. It can be very useful to generate intermediates and work from smaller resolution proxy video files if, for example, you are working with 4K video files on a less than capable machine. Different NLE systems have different processes for creating and linking proxies and original media, but since high-resolution files are getting more and more common, NLE systems have streamlined proxy workflow so that it can be a more or less automated process. A good proxy file should be small in terms of resolution, and use an intraframe codec, which places less strain on the computer’s CPU, meaning faster performance. See Chapter 14 Codecs. NLE systems can switch between using the smaller, more edit-friendly proxy files for the edit part of post-production, and then switch back to the full-resolution, full-bitrate file for export and/or colour grading and visual effects.

Colour correction The first colour-manipulation stage is colour correction. This is not where you dazzle the world with a bold and original look, but where you (as the name suggests) correct the footage that has been shot to make it uniform. The intention is to remove the unintended differences between shots.

Perhaps one shot is a little darker than the following shot and the difference in exposure level is noticeable between shots in an edited sequence. Colour correction will even out those two shots, bringing both shots to an optimal luma level. Perhaps the colour balance between two different cameras was not calibrated properly and the colour temperature noticeably shifts slightly between shots in an edited sequence. Colour correction will even out the differences and bring both shots to an appropriate colour temperature.

LOG footage

FIGURE 15.1  LOG footage and waveform. Source: Copyright © 2019 Jonathan Kemp. Image from the short film Freedom directed by Jonathan Kemp. Cinematography by Andy Toovey. Image of Rebecca Calienda. Note: Notice the brightest part of this image is the window behind the actress’s head. Also notice that despite being filmed in LOG mode, the window is ‘clipped’ on the waveform.

Footage that has been captured in a LOG mode will appear flatter and greyer when compared to footage that has been shot in a REC709 colour gamut (e.g. native smartphone footage).

FIGURE 15.2  REC709 footage and waveform.

Source: Copyright © 2019 Jonathan Kemp. Image from the short film Freedom directed by Jonathan Kemp. Cinematography by Andy Toovey. Image of Rebecca Calienda. Note: This image has a LOG to REC709 LUT applied. Notice how the waveform is spread out wider. Also notice how the contrast and saturation have been increased compared with the original LOG image.

The LOG footage has been shot in a way that compresses the colour space to enhance the dynamic range of the camera, meaning that bigger differences in exposure can be recorded without over- or underexposing the image. This LOG image does not look good unaltered and is intended to be treated during the post-production stage by the colourist. This wider dynamic range gives the colourist enhanced exposure and colour options when setting the levels for each piece of video. The colourist can use the controls inside the colour-grading application to adjust the levels manually, or they could use a look-up table (LUT).

LUT – colour correction An LUT is a single file that contains a universal settings preset that can be applied to footage within a video editor or colour-grading application to affect the colour and brightness levels. LUTs work in most modern NLE systems and colour-grading applications, such as Avid, Premiere Pro, Final Cut Pro X and DaVinci Resolve.
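Conceptually, a LUT is just a table that maps input levels to output levels. Grading LUTs are normally 3D ‘.cube’ files that remap red, green and blue together, but the 1D sketch below (with invented values) shows the basic look-up-and-interpolate idea.

```python
# A toy five-entry 1D LUT: output levels for inputs 0.0, 0.25, 0.5, 0.75 and 1.0.
# This one increases contrast, loosely like a LOG-to-REC709 conversion (values invented).
lut = [0.0, 0.15, 0.5, 0.85, 1.0]

def apply_lut(value, lut):
    """Look up a 0.0-1.0 level and linearly interpolate between the nearest entries."""
    position = value * (len(lut) - 1)
    low = int(position)
    high = min(low + 1, len(lut) - 1)
    fraction = position - low
    return lut[low] * (1 - fraction) + lut[high] * fraction

print(apply_lut(0.1, lut))  # ~0.06 - a flat LOG shadow is pushed down
print(apply_lut(0.9, lut))  # ~0.94 - a flat LOG highlight is pushed up
```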

FIGURE 15.3  LOG + LUT = REC709 – two images and ‘+ LUT’ text. Source: Copyright © 2019 Jonathan Kemp. Image from the short film Freedom directed by Jonathan Kemp. Cinematography by Andy Toovey. Image of Rebecca Calienda.

In a colour-correction context, it is possible to instantly convert the grey, flat-looking LOG footage into the REC709 colour gamut. After the LUT has been applied, it will look more or less normal, but will retain the added benefit of having access to the LOG information, giving more power to the colourist to be able to rescue highlights or shadow areas using the software.

If the camera recorded an image in REC709 mode in the first instance, any areas of over- or underexposure could be lost permanently. The camera would have ‘clipped’ these areas off and all detail would be lost. Recording LOG first, then applying the REC709 LUT afterwards, gives the same overall look as recording directly in a REC709 colour gamut, but without clipping off the over- and underexposed parts of the image. Different camera manufacturers use different LOG methods for compressing the colour space, meaning that a specific LOG to REC709 LUT is needed for each manufacturer and camera to uncompress that colour space properly. One camera can even have more than one LOG profile, meaning care should be taken to ensure the correct colour-correction LUT is used. The colour-correction LUT can be used as a starting point from which to address shot-specific colour-correction issues. LUTs do not always need to be used on LOG footage, but they can be thought of as a shortcut to colour correction. Some colourists use them, others prefer not to.

Colour grading The second colour-manipulation stage is colour grading. This is where things get creative and the colourist can manipulate the colours in an image to aid the storytelling, as well as to simply add visual interest. Specific looks can be found in colour grading that change the feel of the film. For example, in a horror film the colours could be desaturated, while also increasing the saturation of a specific colour. So, for example, the film could appear almost black and white but with exaggerated reds to highlight the blood. Below are some ideas for colour grading and looks that can be tried.

Ungraded This is the ungraded REC709 video file.

FIGURE 15.4  Ungraded REC709 file plus Parade and vectorscope. Source: Copyright © 2019 Jonathan Kemp. Image from the short film Freedom directed by Jonathan Kemp. Cinematography by Andy Toovey. Image of Rebecca Calienda and Gary Hanks. Note: This is the REC709 image with no additional saturation manipulation.

Desaturated The desaturated look is created by reducing the saturation of all colours.

FIGURE 15.5  Desaturated image plus Parade and vectorscope. Source: Copyright © 2019 Jonathan Kemp. Image from the short film Freedom directed by Jonathan Kemp. Cinematography by Andy Toovey. Image of Rebecca Calienda and Gary Hanks. Note: This image has reduced saturation. Notice how the reading of the vectorscope is closer to the centre on the graph.

Oversaturated

The oversaturated look is created by increasing the saturation of all colours.

FIGURE 15.6  Oversaturated image plus Parade and vectorscope. Source: Copyright © 2019 Jonathan Kemp. Image from the short film Freedom directed by Jonathan Kemp. Cinematography by Andy Toovey. Image of Rebecca Calienda and Gary Hanks. Note: This image has increased saturation. Notice how the reading of the vectorscope is moving further away from the centre of the graph. Notice that the colour of the image is more exaggerated and is starting to look unnatural.

Day for night The day for night look is used more often than you might think. The technique involves shooting during the daytime (around midday, so that shadows from the sun stay short, at the actors’ feet) and being very careful not to overexpose the sky. Then, in post-production the brightness level is reduced and the blue channel is increased. Put simply, the image is made dark and blue.
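Reduced to a single pixel, the treatment is simply ‘darken everything, then lift blue’. The sketch below uses arbitrary darken and blue-boost amounts purely to illustrate the idea; a real grade would be judged by eye and on the scopes.

```python
def day_for_night(r, g, b, darken=0.4, blue_boost=1.3):
    """Crude day-for-night on one 8-bit RGB pixel: reduce brightness, boost blue.
    The darken and blue_boost amounts are arbitrary starting points."""
    r, g, b = r * darken, g * darken, b * darken
    b = min(255, b * blue_boost)
    return round(r), round(g), round(b)

print(day_for_night(180, 160, 140))  # (72, 64, 73) - darker overall and coolest in blue
```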

FIGURE 15.7  Day for night image plus Parade and vectorscope. Source: Copyright © 2019 Jonathan Kemp. Image from the short film Freedom directed by Jonathan Kemp. Cinematography by Andy Toovey. Image of Gary Hanks.

Monochrome A monochrome look is one that features only one main colour. An example could be a sepia-toned photograph look.

FIGURE 15.8  Monochrome file plus Parade and vectorscope. Source: Copyright © 2019 Jonathan Kemp. Image from the short film Freedom directed by Jonathan Kemp. Cinematography by Andy Toovey. Image of Gary Hanks.

Teal and orange – the Hollywood look The teal and orange look is a very popular look in Hollywood films. Clear examples of this are the Transformers films, which use the look extensively. The look works by providing maximum colour contrast between the skin tone of the people in shot and the background. Orange and teal are at opposite sides of the colour wheel. The orange and teal look can be created by pushing the shadows towards the blue (cool shadows), the mid-tones towards the orange (warm mid-tones) and the highlights towards the blue (cool highlights).
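A very rough single-pixel version of that idea is sketched below: pixels are judged by their luma, with shadows and highlights nudged towards blue and mid-tones nudged towards orange. The luma thresholds and shift amount are invented for illustration; a colourist would do this with curves or colour wheels, not per-pixel code.

```python
def teal_and_orange(r, g, b, amount=20):
    """Rough 'Hollywood look' on one 8-bit RGB pixel: cool shadows/highlights, warm mid-tones."""
    def clamp(v):
        return max(0, min(255, round(v)))

    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # REC709 luma weighting
    if luma < 80 or luma > 180:                  # shadows and highlights -> teal
        r, b = r - amount, b + amount
    else:                                        # mid-tones (e.g. skin) -> orange
        r, b = r + amount, b - amount
    return clamp(r), clamp(g), clamp(b)

print(teal_and_orange(40, 40, 40))     # shadow pixel cooled: (20, 40, 60)
print(teal_and_orange(200, 150, 120))  # skin-tone pixel warmed: (220, 150, 100)
```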

FIGURE 15.9  Orange and teal file plus Parade and vectorscope. Source: Copyright © 2019 Jonathan Kemp. Image from the short film Freedom directed by Jonathan Kemp. Cinematography by Andy Toovey. Image of Gary Hanks.

Secondary colour grading Secondary colour grading refers to a second stage of colour manipulation, where you might select just one colour and manipulate that. A simple use of a secondary colour grade would be to select one colour from an image and increase the saturation of that colour. This would make only the selected colour more ‘colourful’ and eye-catching, leaving the other colours untouched.

LUT – colour grading LUTs can also be used in a creative capacity for instantly creating one of these colour-grading looks, and more. Creative colour-grading LUTs will usually have a descriptive-sounding name, such as ‘Day for Night’ or ‘Hollywood Look’, as opposed to camera-specific colour-correction LUTs, which may say something like ‘Canon C LOG to REC709’.

FIGURE 15.10  Secondary colour grading. Source: Copyright © 2019 Jonathan Kemp. Image from the short film Freedom directed by Jonathan Kemp. Cinematography by Andy Toovey. Note: Notice only some elements of the shot have changed colour in the secondary grade.

There is an industry of filmmakers that creates and sells different LUTs online. Many give at least a few away for free for you to try.3

Film-stock emulation LUTs

Some LUTs replicate the look of specific film stocks. You might want to emulate the look of a current motion-picture film stock, such as Kodak Vision 3, or even a classic film stock that is no longer manufactured, such as Kodachrome, a reversal film stock known for its vibrant colours.

Add film elements Other ‘imperfections’ that have contributed to a film look can be added to digital footage to help a film feel more organic and natural than a completely clean digital image.

Grain overlay While it is highly advisable to keep a digital recording noise free during image capture (see Chapter 4 Grain), it can be desirable to add aesthetically pleasing organic film grain onto video in post-production. It is simple to do in most NLE systems and involves placing a special grain video above the desired video image on a timeline, allowing only the grain to appear on the image beneath. The grain video is a specially scanned piece of blank film, which contains no recorded image, but does have the grain structure associated with celluloid film. The blend mode4 of the grain video should be changed to something like ‘Overlay’, which allows the grain to be seen but also the recorded image beneath.
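The commonly used ‘Overlay’ formula (the Photoshop-style version sketched below; exact behaviour varies between NLE systems) shows why a scanned grain clip that hovers around mid-grey adds texture without shifting the overall brightness of the image beneath:

```python
def overlay_blend(base, grain):
    """'Overlay' blend for one channel with 0.0-1.0 values: mid-grey grain (0.5)
    leaves the image untouched, darker grain darkens it, brighter grain lightens it."""
    if base < 0.5:
        return 2 * base * grain
    return 1 - 2 * (1 - base) * (1 - grain)

print(overlay_blend(0.6, 0.50))  # 0.60 - neutral grain pixel, image unchanged
print(overlay_blend(0.6, 0.55))  # 0.64 - a slightly bright grain speck lifts the pixel
print(overlay_blend(0.6, 0.45))  # 0.56 - a slightly dark grain speck lowers it
```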

Light-leaks overlay Specially recorded light leaks can be found online and perform a similar function to the grain overlay. Flashes of light can be placed above regular footage (using the correct blend mode) and add interest to an edit, or section of video.

FIGURE 15.11  Original image and light-leak image. Source: Light Leak provided by Creative Dojo, https://creativedojo.net/free-light-leaks-pack/. Image copyright © 2017 Jonathan Kemp.

Use of light leaks, and other similar effects, can create a sense of footage that was shot on real film. This could be used (along with some film grain, a colour grade and the use of an appropriate aspect-ratio matte) to create footage that looks like it was shot on a Super 8 camera.

Aspect-ratio matte Most standard video resolutions use a 16:9 aspect ratio, apart from the 2K and C4K varieties, which are essentially 17:9. However, it is possible to shoot for any aspect ratio by overlaying an aspect-ratio matte, which effectively blocks off various parts of the 16:9 image to produce the desired aspect ratio. Aspect-ratio mattes can be found inside some NLE systems or easily made using inbuilt tools (such as crop) or downloaded as PNGs from various places on the internet. A simple search will produce multiple results.5
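The matte itself is only arithmetic: the frame width divided by the target ratio gives the visible picture height, and whatever is left over is split between the top and bottom bars. A small sketch:

```python
def letterbox_bar_height(frame_width, frame_height, target_ratio):
    """Height in pixels of each black bar needed to matte a frame to a wider aspect ratio."""
    visible_height = frame_width / target_ratio
    return round((frame_height - visible_height) / 2)

print(letterbox_bar_height(1920, 1080, 2.39))  # ~138px top and bottom for a 'scope' matte in HD
print(letterbox_bar_height(1920, 1080, 1.85))  # ~21px top and bottom
```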

FIGURE 15.12  Original image with aspect-ratio matte 1 and 2. Source: Copyright © 2019 Jonathan Kemp.

Third-party film-stock emulators There also exists a range of products that can function either as standalone conversion software, or as a plugin that works within your NLE system to alter colour, grain and aspect ratio.6 These are similar to Instagram ‘filters’ and the like, except made for professional NLE systems. These third-party emulators do not do

anything that we haven’t already covered in this chapter, but they apply these techniques in combinations that have been designed by someone else. You could look at them as cheating. However, they might be what you need in certain circumstances and can be useful to experiment with.

Activities Try creating the following film looks with the footage that your camera creates. Consider the codec and settings of the codec to maximise the grading results.
TABLE 15.1  Example film looks
Film look | Possible method
Super 8 film look | Film grain, light leaks (possibly), oversaturated colours, 1.33:1 aspect-ratio matte.
‘Orange and teal’ blockbuster | ‘Cool’ shadows, ‘warm’ mid-tones, ‘cool’ highlights, 2.39:1 aspect-ratio matte.
Desaturated post-apocalyptic | Reduced saturation. Possibly use ‘secondary’ colour correction to increase saturation of specific colours.
Accentuated colour (e.g. inside The Matrix) | Add a single subtle colour cast to the full image.

Cinematic post-production rule recap Rule: Before filming, plan how you will treat your images in post-production, to add cinematic qualities to your images.

Notes
1    DaVinci Resolve is available for free via Blackmagic Design’s website: www.blackmagicdesign.com/uk/products/davinciresolve/.
2    The colourist will typically not add digital visual effects. Compositing and other digital visual effects will usually be undertaken by a separate person or team.
3    One place to look for free and paid LUTs is: https://luts.colorgradingcentral.com/motionpicturefilmluts/.
4    Research blend modes for further clarification as to the different options.
5    The website vashivisuals.com/3k-4k-5k-6k-aspect-ratio-free-templates-premiere-and-after-effects/ is one example.
6    The website www.filmconvert.com is just one example.

16 CAMERA CASE STUDIES

Structure of this chapter This chapter is broken up into five sections. Each section looks at a couple of example cameras from one of five categories, with a view to determining how filmic the images they produce can be. The categories are: camcorders; DSLRs; smartphone cameras; action cameras; and digital cinema cameras. This is not intended to be an exhaustive list, but a starting point for you to begin assessing cameras based on their technical specifications to determine how useful they will be in helping you to achieve a cinematic film look. The details listed here could change with subsequent model upgrades or firmware updates. Do your own research for cameras that are not listed here.

Camcorders Camcorders are a wide-ranging category of camera, from consumer home-movie cameras to professional documentary-style cameras. They are designed for TV and video and record sound via built-in microphones, with professional models also offering the option to add external microphones. Traditionally, camcorders have not produced particularly cinematic images, mainly because of their small sensor size (which limits shallow depth of field) and narrow dynamic range.

Sony NX5

The Sony NX5 is a competent handheld camcorder for professional video and TV use. It will attach to all standard tripods and camera mounting systems. The resolution is only 1080p, which may be enough, but ideally 4K would be more flexible and future-proof. The camera has lots of nice features, such as built-in XLRs for external audio and built-in ND filters. The biggest obstacle to this camera producing cinematic images is the small sensor, which struggles to create shallow depth of field.

TABLE 16.1  Analysis of the Sony NX5 with ratings where applicable

DSLRs

DSLRs (digital single lens reflex) are primarily stills cameras that have had video functionality enabled. Initially, the video functionality was added simply for extra value; the cameras could technically do it in addition to stills, so why not allow it? The large image sensor (comparable to S35) has meant that video makers and filmmakers have sought these cameras out specifically for shooting film and video. A few hundred pounds of DSLR and lens can produce images that look reasonably cinematic, making particular use of the shallow depth of field that comes from a large imaging plane. DSLRs have the added benefit of interchangeable lenses, which, with regular use, can help build a greater understanding of how lenses work.

The lack of video-specific tools has meant that third-party software developers have created firmware hacks for certain DSLRs that add video and film tools. Using unlicensed firmware such as Magic Lantern1 can add video functionality, but could invalidate the warranty should an issue arise as a result of using the unofficial program. Some camera manufacturers have produced stills cameras (for example, Panasonic with its GH5) that seem to have been deliberately designed with film and video as a priority, including a lot of the tools needed for film and video work, such as zebras and focus peaking.

Workarounds are needed to make DSLRs work on a film shoot, but I consider them an inexpensive and helpful addition to a filmmaker’s arsenal. They are also very helpful for learning about cinematography. Here are some drawbacks to using a DSLR for film work:

•    The form factor is small, and the cameras need additional mounts and rigs to even shoulder mount properly. It is possible to handhold them, but the camerawork can get very shaky.
•    Due to the camera’s imaging processor, when downsizing the image from the full resolution of the sensor (for stills) to the lower-resolution video file, aliasing artefacts can appear in the video, particularly noticeable on tight patterns such as bricks.
•    Most stills cameras do not have video tools, such as zebras (for exposure) or focus peaking.
•    DSLRs do not have XLR inputs for using external microphones. The microphone on a DSLR is usually very small and is designed to pick up everything: great for a reference, but not good for recording clean sound. External, attachable audio recorders with XLR inputs have therefore emerged to patch over this deficiency in DSLR filmmaking.
•    Recording is usually limited to a relatively low-bitrate h.264 video codec.
•    Comparing statistics such as dynamic range can be confusing, as review and benchmark websites often quote the figure as it relates to stills photography.

Canon 4000D

The Canon 4000D is currently Canon’s cheapest entry-level DSLR. Overall, it is a good purchase for the price, although it has a few limitations in video mode. The resolution is 1080, which is standard HD but does not give the flexibility to reframe a shot or future-proof a project with a higher resolution. The camera will take full-frame EF lenses (with a crop factor) and the smaller EF-S lenses. The dynamic range is hard to work out on all DSLRs unless it is specifically stated in relation to video performance. The dynamic range in video mode is almost always significantly lower than in stills mode (raw), which can make comparisons difficult.

TABLE 16.2  Analysis of the Canon 4000D with ratings where applicable

Manufacturer: Canon
Model: 4000D (EOS 4000D)
Price: £350 (without lens)
Form factor: Small handheld
Intended purpose: Stills photography (Bad)
Camera mounting: Standard (Good)
24 fps: 24 and 25 (Good)
Progressive: Progressive (Good)
Shutter: Shutter speed (Good)
Higher frame rates: 60p (at 1280 × 720) (Bad)
Sensor size: APS-C (comparable to S35) (Good)
Sensor type: Single CMOS sensor (Good)
Resolution: 1920 × 1080 (Average)
Lenses: Canon EF, Canon EF-S (Good)
Aperture: Fully controllable; in lens (Good)
ND filter built-in: None (Bad)
Codec: .MOV – h.264 interframe (IPB) – 4:2:0 – 46Mbps (Good)
Dynamic range: 7–8 stops2 (Bad)
LOG gamma mode: No (Bad)
Media type: SD card, SDHC card, SDXC card (Good)
Movie length: Max. duration 29 mins 59 secs; max. file size 4GB (Average)
XLR microphone inputs: None (Bad)
Professional connections: HDMI mini (type-C) output (Average)

Canon 5D Mk4

TABLE 16.3  Analysis of the Canon 5D Mk4 with ratings where applicable

Manufacturer: Canon
Model: 5D Mk4 (EOS 5D Mk4)
Price: £3,500 (without lens)
Form factor: Small handheld
Intended purpose: Stills photography and filmmaking (Good)
Camera mounting: Standard (Good)
24 fps: 23.98, 24 and 25 (Good)
Progressive: Progressive (Good)
Shutter: Shutter speed (Good)
Higher frame rates: 119 fps (1280 × 720) (Average)
Sensor size: Full frame (larger than S35; close to VV frame size) (Good)
Sensor type: Single CMOS sensor (Good)
Resolution: 4K (17:9) 4096 × 2160 (Good)
Lenses: Canon EF (Good)
Aperture: Fully controllable; in lens (Good)
ND filter built-in: None (Bad)
Codec 1: .MOV – Motion JPEG (4K) – 4:2:2 – 8 bit – 500Mbps (Good)
Codec 2: .MOV – h.264 (1080) intraframe (All-I) or interframe (IPB) – 4:2:0 – 8 bit – 90Mbps (Good)
Codec 3: .MOV – h.264 (HDR 1080) – interframe – 4:2:0 – 8 bit (Good)
Dynamic range: 12 stops (LOG mode) – also HDR movie mode (Good)
LOG gamma mode: Canon LOG gamma (Good)
Media type: SD, SDHC, SDXC card and CF card (Good)
Movie length: 4K and full HD – max. duration 29 mins 59 secs (excluding high frame rate (HFR) movies); no 4GB file limit with exFAT3 CF card (Good)
XLR microphone inputs: None (Bad)
Professional connections: HDMI output (Good)
Camera tools: None (Bad)

The Canon 5D Mk4 is the latest iteration of the camera that started the DSLR revolution. Canon has improved the video features on this latest full-frame DSLR. It is still missing tools such as zebra and focus peaking, which seems odd considering how famous this camera is for producing video images. Canon also produces a range of dedicated cinema cameras (rather than stills cameras that can also shoot video), so I assume it is drawing a distinction between the two product ranges. As with all cameras (and particularly DSLRs), some functionality is only available with certain restrictions. For example, it is only possible to shoot 4K video using a Motion JPEG codec at 4:2:2 chroma subsampling. It is not possible to shoot 119 fps at anything above 1280 × 720, presumably because the imaging processor and the media (SD/CF card) would not be able to cope with the amount of information that would result each second. The 119 fps mode also becomes less usable in a 4K project because of the almost 90% decrease in resolution.

Smartphone cameras Smartphone cameras are quickly becoming an acceptable format for shooting shorts and even features. Prolific filmmaker Steven Soderbergh (Ocean’s Eleven) made the feature film Unsane (2018) using an iPhone 7.4 There are a lot of factors to consider when shooting with such a small device, which is primarily designed to fit into a pocket and to do many things other than take photographs and video. Smartphone filmmaking may seem simple enough if you already have one, but you should consider the following:

•    The lens aperture is fixed. This means that ISO and shutter speed are the only exposure triangle elements that can be used to control exposure. In bright light, the camera will have no choice but to use a faster shutter speed, losing the 180° shutter-angle equivalent. The only option to correct this would be to use an external ND filter.
•    Mounting. Separate mounts need to be purchased to attach the phone to a tripod or shoulder mount.
•    Memory. If shooting using only one phone, you need to factor in time to either swap out Micro SD cards or, in the case of an iPhone, dock and sync footage to a computer before resuming filming. This could eat up a lot of precious filming time.
•    Lenses. Smartphones come with one very wide lens attached, although adding other lens adaptors is becoming popular.
•    Sensor size. Smartphone cameras have very small sensors, so cinematic depth of field is tricky to pull off.
•    Dynamic range. Smartphone cameras still have the narrow dynamic range of video. Some smartphones shoot raw stills images, but this has not yet been made possible in video mode.

iPhone X

TABLE 16.4  Analysis of the iPhone X with ratings where applicable

The iPhone X is currently Apple’s flagship phone, featuring its best camera system to date. Storage is always a consideration on iPhones, since they will not accept memory increases via swappable Micro SD cards like some phones. There is a choice of two lenses on this camera, which is useful for shot variety. Also, because of the popularity of the phone, there is a slew of relatively inexpensive lens attachments and tripod mounting attachments. The phone has a small sensor, though, so a shallow depth of field is difficult to achieve. The sensors also do not cope that well with low light: you might notice that video taken in daylight looks much less grainy than at night time, even with the lights on. The inclusion of the h.265/HEVC codec makes the 4K image much more manageable in size than with the standard h.264 codec. The h.265 codec is a lot more taxing on a playback device or NLE system, though, so transcodes to a more suitable intermediate codec (such as ProRes or DNxHD) should be made for editing (see the transcode sketch after this list). For iPhone cinematography, consider use of the following items:

•    FiLMiC PRO – This free app is a must for iPhone filming; it also works with Android phones (www.filmicpro.com/).
•    Moment is not the only player in town, but this company makes a range of specialist iPhone lenses that attach to the camera case (www.shopmoment.com/).
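As a minimal sketch of that transcode step (assuming FFmpeg is installed; the file names are placeholders), the h.265 clip can be converted to ProRes from a short Python script, or equivalently on the command line.

import subprocess

# Transcode an h.265/HEVC iPhone clip into ProRes 422 HQ for editing.
# "prores_ks" is FFmpeg's ProRes encoder; profile 3 corresponds to 422 HQ.
# Audio is converted to uncompressed PCM so the NLE does no further decoding.
subprocess.run(
    [
        "ffmpeg",
        "-i", "iphone_clip.mov",      # placeholder source file
        "-c:v", "prores_ks",
        "-profile:v", "3",
        "-c:a", "pcm_s16le",
        "iphone_clip_prores.mov",     # intermediate file for the edit
    ],
    check=True,
)

A DNxHD intermediate can be produced in a similar way using FFmpeg’s dnxhd encoder, though that encoder is fussier about the exact resolution and bitrate combinations it accepts.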

Action cameras The action camera is a new type of small-form camera designed to be attached to helmets, vehicles and other out-of-reach places, for example the helmets of cyclists and parachutists or the underside of a car. Their relatively inexpensive price tag means that they are quite disposable for larger productions, and they have been used selectively on a number of large-scale Hollywood productions. A small number of shots from the ‘barrel escape’ scene in The Hobbit: The Desolation of Smaug (2013) used the GoPro HD Hero 2. The shots stand out quite a lot, and not in a good way. Notice how the very fast shutter speed of the GoPro footage, evident in the suspended water droplets in Figure 16.1, contrasts with the very apparent motion blur on the water droplets of the standard 180° shutter in Figure 16.2 and the rest of the film. This difference in shutter speed contributes to the shots really standing out. Other contributing factors may be the colours and the lower-quality codec, which would not stand up to much colour-grading manipulation before it started to degrade.

FIGURE 16.1  Frame from The Hobbit: The Desolation of Smaug 2013. Directed by Peter Jackson.

FIGURE 16.2  Frame from The Hobbit: The Desolation of Smaug 2013. Directed by Peter Jackson.

The reason a fast shutter speed was used on the GoPro footage is the fixed aperture of GoPro cameras. The camera cannot reduce the aperture to control exposure, so the shutter is the primary means of controlling it. Attaching an ND filter to the front of the camera, calibrated to cut the exact amount of light needed, might have meant that the standard film shutter speed of 1/50 (180° shutter angle) could have been used. To complicate things further, the rest of the film was shot at 48 fps to enable the HFR release, which meant using a shutter angle of 270° (1/64 at 48 fps).
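To put some numbers on the relationship between frame rate, shutter angle and ND strength, here is a small sketch in Python (the 1/400 ‘bright daylight’ shutter speed is an assumed figure for illustration, not taken from the production).

import math

def shutter_speed(fps, shutter_angle):
    """Exposure time in seconds for a given frame rate and shutter angle.

    A 180 degree angle exposes each frame for half its duration, so
    speed = 1 / (fps * 360 / angle).
    """
    return 1.0 / (fps * 360.0 / shutter_angle)

def nd_stops(fast_speed, target_speed):
    """Whole stops of ND needed to move from a fast shutter to a slower one.

    Each stop of ND doubles the exposure time you can afford, so the
    number of stops is log2(target / fast).
    """
    return math.log2(target_speed / fast_speed)

print(shutter_speed(24, 180))   # ~1/48 s, the cinema standard
print(shutter_speed(48, 270))   # ~1/64 s, as used for the HFR shoot
# If the GoPro had metered at, say, 1/400 s in bright light (an assumed figure),
# roughly three stops of ND would be needed to allow 1/50 s instead.
print(nd_stops(1 / 400, 1 / 50))

Three stops corresponds to a standard ND8 (0.9) filter, which is the kind of calibration described above.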

A GoPro Hero 3 was used exclusively on the first-person (point of view) feature film Hardcore Henry (2015). The biggest drawback with action cameras is that they are not very flexible. They are great for certain types of shot (wide shots) or styles (such as first-person), but beyond that they are limited in the shots that can be achieved. They also have a very small sensor, so a shallow depth of field is difficult to achieve.

GoPro Hero 6

TABLE 16.5  Analysis of the GoPro Hero 6 with ratings where applicable

The GoPro Hero 6 is currently the best of the GoPro line of cameras. Features such as the Protune option make LOG-like images possible. If you can make the wide-angle lens work with your filmmaking then this could be an inexpensive camera system; however, for shot variety and any kind of traditional cinematography, it is not really usable. It is simply not designed for cinema images. It is perfectly serviceable for point of view (POV) shots and the odd action shot, but getting traditional shot coverage with this camera would be difficult. This is a supplementary camera system in my opinion, one to invest in after you have a regular camera system in place. I would not recommend this type of camera for learning about cinematography.

Digital cinema cameras Digital cinema cameras are designed to replicate technical and aesthetic qualities of a celluloid film camera. They are completely appropriate for what we have been considering in this book. If you have access to one of these cameras, you should use it. They range from relatively inexpensive (for what they are) Blackmagic cameras, to the best of the best digital cinema cameras available, such as Arri and Red cameras. They are deliberately designed (and marketed) with a Super 35mm-sized sensor, for those wanting the depth of field characteristics of motion-picture film.

Blackmagic URSA Mini Pro 4.6K

The Blackmagic URSA Mini is one of the most affordable true cinema cameras. It has been specifically designed to create cinematic images and so records using high-quality codecs at high bitrates. The terminology is film-related: the camera displays the shutter as a shutter angle rather than a shutter speed. There are cheaper models available, but this is currently the best URSA Mini. Not many large film productions are made on Blackmagic cameras for some reason, but they are just as capable as the best cinema cameras available. The camera uses CFast 2.0 cards, which are currently quite expensive; this should be a factor when considering the camera.

TABLE 16.6  Analysis of the Blackmagic URSA Mini Pro 4.6K with ratings where applicable

Canon C200

The Canon C200 is towards the bottom of the Canon cinema range of cameras (the C100 is a cheaper alternative). The C200 has a great S35-sized imaging sensor and a range of useful features. It records to CFast 2.0 as well as cheaper SD cards (although not for the best codec options) and will produce modest-sized h.264-based files as well as larger, compressed raw files.

TABLE 16.7  Analysis of the Canon C200 with ratings where applicable

Notes
1    https://magiclantern.fm/.
2    Dynamic range estimate worked out through a stills (raw) benchmark and then reduced by four stops: www.dxomark.com/canon-eos-4000d-sensor-review-temptingly-affordable/; https://wolfcrow.com/blog/dynamic-range-comparison-of-raw-vs-video-mode-on-the-canon-550d/.
3    exFAT is a file system that has no limit on file size. It will also work between Mac and PC without issues. Any drive or card can be formatted to be exFAT.
4    According to IMDb, Unsane was shot using inexpensive (for a feature film) Moment lenses, with an h.264 codec and a LOG video profile via the free app FiLMiC Pro.

 

PLATE 1  Cross section of celluloid film. Source: Copyright © 2019 Sam Pratt.

PLATE 2  Warm image and cool image. Source: Copyright © 2019 Jonathan Kemp.

PLATE 3  Kelvin scale. Source: Copyright © 2018 Sam Pratt. Note: The Kelvin colour temperature scale, highlighting the two most common colour temperatures for film and video: 3200 for tungsten balance and 5600 for daylight balance.

PLATE 4  Frame from Fargo 1996. Directed by Joel and Ethan Coen.

PLATE 5  Red, green and blue channel image. Source: Copyright © 2019 Jonathan Kemp.

PLATE 6  YUV image. Source: Copyright © 2019 Jonathan Kemp.

PLATE 7  CIE visible colour spectrum. Source: https://en.wikipedia.org/wiki/CIE_1931_color_space#/media/File:CIE1931xy_blank.svg.

PLATE 8  Colour gamut comparison: REC709/REC2020. Source: Sakurambo, https://commons.wikimedia.org/wiki/File:CIExy1931.svg; https://en.wikipedia.org/wiki/Rec._709#/media/File:CIExy1931_Rec_709.svg; https://en.wikipedia.org/wiki/Rec._2020#/media/File:CIExy1931_Rec_2020.svg.

PLATE 9  Parade. Source: Photo by Emre Gencer on Unsplash.

PLATE 10  False colour. Source: Copyright © 2019 Jonathan Kemp. Image from the short film Freedom directed by Jonathan Kemp. Cinematography by Andy Toovey. Image of Rebecca Calienda.

PLATE 11  Focus peaking. Source: Copyright © 2019 Jonathan Kemp. Note: The top image features no peaking display. The bottom image has peaking turned on. The peaking highlights the areas that are in focus with a coloured or white line.

PLATE 12  Vectorscope. Source: Copyright © 2019 Jonathan Kemp. Note: Notice how the large amount of green in the image is represented in the vectorscope spreading out mainly towards the green and yellow section.

PLATE 13  Vectorscope test pattern. Source: Copyright © 2019 Jonathan Kemp. Note: Notice the solid colours of the test pattern are represented in the dots within the R, Mg, B, Cy, G and Yl boxes. The boxes represent maximum broadcast saturation levels.

PLATE 14  Bayer filter. Source: Copyright © 2019 Sam Pratt.

PLATE 15  Chroma subsampling, with finished image, four images. Source: Copyright © 2019 Jonathan Kemp.

PLATE 16  Representation of chroma subsampling. Source: Copyright © 2019 Sam Pratt.

THE RULES

•    Follow as many rules as possible to achieve a cinematic film look.
•    For a cinematic look, always shoot at 24 frames per second.
•    Avoid interlaced footage when creating cinematic images.
•    Always record using full manual mode.
•    Always try to use the native ISO of the camera or the lowest possible ISO or gain setting. Increase this as a last resort.
•    Always observe the 180° shutter-angle equivalent (1/48 at 24 fps or 1/50 at 25 fps).
•    During moments of intense action, consider a reduced shutter-angle equivalent, such as a 45° angle (1/200).
•    Use the aperture/iris in conjunction with ND filters to control the depth of field and exposure level of the video image.
•    Always perform a custom white balance for every lighting change.
•    Always use exposure tools to help you expose a shot. These may be waveform, histogram, zebra or false colour.
•    Always use focus peaking to help set focus for a shot.
•    Never assume that because an image looks ‘okay’ on a small monitor the exposure and focus are correct.
•    Consider the dynamic range of the camera you are working on. Make allowances for low dynamic range cameras.
•    Consider the use of lighting and reflectors to even up lighting levels between internal and external scenes.
•    Develop a personal understanding of what is an acceptable level of overexposure, and in what circumstance.
•    Choose a sensor that most closely resembles the dimensions of an S35 image plane, to replicate a similar cinematic depth of field.
•    Consider the stylistic or creative use of aspect ratios when composing your image, using onscreen guides to help compose shots for your chosen aspect ratio.
•    Use the highest resolution possible that you have the capacity to store and work with both on production and in post-production.
•    Consider ‘over scanning’ the image and shooting for a slightly wider composition for added flexibility in post-production.
•    Always consider the correct lens choice for the purpose of the shot in question.
•    Consider the budget, workflow and turnaround time of the video project when deciding on video codec specifics.
•    Use the best codec available at the highest bitrate possible, within the constraints of the project.
•    Before filming, plan how you will treat your images in post-production, to add cinematic qualities to your images.

GLOSSARY

16mm film – The gauge of a semi-professional film stock.
35mm film – The gauge of the predominant-sized film stock of the 20th century.
65mm – A large-gauge film stock used on a small number of high-quality film productions.
8mm film – A small-gauge film stock associated with home use.
Action camera – A small, lightweight camera that is simple to operate and can be attached to a helmet or other extremity to record footage that traditional cameras would not be able to. GoPro is a famous make of action camera.
Aliasing – The visual distortion and irregularities that appear when an image is scaled down. Typical aliasing artefacts are noticeable on brick walls and tight patterns on clothes.
Aperture – The part of a lens that opens or closes to let different amounts of light through. It can also be used to control the depth of field in an image. See also Iris.
APS-C – A common size of digital image sensor on crop-sensor DSLR cameras.
Aspect ratio – A description of the shape of a video image.
AVCHD – A popular type of compressed but professional video camera format.
Camcorder – A type of video recorder, usually handheld and portable.
CCD – Charged coupled device. A type of digital image sensor.
Celluloid – A plastic. Refers to the plastic base of a piece of celluloid film. Sometimes also refers to film stock.
Chroma – The colour information in a video image.
Chroma subsampling – The compression strategy of lowering the resolution of the colour information in a YUV video image to produce a smaller file.
Cinematography – The art of creating a moving image. Relates to the technical qualities as well as lighting, composition and other aesthetic considerations.
Cinerama – A pioneering kind of widescreen cinema that used three cameras to record an image, and three projectors to project onto a curved screen.
CMOS – Complementary metal-oxide-semiconductor. A commonly used digital imaging sensor.
Codec – A method of compressing and decompressing a video or audio image.
Colour correction – The craft of balancing and correcting visual images in post-production to create uniformity and compliance with professional standards.
Colour gamut – A reference to the particular range of colour values.
Colour grading – The artistic role of creating a distinctive look, using the manipulation of colour to act as another storytelling tool.
Colourist – The person responsible for colour grading a film or video.
Contrast – The difference in value between the black and white levels in a video image. High contrast has a large difference between the two extremes.
Crop factor – The ratio of a camera sensor’s size to a 35mm film frame.
CTB – Colour temperature blue. A blue-coloured gel that accurately converts tungsten-coloured light to daylight-coloured light.
CTO – Colour temperature orange. An orange-coloured gel that accurately converts daylight-coloured light into tungsten-coloured light.
Daylight colour temperature – Light that is rated around 5600 Kelvin. Broadly speaking, light that is blue coloured.
DCP – Digital cinema package. A collection of digital files used to store and convey digital cinema audio, image and data streams. The term was popularised by Digital Cinema Initiatives, LLC in its original recommendation for packaging digital cinema contents.
Deep focus – A cinematic and photographic technique that creates a large depth of field, i.e. all objects in the frame are in focus.
Deinterlace – The process of converting interlaced fields into one progressive frame.
Depth of field – An indication of the amount of foreground and background of an image that is in focus.
Digital camera – A camera that uses a digital imaging sensor to create a digital image.
Digital cinema camera – A digital camera that is designed to replicate many characteristics of a celluloid motion-picture film camera and create high-quality digital video files.
DNxHD – A high-quality intraframe video-editing codec, created for the Avid NLE system.
DSLR – Digital single lens reflex. A digital stills camera that uses a mirror to alternately project an image from the lens to the viewfinder and the imaging sensor.
Dynamic range – The difference in exposure levels that a camera can capture at the same time, usually rated in ‘stops’.
Exposure – The level of brightness that an image is exposed to. The brightness or darkness level.
F/stop – The measurement of light let in through the lens. A lower f/stop number means more light through the lens and also a shallower depth of field image.
False colour – An exposure tool that clearly identifies the brightness and darkness levels of a live image. It is a feature available on more cinematic cameras.
Film stock – The celluloid film that is used in a traditional motion-picture film camera.
FPS – Frames per second. The number of individual images in a film or video image that are captured and displayed each second.
Frame rate – See FPS.
Gain – A measure of the signal amplification of a video or audio image.
Gate (film gate) – The rectangular opening in the front of a motion-picture camera where the film is exposed to light.
Grain – A specular texture that is present on celluloid film. Some consider the organic nature of film grain to be aesthetically appealing.
H.264 – A video codec. Commonly used on the internet, many video cameras and on Blu-ray discs.
H.265 – Also known as HEVC (high efficiency video codec). H.265 is an advanced compression, capable of much smaller file sizes than H.264.
HD – High definition. An image that is 1920 × 1080 pixels in size.
HDR – High dynamic range. An image that has a very large difference between the brightest and the darkest parts.
HDV – High definition video. An older video standard that featured rectangular pixels and a resolution of 1440 × 1080.
HFR – High frame rate. A cinematic format that doubles the standard 24 fps cinema frame rate to increase the perception of motion.
Histogram – A visual tool that shows the exposure level of a shot.
Hue – A measurement of colour.
Image plane – The surface that the image projected from the lens exposes onto. Either celluloid film or a digital sensor.
IMAX – The largest format, highest quality celluloid film cinema system.
Interframe compression – An image-compression strategy that seeks to compress a video file by recording a keyframe and then only recording the different pixels in subsequent frames.
Interlaced frames – A method of effectively doubling a frame rate (i.e. 25 frames to 50 interlaced fields) by splitting each frame in half and allocating alternate lines of pixels to each field.
Intraframe compression – An image-compression strategy that seeks to compress each frame individually by removing redundant neighbouring pixels.
IRE – A unit used in the measurement of composite video signals. Its name is derived from the initials of the Institute of Radio Engineers.
Iris – See Aperture.
ISO – ISO is the same thing as Gain: the boosting of a video image. ISO is a standard that is comparable across different camera models and manufacturers.
Kelvin scale – The scale at which the colour of light is measured. A lower number is redder (i.e. 3200 is tungsten coloured) and a higher number is bluer (i.e. 5600 is daylight coloured).
Kodak – The most popular manufacturer of celluloid film stock.
Lens – A glass tool that magnifies a projected image onto an image plane, either celluloid film stock or a digital image sensor.
Letterboxing – The black bars at the top and bottom of a widescreen image to make the aspect ratio up to the standard 16:9 shape.
Light meter – A tool for measuring the exposure level of a scene. Light meters can be external or inside the camera.
LOG footage – Video footage that records the exposure levels using a logarithmic curve. It is a method of squeezing the dynamic range of an image to allow for greater tolerance of brightness and darkness levels.
Luma – Luminance. A measure of the brightness and darkness levels. Luma contains no colour.
LUT – Look-up table. A small digital file that contains information that an NLE or colour-grading application uses to change colour, contrast and saturation levels.
Matte box – A box on the front of the lens that helps reduce direct sunlight hitting the lens and causing lens flares. Other filters can also be attached to the matte box.
MFT – Micro Four Thirds. A standard released by Olympus and Panasonic in 2008 for the design and development of mirrorless, interchangeable-lens digital cameras, camcorders and lenses. Camera bodies are available from Blackmagic, DJI, JVC, Kodak, Olympus, Panasonic and Xiaomi.
MiniDV – An old type of cassette video tape for consumer and professional camcorders.
Monochrome – An image made up of a single colour.
Motion blur – The streaky effect of objects moving across the frame while the shutter is open.
Motion picture – A film or a movie.
Native ISO – The ISO setting that a digital camera operates at without any signal amplification.
ND filter – Neutral density filter. Reduces the exposure level of a video image by a specific number of ‘stops’. Can be thought of as sunglasses for the camera.
NLE – Non-linear editing. A software video-editing environment. Examples of NLE systems are Adobe Premiere Pro, DaVinci Resolve, Avid Media Composer and Final Cut Pro X.
Noise – Unwanted elements that increase as the level of audio or video is boosted. This could be ‘hiss’ in audio, or specular noise in video images.
Normal lens – A lens with a magnification similar to that seen by the human eye.
NTSC – Named after the National Television System Committee. The video system or standard used in North America, Japan and most of South America. In NTSC, 30 frames are transmitted each second.
PAL – Phase Alternating Line. The predominant video system or standard used in most of Europe, the Middle East, South Africa and Australia. In PAL, 25 frames are transmitted each second.
Peaking – A focus tool that indicates areas of an image that are in focus by highlighting those parts in a colour.
Photosite – The area on a digital sensor that captures light information to be turned into pixels in the video image.
Pillarboxing – The black bars at the sides of a narrower image to make the aspect ratio up to the standard 16:9 shape.
Pixel – The smallest picture element that makes up a digital image.
Post-production – The process of editing, colour grading and adding visual effects to a video project.
Progressive frames – As opposed to interlaced frames; frames that are captured all at once.
ProRes – An intermediate video-editing codec created by Apple.
Proxy – A smaller video file that can be created in an NLE system to reduce the file size and improve performance.
REC2020 – A wide video colourspace for UHD content.
REC709 – The standard video (SD and HD) colourspace.
Resolution – The number of pixels contained in a video image.
RGB – Red, green and blue. The three image colours that comprise video images.
RGB parade – A separate waveform for each of the red, green and blue channels.
S35 – Super 35. A 35mm film frame that uses the portion of the frame reserved for the optical soundtrack.
Saturation – The amount of colour that is present in an image. No saturation would result in a black and white image.
SD – Standard definition. An image that is 704 × 480 pixels in size (NTSC) or 704 × 576 pixels in size (PAL).
Shallow depth of field – A cinematic and photographic technique that creates a small depth of field, i.e. only a narrow part of an image is in focus.
Sharpness – The edge contrast of an image. This can be increased in camera or in post-production to increase the perceived sharpness of an image.
Shutter angle – The duration of time that a frame is exposed for, expressed in degrees of a circular spinning disc shutter, i.e. 180°.
Shutter speed – The duration of time that a frame is exposed for, expressed in fractions of a second, i.e. 1/50 of a second.
Smartphone camera – A camera that is a part of a typical modern smartphone.
SMPTE – Society of Motion Picture and Television Engineers. Founded in 1916 as the Society of Motion Picture Engineers (or SMPE). A global professional association of engineers, technologists and executives working in the media and entertainment industry.
Stop – A full ‘stop’ of exposure represents a doubling or a halving of the exposure level.
Super 35mm – An augmentation of the standard 35mm film stock used in motion-picture cameras. The film gate is adjusted to extend the image plane to use the area that was given to the optical soundtrack.
Super 16 – An augmentation of the standard 16mm film stock used in motion-picture cameras. The film gate is adjusted to extend the image plane to use the area that was given to recording the optical soundtrack.
Super 8 – An augmentation of the standard 8mm film stock used in motion-picture cameras. The film gate is adjusted to extend the image plane to use the area that was given to recording the optical soundtrack.
T/stop – Can be thought of as the same as an f/stop. The t/stop is a more accurate measurement of light, compared to the f/stop, which is theoretical.
Techniscope – Also called 2-perf. A 35mm motion-picture camera film format introduced by Technicolor Italia in 1960. The Techniscope format uses a two film-perforation negative pulldown per frame, instead of the standard four-perforation frame usually exposed in 35mm film photography.
Telephoto lens – A long lens that magnifies objects that are further away.
Tungsten colour temperature – Light that is rated around 3200 Kelvin. Broadly speaking, light that is orange coloured.
UHD – Ultra high definition. An image that is 3840 × 2160 pixels in size. Also referred to as 4K.
Vectorscope – A colour tool that displays the colours in an image and their saturation levels.
VistaVision – A format of motion-picture cinematography developed by Paramount. The system used standard 35mm film, but oriented horizontally, as opposed to vertically, which resulted in a much larger surface area and higher resolution image.
White balance – A setting on a digital camera that adjusts the camera’s sensitivity to colour temperature.
Wide lens – A short lens that captures a very wide field of view.
Widescreen – A general term that represents a wider screen shape.
YUV – A video colour space that features one luma channel (Y) for the black and white image and two chroma channels (U + V) for the colour component of an image.
Zebra – An exposure tool that displays a striped zebra pattern on an area of a video image that is exposed properly, or overexposed (depending on the settings).

INDEX

8mm film 94–96, 102, 127
16mm film 39, 94–96, 98
35mm film 4, 6–9, 39, 56–59, 94, 95–101, 103, 107, 111–114, 117, 127–128, 136, 139–140, 142, 193
65mm 97, 101–102, 127
action camera 104, 183, 190, 192
aliasing 107–108, 185
aperture 28–29, 31–35, 54–56, 59–61, 63–64, 77, 108, 130–131, 134, 136–137, 184, 186–189, 191–192, 194–196
APS-C 57–58, 106, 137, 143, 186
aspect ratio 8, 92–93, 98, 107–125, 127, 141
AVCHD 152–155, 164, 184
camcorder 4, 7, 29–31, 50–51, 57, 59, 62, 68, 79–80, 82, 102, 104, 123, 137–138, 142–143, 163, 183–184
CCD 7, 102
celluloid 4–9, 14–15, 17, 22, 27–31, 37–40, 42, 46, 49, 57, 59, 66–67, 73–76, 85, 88, 93–95, 100–101, 111–114, 127–128, 146–147, 179, 193
chroma 70, 74, 155, 156, 157, 158, 164, 188, 194
chroma subsampling 155–158, 164, 188, 194
cinematography 2, 4, 21–22, 25, 27, 31, 67, 76, 98, 134, 185, 190, 193
Cinerama 112–113
CMOS sensor 15, 103, 184, 186–187, 189, 192, 194–195
codec 9, 65, 71, 127, 146–156, 159, 161–167, 181, 184–190, 192, 194–195, 197
colour correction 68, 167–168, 170–171, 178–179, 182, 197
colour gamut 72–73, 124, 149, 169, 171
colour grading 10, 69, 83, 155, 167, 170–171, 178, 190
colourist 12, 166, 167, 170, 171
contrast 39, 71, 83, 87, 88, 91, 166, 169, 177, 190
crop factor 140–144, 186
CTB 69
CTO 69
daylight colour temperature 39, 65–67, 69, 76
DCP 23, 125
deep focus 33, 55–57, 59, 63, 95
deinterlace 18–21, 24
depth of field 5, 9–10, 25, 33, 55–57, 59, 60–61, 63–64, 94, 104, 106, 108, 183–185, 189–190, 192–193
digital cinema camera 1, 8–9, 17, 18, 88–89, 108, 154, 183, 193
DNxHD 149–150, 163–164, 190
DSLR 4, 9–10, 12, 29, 35–36, 39, 51–52, 57, 59–60, 62, 68, 78, 82, 102–104, 106–108, 113, 137, 149, 157, 163, 183, 185–186, 188
dynamic range 27, 77, 79, 84–91, 124, 169–170, 183–187, 189, 192, 194–195
exposure 22, 25, 27–35, 41, 48–49, 55, 60–61, 63, 75–82, 84, 86, 89, 90–92, 130, 147, 168–171, 185, 188–189, 191–192
f/stop 27, 29–32, 34–35, 56, 59, 63, 130
false colour 76, 82–83, 87
film stock 5, 7, 17, 27, 37–40, 44, 64, 66–67, 76, 93–99, 101, 111, 113–114, 127, 179, 181
FPS 16, 17, 20–24, 45, 48–52, 54, 61, 98, 184, 186–189, 191–195
gain 28–30, 33–35, 37, 41–44, 61, 63, 77, 91, 98, 104, 116, 126–127
gate (film gate) 12, 15, 93–95, 98, 99, 101, 127
grain 5, 25, 33, 37–44, 61, 95, 101, 111–112, 114, 166, 179–180, 182, 190
H.264 148, 152–154, 163–164, 184–190, 192, 194–195
HD 11, 20, 73, 90, 102, 107, 114, 121–127, 147, 154–155, 186–187, 190, 194–195
HDR 84, 85, 89, 90, 124, 187
HDV 123
HFR 20–23, 187, 191
histogram 75, 78, 79, 83
hue 72, 83
image plane 15, 30–31, 57, 59, 92–95, 98–102, 106–107, 109, 111, 130–131, 140, 143–144
IMAX 7, 44, 97, 102, 127
interframe compression 159, 161, 186, 187
intraframe compression 158–161, 163–164, 167, 187
IRE 77–78, 80–82
iris 28, 31–34, 54, 56, 59–61, 63–64, 77, 130
ISO 28–30, 33–37, 39, 41–44, 61, 63–64, 77, 104, 188
Kelvin scale 66–68, 72, 74
Kodak 6, 39–40, 44, 127, 179
lens 3–4, 6–7, 10–11, 15, 22, 27–29, 31–32, 35–36, 44, 49, 59–64, 94–95, 99–100, 103, 113, 115, 117, 127, 130–142, 144–145, 148, 184–190, 192–195
letterboxing 115, 117
light meter 34, 75, 76, 77, 87
log footage 89, 168–171, 179, 184
luma 70, 78, 80, 87, 155–157, 168
LUT 169–171, 178–179, 182
matte box 61–62
MFT 104–106, 137, 141–144
MiniDV 7
motion blur 14, 17, 33, 45–54, 61, 190
ND filter 61–64, 91, 184, 186–189, 191–195
NTSC 20, 23, 51, 121–123
PAL 20, 51–52, 121–122
peaking 75, 82–83, 185, 188–189
pixel 3, 7, 18, 47, 95, 102, 104, 107, 111, 120–127, 156–158, 160
ProRes 150, 162–164, 190, 194
proxies 162, 167
REC709 73–74, 90, 169–172, 179
REC2020 73–74, 90, 124, 149
resolution 6, 7, 44, 92, 94–95, 98–102, 104, 107, 109, 112–115, 119–129, 147, 156–157, 162, 167, 184–189, 192, 194–195
RGB 70–72, 80, 103, 147, 155, 158
S35 8–9, 59, 93, 99, 104, 106–108, 127, 137, 141–144, 185–187, 194–195
saturation 72, 83, 169, 171, 172–174, 178, 182
sharpness 48, 72
shutter angle 30, 45, 49–50, 52–54, 61, 188, 191, 193, 195
shutter speed 28–31, 33–35, 45–54, 61, 63, 77, 108, 184, 186–188, 190, 192, 194–195
smartphone camera 2–4, 12, 22, 31, 40, 89, 102–104, 141, 143, 162, 169, 183, 188–189
Techniscope 96, 100, 101
tungsten colour temperature 39, 66–69
UHD 7, 73, 107, 114, 122, 124–128, 149, 155, 189, 192, 194–195
vectorscope 83, 172–177
VistaVision 97, 101, 102, 107, 113, 114
white balance 65, 67–68, 74, 80, 147
widescreen 95–96, 98–99, 100–101, 112–114, 117, 121–123
YUV 70, 71, 74, 155
zebra 75, 80–83, 185, 188–189