Physics in Focus HSC 9780170226776

Physics in Focus has set a new standard for supporting the New South Wales Stage 6 Science syllabuses. Each text aims to


English · 474 pages · 2009


Table of contents:
Contents
About the authors
To the student
Acknowledgments
List of Board of Studies verbs
Physics skills—an introduction
CHAPTER 1 Gravity
1.1 Gravitational field and weight
1.2 Universal gravitation
1.3 A closer look at gravitational acceleration
Secondary source investigation: g on other planets and the application of F = mg
1.4 Gravitational potential energy
First-hand investigation: Simple pendulum motion
Chapter revision questions
CHAPTER 2 Space exploration
2.1 Projectile motion
2.2 Galileo’s analysis of projectile motion
2.3 Circular motion
2.4 Circular motion and satellites
2.5 A quantitative description of Kepler’s third law
2.6 Geostationary satellites and low Earth orbit satellites
2.7 Escape velocity
2.8 Leaving Earth
2.9 Coming back to Earth
Secondary source investigation: The work of rocket scientists
First-hand investigation: Projectile motion
Chapter revision questions
CHAPTER 3 Gravity, orbits and space travel
3.1 Gravity: a revision
Secondary source investigation: Factors that affect the size of gravitational attraction
3.2 The slingshot effect
Chapter revision questions
CHAPTER 4 Special relativity
4.1 The aether model
4.2 The Michelson–Morley experiment
Secondary source investigation: The Michelson–Morley experiment
4.3 Frames of reference
First-hand investigation: Non-inertial and inertial frames of reference
4.4 Principles of special relativity
4.5 Impacts of special relativity
4.6 The modern standard of length
Secondary source investigation: Evidence for special relativity
4.7 Limitations of special relativity and the twin paradox
Secondary source investigation: Thought experiments and reality
4.8 Implications of special relativity for future space travel
Chapter revision questions
CHAPTER 5 The motor effect
5.1 Some facts about charges and charged particles
5.2 The motor effect
5.3 Force between two parallel current-carrying wires
5.4 Torque: the turning effect of a force
5.5 Motor effect and electric motors
5.6 The need for a split ring commutator in DC motors
5.7 Features of DC motors
Secondary source investigation: Applications of the motor effect
First-hand investigation: Demonstrating the motor effect
Chapter revision questions
CHAPTER 6 Electromagnetic induction
6.1 Michael Faraday’s discovery of electromagnetic induction
6.2 Magnetic field lines, magnetic flux and magnetic flux density
6.3 Faraday’s law: a quantitative description of electromagnetic induction
6.4 Lenz’s law
6.5 Lenz’s law and the conservation of energy
6.6 The need for external circuits
6.7 Another application of Lenz’s law: back EMF in DC motors
6.8 Eddy currents
Secondary source investigation: Applications of induction and eddy currents
Secondary source investigation: Eddy current braking
First-hand investigation: Electromagnetic induction
First-hand investigation: The effects of magnets on electric currents
Chapter revision questions
CHAPTER 7 Generators
7.1 Generators
7.2 Magnetic flux, changing of flux and induced EMF
7.3 The difference between a DC generator and an AC generator
7.4 The transmission wires
Secondary source investigation: Insulating and protecting
Secondary source investigation: Advantages and disadvantages of AC and DC generators
Secondary source investigation: The competition between Westinghouse and Thomas Edison
7.5 Assess the impacts of the development of AC generators on society and the environment
First-hand investigation: The production of an alternating current
Chapter revision questions
CHAPTER 8 Transformers
8.1 Transformers: What are they?
Secondary source investigation: Energy lost in transformers
8.2 Types of transformers
8.3 Calculations for transformers
8.4 Voltage changes during the transmission from power plants to consumers
Secondary source investigation: The role of transformers for long-distance transmissions
8.5 The need for transformers in household appliances
8.6 The impact of the invention of transformers
First-hand investigation: Producing secondary voltage
Chapter revision questions
CHAPTER 9 AC motors
9.1 AC electric motors
9.2 AC induction motors
Secondary source investigation: Energy transformation
First-hand investigation: Demonstrating the principle of an AC induction motor
Chapter revision questions
CHAPTER 10 From CRTs to CROs and TVs
10.1 A cathode ray tube: the idea
10.2 Electric fields
10.3 Forces acting on charged particles in electric and magnetic fields
10.4 Debates over the nature of cathode rays: waves or particles?
10.5 J. J. Thomson’s charge to mass ratio experiment
First-hand investigation: Properties of cathode rays
10.6 Applications of cathode ray tubes (CRTs): implementation
First-hand investigation: Observing different striation patterns
Chapter revision questions
CHAPTER 11 From the photoelectric effect to photo cells
11.1 Electromagnetic radiation (EMR)
Hertz’s discovery of radio waves and his measurement of their speed
11.2 Hertz’s experiment: production and reception of EMR
11.3 The photoelectric effect
11.4 Quantum physics
11.5 Black body radiation and the black body radiation curve
11.6 Particle nature of light
11.7 Einstein’s explanation for the photoelectric effect: a quantum physics approach
11.8 Using Einstein’s explanation to investigate the photoelectric effect
11.9 Einstein’s contributions to quantum physics and black body radiation
Secondary source investigation: Applications of the photoelectric effect: the implementation
Secondary source investigation: Can science be set free from social and political influences? Einstein and Planck’s views
First-hand investigation: The production of radio waves
Chapter revision questions
CHAPTER 12 From semiconductors to solid state devices
12.1 Valence shell and valence electrons
12.2 Metals: metallic bonds and the sea electron model
12.3 The structure of semiconductors
12.4 Band structure and conductivity for metals, semiconductors and insulators
12.5 A closer look: intrinsic and extrinsic semiconductors
Secondary source investigation: Implementations of semiconductors: solid state devices
12.6 Solid state devices versus thermionic devices
12.7 Why silicon not germanium?
Secondary source investigation: More solid state devices: solar (photovoltaic) cells
Secondary source investigation: Integrated circuits and microchips: extensions of transistors
First-hand investigation: Modelling the behaviour of semiconductors
Chapter revision questions
CHAPTER 13 From superconductors to maglev trains
13.1 Braggs’ X-ray diffraction experiment
13.2 Metal structure
13.3 The effects of impurities and temperature on conductivity of metals
13.4 Superconductivity
Secondary source investigation: More on superconductors
13.5 Explaining superconductivity: the BCS theory
First-hand investigation: The Meissner effect
13.6 Explaining the Meissner effect
Secondary source investigation: Applications of superconductors
13.7 Limitations of using superconductivity
Chapter revision questions
CHAPTER 14 The models of the atom
14.1 The early models of the atom
14.2 Rutherford’s model of the atom
14.3 Planck’s hypothesis
14.4 The hydrogen emission spectrum
14.5 Bohr’s model of the atom: introduction
14.6 Bohr’s model and the hydrogen emission spectrum
14.7 More on the hydrogen emission spectrum
14.8 Limitations of Bohr’s model of the atom
First-hand investigation: Observe the visible components of the hydrogen emission spectrum
Chapter revision questions
CHAPTER 15 More on the models of the atom
15.1 The wave–particle duality of light
15.2 Matter waves
15.3 Proof for matter waves
15.4 Applying the matter waves to the electrons in an atom
Secondary source investigation: Pauli and the exclusion principle
Secondary source investigation: Heisenberg and the uncertainty principle
Chapter revision questions
CHAPTER 16 The nucleus and nuclear reactions
16.1 The nucleus
16.2 The discovery of neutrons
16.3 Radioactivity and transmutation
16.4 Wolfgang Pauli and the discovery of neutrinos
16.5 Strong nuclear force
16.6 Fermi’s artificial transmutation experiments
16.7 The accidental discovery of nuclear fission reactions
16.8 Fermi’s discovery of chain reactions
16.9 Comparing controlled and uncontrolled fission reactions
16.10 Fermi’s first controlled chain reaction
16.11 Mass defect and binding energy
16.12 Energy liberation in nuclear fission
First-hand and secondary source investigation: Observe radiation emitted from a nucleus using a Wilson cloud chamber
Chapter revision questions
CHAPTER 17 Applications of nuclear physics and the standard model of matter
17.1 An application of nuclear fission reactions—a typical fission
17.2 Radioisotopes and their applications
Secondary source investigation: Uses of radioisotopes
17.3 Neutron scattering and probing
Secondary source investigation: The Manhattan Project
17.4 Particle accelerators
17.5 The standard model of matter
Chapter revision questions
CHAPTER 18 Ultrasound
18.1 Sound waves
18.2 Ultrasound
18.3 Piezoelectric materials and the piezoelectric effect
18.4 The basic principle behind ultrasound imaging
18.5 A closer look at the reflection and the penetration of ultrasound waves
18.6 Different types of ultrasound scans
Secondary source investigation: The clinical uses of ultrasound
18.7 Doppler ultrasound
18.8 Doppler ultrasound as a diagnostic tool
Secondary source investigation: Ultrasound as a tool for measuring bone density
Chapter revision questions
CHAPTER 19 X-rays, computed axial tomography and endoscopy
19.1 The nature of X-rays
19.2 The production of X-rays for medical imaging
19.3 Using X-rays for medical imaging: the principle
19.4 The clinical uses of X-ray imaging
Secondary source investigation: Gathering X-ray images
19.5 Computed axial tomography
19.6 The functional principle of CT scans
19.7 The clinical uses of CT scans
Secondary source investigation: Observing and comparing CT scans
19.8 Endoscopy
19.9 Optical fibres
19.10 Endoscopes
19.11 The uses of endoscopes
Secondary source investigation: Observing endoscope images
First-hand investigation: Transfer of light by optical fibres
Chapter revision questions
CHAPTER 20 Radioactivity as a diagnostic tool
20.1 Radioactivity
20.2 Half-life
20.3 Radioactivity as a diagnostic tool: nuclear medicine
First-hand and secondary source investigation: The uses of radioisotope scans
20.4 Positron emission tomography
20.5 The operating principle of a PET scan
20.6 Applications of PET scans
Secondary source investigation: Using PET scans to detect diseased organs
20.7 Evaluation of the uses of radioisotope scans
Chapter revision questions
CHAPTER 21 Magnetic resonance imaging
21.1 Nuclear spin
21.2 The nucleus in a magnetic field
21.3 Precession
21.4 Larmor frequency
21.5 Image formation: locating the signals
21.6 Image formation: tissue differentiation and contrasts
Secondary source investigation: Hardware used in magnetic resonance imaging
First-hand and secondary source investigation: Medical uses of MRI
Secondary source investigation: A comparison between the imaging techniques
Secondary source investigation: The impact of medical applications of physics on society
Chapter revision questions
CHAPTER 22 Observing our Universe
22.1 Galileo’s observations of the heavens
22.2 The atmosphere is a shield
22.3 Resolution and sensitivity
22.4 Earth’s atmosphere limits ground-based astronomy
22.5 Improving resolution
First-hand investigation: The relationship between the size of the instrument and sensitivity
Chapter revision questions
CHAPTER 23 Astrometry: finding the distance to stars
23.1 Measuring distances in space
23.2 The limitations of trigonometric parallax
Secondary source investigation: The relative limits of ground-based and space-based trigonometric parallax
Chapter revision questions
CHAPTER 24 Spectroscopy: analysing the spectra of stars
24.1 Producing spectra
24.2 Measuring spectra
24.3 Stellar objects and types of spectra
24.4 The key features of stellar spectra
24.5 The information about a star from its spectrum
First-hand investigation: Examining spectra
Secondary source investigation: Predicting a star’s surface temperature from its spectrum
Chapter revision questions
CHAPTER 25 Photometry: measuring starlight
25.1 Stellar magnitude
25.2 Using magnitude to determine distance
25.3 Spectroscopic parallax
25.4 Colour index
First-hand investigation: Using filters for photometric measurements
25.5 Photographic versus photoelectric technology
Secondary source investigation: The impact of improvements in technology in astronomy
Chapter revision questions
CHAPTER 26 Variable and binary stars
26.1 Binary stars and their detection
First-hand investigation: Modelling light curves of eclipsing binaries
26.2 Important information from binary stars
26.3 Classifying variable stars
26.4 Distance and the period-luminosity relationship for Cepheid variables
Chapter revision questions
CHAPTER 27 The life cycle of stars
27.1 The birth of a star
27.2 The key stages in a star’s life
27.3 Types of nuclear reactions within stars and the synthesis of elements
First-hand investigation: Plotting stars on a Hertzsprung-Russell diagram
First-hand investigation: Using the HR diagram to determine a star’s evolutionary stage
27.4 Determining the age of globular clusters
Secondary source investigation: The evolutionary paths of stars with different masses
27.5 The death of stars
Chapter revision questions


CHAPTER 1

Gravity

The Earth has a gravitational field that exerts a force on objects both on it and around it

Introduction

In March 1906, the first (mono)plane was constructed and flew a distance of 12 m. On October 4, 1957, the Soviet Union launched the world's first artificial satellite, Sputnik 1. On January 31, 1958, the first US satellite, Explorer 1, was launched. On July 20, 1969, Apollo 11 landed two men on the surface of the Moon for the first time. On April 12, 1981, the first reusable manned vehicle, the Space Transportation System (colloquially known as the space shuttle), was launched by the United States. Humans have come a long way in developing technologies and improving instruments for space travel and exploration. Chapter 1 looks at the behaviours and interactions of objects in the Universe under the influence of gravity. The concepts of gravitational acceleration and potential energy will be discussed in detail.

1.1 Gravitational field and weight

■ Define weight as the force on an object due to a gravitational field

Any object with mass has its own gravitational field. Just as a stationary electric charge produces an electric field and a bar magnet produces a magnetic field, so a mass produces a gravitational field.

What is the difference between 'mass' and 'weight'?

Definition: Mass is the quantity of matter; it is an absolute measurement of how much matter is in a body or an object. Mass has the SI unit of kilogram (kg).

Definition: Weight is the force which acts on a mass within a gravitational field. Weight is proportional to the strength of the gravitational field. The SI unit for weight is the newton (N).

Mass and weight are related by a simple equation:


F = mg

Where:
F = weight or the weight force (the terms are usually used interchangeably), measured in N
m = mass of the object, measured in kg
g = gravitational acceleration on the object due to the presence of the gravitational field, measured in m s⁻²

For example, an object with a mass of 5 kg will have different weights on different planets such as Earth, Jupiter or Saturn, due to the different gravitational field strengths or gravitational accelerations. Note: In everyday life, ‘weight’ can actually refer to ‘mass’. For instance: ‘How much do you weigh?’ ‘70 kg’, might be the answer. Clearly, the unit is in kg, not N. Bathroom scales actually measure the weight force and automatically convert the read-out into the mass equivalent, assuming the scales are on Earth. Note: On Earth’s surface, g can be taken as 9.8 m s–2 downwards.
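A minimal Python sketch of this idea, using the g values quoted in this chapter and in Table 1.1 (the choice of planets is illustrative only):

```python
# A minimal sketch of F = mg. The g values are those quoted in this
# chapter (Earth) and in Table 1.1 (Jupiter, Saturn), in m s^-2.
SURFACE_G = {"Earth": 9.8, "Jupiter": 24.81, "Saturn": 10.45}

def weight(mass_kg, g):
    """Weight force F = mg, in newtons."""
    return mass_kg * g

mass = 5.0  # kg: mass is absolute, the same everywhere
for body, g in SURFACE_G.items():
    print(f"On {body}: F = {weight(mass, g):.1f} N")
```

The same 5.0 kg mass gives a different weight for each body, which is exactly the point of the worked example above.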

1.2 Universal gravitation

For any two masses separated in space, there is a force of attraction between them due to the interaction of their gravitational fields. This is what Sir Isaac Newton referred to as the law of universal gravitational attraction. The attraction force acts towards the centre of each mass. The magnitude of the attraction force between two objects is proportional to the product of their masses and inversely proportional to the square of the distance of their separation:

F = G m1m2/d²

Observing an apple falling inspired Sir Isaac Newton to develop the law of universal gravitational attraction

Where:
m1 and m2 = masses of the two objects, measured in kg
d = the distance between the two objects measured from the centre of each object (mass), as shown in Figure 1.1; the distance is measured in m
G = the universal gravitational constant, that is, 6.67 × 10⁻¹¹ N m² kg⁻²


Figure 1.1  The distance d is measured from the centres of the masses (Planet A and Planet B), so the distance in this case is d, not d′

Note: A common error is to leave out the square sign on d². There is a justification for this square sign: the gravitational field radiates out three-dimensionally, just as light radiates out from a candle. Light intensity follows the inverse square law, where the intensity is inversely proportional to the distance squared:

I ∝ 1/d²  ∴ I = k/d²

The gravitational field strength also follows the inverse square law, so the field strength is inversely proportional to the distance squared. Consequently, the attraction force is also inversely proportional to the distance squared.

Example 1

Why don't two people walking on the street get pulled towards each other due to a mutual gravitational attraction?

Solution

Consider the following: two people, one with a mass of 60 kg and the other with a mass of 80 kg, are separated by a distance of 5.0 m on the street. Calculate the attraction force between them.

F = G m1m2/d²

Known quantities: G = 6.67 × 10⁻¹¹, m1 = 60 kg, m2 = 80 kg, d = 5.0 m

∴ F = (6.67 × 10⁻¹¹ × 60 × 80) / 5.0² = 1.3 × 10⁻⁸ N towards each other

The force between them is too small to have any effect; in fact, it is even too small to be detected in any usual manner.


Example 2

Determine the magnitude of the gravitational attraction force between the Sun and the Earth, given that the mass of the Sun is 1.99 × 10³⁰ kg, the mass of the Earth is 6.0 × 10²⁴ kg and they are separated by 1.5 × 10⁸ km as measured between their centres.

Solution

F = G m1m2/d²

Known quantities: m1 = 1.99 × 10³⁰ kg, m2 = 6.0 × 10²⁴ kg, G = 6.67 × 10⁻¹¹, d = 1.5 × 10¹¹ m

∴ F = (6.67 × 10⁻¹¹)(1.99 × 10³⁰)(6.0 × 10²⁴) / (1.5 × 10¹¹)² = 3.54 × 10²² N attraction
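Because the same formula is applied repeatedly, it is convenient to wrap it in a small function. A minimal Python sketch, checked against the two worked examples above:

```python
G = 6.67e-11  # universal gravitational constant, N m^2 kg^-2

def gravitational_force(m1, m2, d):
    """Newton's law of universal gravitation: F = G*m1*m2/d**2 (newtons)."""
    return G * m1 * m2 / d**2

# Example 1: two people, 60 kg and 80 kg, 5.0 m apart
print(gravitational_force(60, 80, 5.0))              # ~1.3e-08 N

# Example 2: Sun (1.99e30 kg) and Earth (6.0e24 kg), 1.5e11 m apart
print(gravitational_force(1.99e30, 6.0e24, 1.5e11))  # ~3.5e22 N
```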

1.3 A closer look at gravitational acceleration

'g' is the gravitational acceleration acting on an object due to the presence of the gravitational field. What are the factors that determine the magnitude of g?

Consider an object of mass m on the surface of the Earth, which has a mass of M, as shown in Figure 1.2. The object will have a weight force of Fw = mg towards the centre of the Earth. This weight force is created by the universal gravitational attraction force between the object and the Earth:

Fg = Fw
G mM/d² = mg
∴ g = G M/d²

Figure 1.2  An object of mass m on the surface of the Earth (mass M); the weight force mg acts towards the Earth's centre, a distance d away

To summarise, the gravitational acceleration of any planet is proportional to the mass of the planet and inversely proportional to the square of the distance from the centre of the planet. This equation can be generalised for any planet.

Where:
M = the mass of the planet, such as the Earth, measured in kg
d = the distance from the centre of the planet to the point at which g is measured, in metres. If the object is at the planet's surface, then d = r, where r is the radius of the planet.


secondary source investigation
g on other planets and the application of F = mg

physics skills: H14.1a, d, e, g, h; H14.2a; H14.3c, d

■ Gather secondary information to predict the value of acceleration due to gravity on other planets
■ Analyse information using the expression F = mg to determine the weight force for a body on Earth and for the same body on other planets

The value of the gravitational acceleration, g, on the surface of any planet or other body can be found by using its radius and its mass. Once g is known, the weight of any object can be calculated.

Example 1

Find the value of the gravitational acceleration on the surface of the Earth.

g = G M/d²

Mass of the Earth: 6.0 × 10²⁴ kg
Radius of the Earth: 6.37 × 10³ km

g = (6.67 × 10⁻¹¹)(6.0 × 10²⁴) / (6.37 × 10⁶)² = 9.84 m s⁻² downward

Note: Remember to convert the radius in kilometres to metres, as the d in the formula is only measured in metres.

Example 2

Find the magnitude of the gravitational acceleration on the surface of the planet Jupiter.

g = G M/d²

Mass of Jupiter: 1.90 × 10²⁷ kg
Radius of Jupiter: 7.15 × 10⁴ km

g = (6.67 × 10⁻¹¹)(1.90 × 10²⁷) / (7.15 × 10⁷)² = 24.8 m s⁻² towards the centre of Jupiter


Example 3

Calculate the weight of a textbook with a mass of 5.0 kg on both Earth and Jupiter.

F = mg

(a) On Earth: g = 9.84 m s⁻²; F = 5.0 × 9.84 = 49.2 N
(b) On Jupiter: g = 24.8 m s⁻²; F = 5.0 × 24.8 = 124 N

Note: This example again shows that the mass of an object is an absolute quantity, whereas the weight changes with variations in gravitational acceleration.

Table 1.1 provides information about the mass and radius of the planets and Pluto, the Sun and the Earth's Moon, the values of their gravitational acceleration, and the weight of an object with a mass of 5.0 kg on their surface.

Table 1.1
Body      Mass (kg)     Radius (km)   Surface gravity (m s⁻²)   Weight of a 5.0 kg object (N)
Earth     6.0 × 10²⁴    6378          9.84                      49.19
Mercury   3.30 × 10²³   2439          3.70                      18.50
Venus     4.87 × 10²⁴   6051          8.88                      44.40
Mars      6.42 × 10²³   3393          3.62                      18.10
Jupiter   1.90 × 10²⁷   71492         24.81                     124.05
Saturn    5.69 × 10²⁶   60268         10.45                     52.25
Uranus    8.68 × 10²⁵   25559         8.87                      44.35
Neptune   1.02 × 10²⁶   24764         11.10                     55.50
Pluto     1.29 × 10²²   1150          0.65                      3.25
Moon      7.35 × 10²²   1738          1.62                      8.10
Sun       1.99 × 10³⁰   696000        274.13                    1370.65
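Every row of Table 1.1 follows from g = GM/r² together with F = mg. A minimal Python sketch that reproduces the Earth and Jupiter rows (masses and radii taken from the table, radii converted to metres):

```python
G = 6.67e-11  # universal gravitational constant, N m^2 kg^-2

def surface_gravity(mass_kg, radius_m):
    """g = G*M/r^2 at the surface of a body, in m s^-2."""
    return G * mass_kg / radius_m**2

# Mass (kg) and radius (m) from Table 1.1 (radii converted from km).
bodies = {"Earth": (6.0e24, 6.378e6), "Jupiter": (1.90e27, 7.1492e7)}

for name, (m, r) in bodies.items():
    g = surface_gravity(m, r)
    print(f"{name}: g = {g:.2f} m s^-2, a 5.0 kg object weighs {5.0 * g:.2f} N")
```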

Variation in the Earth's gravitational acceleration

Earth's gravitational acceleration g has so far been treated as a constant, with a value of 9.8 m s⁻². However, the value for g varies depending on a number of factors:
■ altitude
■ local crust density
■ the rotation of the Earth
■ the shape of the Earth (which is not a perfect sphere)


Altitude

Example

Find the values of g when measured at:
(a) the top of a building with a height of 100 m
(b) the summit of Mount Everest, with a height of 8848 m
(c) the altitude of a low Earth orbit satellite, which is 310 km

g = G M/d²

Known quantities: G = 6.67 × 10⁻¹¹, M = 6.0 × 10²⁴ kg

(a) d = 6378000 m + 100 m = 6378100 m
g = (6.67 × 10⁻¹¹)(6.0 × 10²⁴) / (6378100)² = 9.84 m s⁻² towards the centre of the Earth

(b) d = 6378000 m + 8848 m = 6386848 m
g = (6.67 × 10⁻¹¹)(6.0 × 10²⁴) / (6386848)² = 9.81 m s⁻² towards the centre of the Earth

(c) d = 6378000 m + 310000 m = 6688000 m
g = (6.67 × 10⁻¹¹)(6.0 × 10²⁴) / (6688000)² = 8.95 m s⁻² towards the centre of the Earth

These examples show that the value of g changes with the altitude at which it is measured. The further away you are from the centre of the Earth, the smaller g is. However, the change only becomes significant at very high altitudes. Even at the summit of Mount Everest, g only decreases by about 0.3%, which would not be noticed by people climbing the summit.

Saturn and Jupiter have radii much larger than Earth’s
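The altitude calculations above follow a single pattern, d = Earth's radius + h. A minimal Python sketch, assuming the chapter's values for G and the Earth's mass and radius:

```python
G = 6.67e-11        # N m^2 kg^-2
M_EARTH = 6.0e24    # kg
R_EARTH = 6.378e6   # m

def g_at_altitude(h_m):
    """g = G*M/(R + h)^2 at height h above the Earth's surface, in m s^-2."""
    return G * M_EARTH / (R_EARTH + h_m)**2

for h in (100, 8848, 310e3):   # building, Everest, low Earth orbit
    print(f"h = {h:>8.0f} m: g = {g_at_altitude(h):.2f} m s^-2")
```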

Local crust density

The Earth's crust does not have uniform density. Some areas have a greater density (for example, where there are dense mineral deposits); by volume these areas will have a greater mass. Since the gravitational field strength is affected by mass, areas with greater density tend to have slightly bigger g values. (Such variations are used in the mining industry to detect the location of mineral deposits or, if the value of g is lower than normal, to detect the presence of natural gas or oil reserves.)

The rotation of the Earth

As the Earth rotates once every 24 hours, any place on the equator is moving towards the east at about 1670 km h⁻¹ relative to the North or South Pole. Just like on an imaginary fun ride, the effect is to try to 'fling' objects off the surface of the Earth. Such an effect is not normally noticed. (The Earth would need to rotate once every 20 seconds in order to actually fling an object off its surface at the equator!) However, this does reduce the g value slightly.

The shape of the Earth

Due to its rotation, the Earth bulges slightly at the equator, resulting in the overall shape being slightly flattened at the poles. (This shape is known as an ellipsoid.) A person standing on the North or South Pole (at sea level) is closer to Earth's centre than a person standing on the equator by about 20 km. This explains why at the equator the g value is slightly smaller than the value at the poles.

1.4 Gravitational potential energy

■ Explain that a change in gravitational potential energy is related to work done
■ Define gravitational potential energy as the work done to move an object from a very large distance away to a point in a gravitational field:
Ep = –G m1m2/r

Gravitational potential energy, Ep, is the energy stored in a body due to its position in a gravitational field. This energy can be released (and converted into kinetic energy) when the body is allowed to fall.

The work done, W, on an object when a force acting on the object causes it to move is given by W = F × s, where F is the force acting and s is the distance the object is moved while the force is acting, as shown in Figure 1.3(a).

A waterfall demonstrates the concept of gravitational potential energy

Figure 1.3  Work: (a) the box moves to the right a distance d, so the work done on the object is equal to F × d; (b) the box does not move, so the distance d is zero and no work has been done on the box


A similar idea is used when an object is lifted to a height above the ground. Because the force is now working against gravity, the force required to lift the object must be equal to (strictly speaking, just greater than) the weight force of the object (mg). The work done is stored in the object if it remains at that position. Hence the work done on the object is equivalent to its gain in gravitational potential energy. Lifting this book (mass approximately 1 kg) to a height of 1 metre above the desk requires about 9.8 joules (J) of work to be done. This work is now stored as gravitational potential energy and is released if the book is dropped and allowed to fall back to the desk, being transformed into kinetic energy and then sound.

When an object is raised to a height h above the ground, its gravitational potential energy becomes:

Ep = W = F × s
F = mg (as the force needed must be at least equal to the weight force of the object)
s = h (as the height is the distance moved)
∴ Ep = mgh

Where:
Ep = gravitational potential energy, measured in J
m = mass of the object, measured in kg
g = gravitational acceleration, measured in m s⁻²
h = height above the ground, measured in m

Problem! This equation is quite accurate when the object is near the surface of the Earth. However, it assumes that the value of g is a constant and does not change with altitude (it does change), and it cannot be used to determine an absolute value for the amount of gravitational potential energy possessed by an object. It implies that any arbitrary place can be used as the reference position where the object has no gravitational potential energy (e.g. the floor, desktop, the Earth's surface). A universal definition of gravitational potential energy is required.

Consider an object of mass m at a distance r from the centre of a planet (Earth) of mass M, as shown in Figure 1.4. Starting from Ep = mgh and measuring the height from the Earth's centre, r = h + Earth's radius, since g = G M/r² is measured with reference to the Earth's centre:

Ep = mgh = m × (G M/r²) × r

∴ Ep = G mM/r

Figure 1.4  Earth and a distant object

Note: In this case, Ep is measured from the centre of the Earth. This is because the distance r is taken from the centre of the Earth.

Taking into account the sign convention discussed below, the gravitational potential energy is:

Ep = –G mM/r

Where:
Ep = gravitational potential energy, measured in J
m = mass of the object, measured in kg
M = mass of the planet, measured in kg
G = universal gravitational constant, which is equal to 6.67 × 10⁻¹¹ N m² kg⁻²
r = the distance from the centre of the planet to the point at which Ep is measured, in m

Note: If Ep is measured at the surface of the planet, then r is the radius of the planet.

Example 1

Calculate the gravitational potential energy of a satellite with mass 110 kg, at an altitude of 320 km above the Earth.

Ep = –G mM/r

G = 6.67 × 10⁻¹¹
m = 110 kg
M = 6.0 × 10²⁴ kg
r = 6378000 m + 320000 m = 6698000 m

Ep = –(6.67 × 10⁻¹¹ × 110 × 6.0 × 10²⁴) / 6698000 = –6.57 × 10⁹ J, or –6572 MJ
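A minimal Python check of this example, assuming the chapter's values for G and the Earth's mass and radius:

```python
G = 6.67e-11        # N m^2 kg^-2
M_EARTH = 6.0e24    # kg
R_EARTH = 6.378e6   # m

def grav_potential_energy(m, r):
    """Ep = -G*m*M/r, with r measured from the Earth's centre (joules)."""
    return -G * m * M_EARTH / r

# 110 kg satellite at 320 km altitude
print(grav_potential_energy(110, R_EARTH + 320e3))  # ~ -6.57e9 J
```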

Why is the gravitational potential energy negative? At a position very far away from Earth, an object would experience negligible gravitational attraction. The place at which gravity becomes zero is in fact an infinite distance away. By definition, any object at such a distance is said to have zero gravitational potential energy. If an object was then given a small ‘nudge’ or a push towards Earth, it would begin to fall towards Earth, losing gravitational potential energy as it gains kinetic energy. The more gravitational potential energy the object loses, the more negative the value of Ep (subtracting from zero results in a negative value).

11

space

Conversely, to reach this infinite distance from within the gravitational field, positive work has to be done on the object; that is, effort is required to push the object upwards. So if positive work is added to the object and the object ends up with zero gravitational potential energy at an infinite distance from the Earth, then the object anywhere below that point must have a negative potential energy. Analogy: if you keep adding positive numbers to an unknown number and end up with zero, then you can be quite certain that you started off with a negative number.

Change in gravitational potential energy (ΔEp)

Although the gravitational potential energy is negative, the change in gravitational potential energy can be positive. The change in gravitational potential energy is equal to the second potential energy minus the first potential energy, or more correctly, the less negative potential energy minus the more negative potential energy. For two positions in the gravitational field at distances r1 and r2 from the centre of the Earth respectively, where r1 is greater than r2:

ΔEp = –G mM/r1 – (–G mM/r2)
     = G mM/r2 – G mM/r1

∴ ΔEp = GmM (1/r2 – 1/r1)

Note: Since r1 is greater than r2, 1/r1 is less than 1/r2, therefore 1/r2 – 1/r1 is positive. Consequently, ΔEp is positive.

Example

Calculate the change in gravitational potential energy for a scientific instrument with a mass of 150 kg when it is moved from ground level to the top of Mount Everest, where the height is 8848 m.

ΔEp = Ep at the top of Mount Everest – Ep at the surface of the Earth

ΔEp = –(6.67 × 10⁻¹¹)(6.0 × 10²⁴)(150) / (6378000 + 8848) – [–(6.67 × 10⁻¹¹)(6.0 × 10²⁴)(150) / 6378000]

ΔEp = (6.67 × 10⁻¹¹)(6.0 × 10²⁴)(150) × [1/6378000 – 1/(6378000 + 8848)]

ΔEp = 1.30 × 10⁷ J, or 13 MJ
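A minimal Python sketch of ΔEp = GmM(1/r2 − 1/r1), applied to the Mount Everest example above:

```python
G = 6.67e-11      # N m^2 kg^-2
M_EARTH = 6.0e24  # kg
R_EARTH = 6378e3  # m

def delta_ep(m, r1, r2):
    """Change in Ep moving from r2 up to r1 (r1 > r2): GmM(1/r2 - 1/r1)."""
    return G * m * M_EARTH * (1 / r2 - 1 / r1)

# 150 kg instrument, ground level -> summit of Everest (8848 m)
print(delta_ep(150, R_EARTH + 8848, R_EARTH))  # ~1.3e7 J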


first-hand investigation
Simple pendulum motion

■ Perform an investigation and gather information to determine a value for acceleration due to gravity using pendulum motion or computer-assisted technology and identify reasons for possible variations from the value 9.8 m s⁻²

physics skills: H11.1b, d, e; H11.2a, b, c, e; H11.3a, b, c; H12.1a, d; H12.2a, b; H12.3a, c; H12.4a, b, d, e, f; H13.1d, g; H14.1a; H14.2d; H14.3c, d
PFAs: H2

It is a syllabus requirement that students perform a first-hand investigation and gather information to determine a value of acceleration due to gravity using pendulum motion. This experiment is discussed extensively in order to provide an example of how students should approach experiments and how experimental data should be processed.

Aim
To perform a first-hand investigation using simple pendulum motion to determine a value of the acceleration due to the Earth's gravity (g).

Equipment/apparatus
Retort stand, boss head and clamp, string and mass bob; stopwatch, ruler.

Theory
The period of a pendulum (T) is related to the length of the string of the pendulum (ℓ) by the equation:

T = 2π √(ℓ/g)

Procedure
1. Set up the apparatus as shown in Figure 1.5: the string hangs from a boss head and clamp on a retort stand, with the mass bob free to swing about the vertical axis. (Note: include a diagram in the procedure when applicable.)
2. Measure the effective length of the pendulum from the top of the string to the centre of the mass bob. The length should be approximately 1 m.
3. Move the mass so that the string makes an angle of about 5° with the vertical. Release the bob. Use a stopwatch to record the time for 10 complete oscillations.
4. Note: If possible, data logging apparatus (with position or velocity sensors) can be used to more accurately find the time taken for the oscillations of the pendulum. The resulting graph of the motion of the pendulum should also show the nature of the motion (simple harmonic), although this is beyond the scope of the present syllabus.
5. Change the length of the string to 0.8 m, and then repeat step 3.
6. Repeat step 3, changing the length of the string to 0.6 m and then to 0.4 m.
7. Use appropriate formulae to find the period of the pendulum and the value of g (see below).

Figure 1.5  Simple pendulum in motion (retort stand, rod, boss head and clamp, string and mass bob; the bob swings in the direction shown about the vertical axis)

Results
Record the data in the table below.

Table 1.2
Length of the string (ℓ) (m)   Time for 10 oscillations (s)   Period (T) (s)
1.00
0.80
0.60
0.40

Note: Divide the time by 10 to calculate the period of the swings, where the period is the time needed by the pendulum for one complete swing.

Calculating g
Two methods can be employed to calculate the value for g.

Method 1: a simpler method

T = 2π √(ℓ/g)  ⇒  T² = (2π)² ℓ/g

g = 4π²ℓ / T²

Substitute each period and length into the equation, and calculate g. Then take the average of the four g values found.

Method 2: linear transformation

Use a graph to plot the relationship between ℓ and T², as shown in Figure 1.6(b). Because ℓ is the independent variable and T is the dependent variable, we usually plot ℓ on the x-axis and T² on the y-axis.

T = 2π √(ℓ/g)
T² = (2π)² × ℓ/g
T² = (4π²/g) ℓ

The line in Figure 1.6(b) (opposite) is obtained by drawing the line of best fit through the points. It can be seen that the gradient of the line is 4π²/g. Select any two points on the line to calculate the gradient using rise over run; by equating this gradient with 4π²/g, the value of g can be calculated.


Figure 1.6  (a) If we plot T versus ℓ, the result is a curve. (b) But if we plot T² versus ℓ, the result is a straight line with gradient 4π²/g
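Both methods can be carried out numerically. A minimal Python sketch: the timing data below are hypothetical sample results, not measurements from the text, and the Method 2 gradient is found by an ordinary least-squares fit rather than by hand:

```python
import math

# Hypothetical sample data (not measurements from the text):
# string length (m) and time for 10 complete oscillations (s).
lengths = [1.00, 0.80, 0.60, 0.40]
t10 = [20.1, 18.0, 15.6, 12.7]

periods = [t / 10 for t in t10]   # period T = time for one swing
T2 = [T**2 for T in periods]

# Method 1: g = 4*pi^2*l/T^2 for each trial, then average.
g_values = [4 * math.pi**2 * l / t2 for l, t2 in zip(lengths, T2)]
print("Method 1: g =", sum(g_values) / len(g_values))

# Method 2: gradient of T^2 versus l is 4*pi^2/g (least-squares fit).
n = len(lengths)
mean_l = sum(lengths) / n
mean_T2 = sum(T2) / n
gradient = (sum((l - mean_l) * (t2 - mean_T2) for l, t2 in zip(lengths, T2))
            / sum((l - mean_l)**2 for l in lengths))
print("Method 2: g =", 4 * math.pi**2 / gradient)
```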

Discussion
Some possible discussion points include:
1. During the experiment, if the angle of the swing exceeded about 10°, the small-angle approximation behind T = 2π√(ℓ/g) would break down (and the motion could degenerate into a conical pendulum), which would not be desirable for the calculation of g.
2. The time for 10 oscillations was measured because if the number of oscillations was too small, timing would become very difficult: the human reaction time would be quite large compared to the swing time, resulting in a significant amount of error in timing. If too many oscillations were timed, then the pendulum would not be able to maintain its 'constant swing' due to air resistance, which would also make the experiment less accurate.
3. The main sources of experimental errors could include:

(a) Human errors in measurements and timing. In particular, reaction time could be the predominant error in timing. (b) Air friction acting on the mass bob while it was swinging. 4. Thus in order to improve the accuracy of the experiment:

(a) i. Use more accurate devices for measurements. For example, when recording time, use stopwatches rather than normal watches. Computer data logging can further improve the accuracy.
ii. Be familiar with the procedures in order to reduce the reaction times.
(b) Reduce air currents by closing windows, shielding the pendulum from the surrounding air and using a more streamlined mass bob. In fact, a more advanced pendulum experiment is actually done inside a vacuum chamber.
(c) In general, doing the experiment in a team and repeating the experiment many times will improve the accuracy of the experiment.
5. The possible reasons for the variation of g values are discussed earlier in this chapter (as

required by the syllabus). These variations are too small to be detected by the method outlined above. 6. Assess the validity of this experiment by commenting on the way in which any other

variables were accounted for.


Conclusion
State the value of g found in this pendulum experiment.

Note: In general, first-hand investigations (experiments) are just as important as

theories. A significant amount of time should be devoted to experiments. This includes relating theories to the experiments, being familiar with procedures, calculations and outcomes of the experiments, and evaluating the accuracy and reliability of the experiments. Issues raised in the first-hand investigation sections of this text should be considered.

chapter revision questions

For all the questions in this chapter, take the mass of the Earth to be 6.0 × 10²⁴ kg, and the radius of the Earth to be 6378 km. The universal gravitational constant G = 6.67 × 10⁻¹¹ N m² kg⁻². The mass of the Moon is 7.35 × 10²² kg and the radius of the Moon is 1738 km.

1. Define the terms 'mass' and 'weight'.
2. Calculate the force of attraction between two neutrons separated by a distance of 1.00 × 10⁻¹³ m, knowing the mass of a neutron is 1.675 × 10⁻²⁷ kg.
3. Calculate the force exerted on a 150.0 kg satellite by the Earth, if the satellite orbits

2300 km above the Earth's surface.
4. Suppose there is a force of attraction F acting between objects A and B. If the mass of A is reduced to a quarter, the mass of B is halved, and the distance between them is doubled, what is the new force acting between the two objects?
5. When humans first landed on the surface of the Moon, they had to walk in a jumping

fashion. Use relevant calculations to justify this observation.   6. If an unknown planet has a mass that is 10 times heavier than the Earth, and its radius

is four times larger than that of the Earth, how would the gravitational acceleration at the surface of this planet compare to that on the Earth?   7. The ratio of the gravitational acceleration on planet Xero to that on the Earth is 3:1; what

is the ratio of the weight of an object on this planet to its weight on the Earth?   8. Careful measurements show that the gravitational acceleration (g) is slightly smaller

at the equator than that at the poles. Briefly describe two possible reasons that can account for this phenomenon.   9. How much work is done on a 10 g pen when it is picked up from the ground to a table

1.2 m high by a student?
10. (a) A block of wood, with a mass of 390 g, is moved along a slope by a constant force of 7.0 N as shown in the diagram. The slope is inclined at 8.0° to the ground. Ignoring friction, calculate the work done on the wood if it is moved 8.0 m along the slope.
(Diagram: the block of wood is pushed 8.0 m up a slope inclined at 8.0° to the ground.)

(b) What is the change in potential energy experienced by the block? 11. Calculate the change in gravitational potential energy when a 68.00 kg person is moved

from the Earth’s surface to the summit of Mount Everest, which is 8848 m in height. 12. Determine the work done on an object, with a mass of 198 g, when it is pushed from a

height of 200.0 km above the ground to 3500 km above the same point. 13. Suppose an airbus, with a mass of 15 tonnes, is flying at 250 m s−1, 1.02 × 104 m

above the ground level. Determine the total mechanical energy of the plane. Note that mechanical energy includes kinetic energy and gravitational potential energy.


14. Explain why an object within Earth’s gravitational field has a negative gravitational

potential energy. 15. Plot the change in the gravitational potential energy when a 1.0 × 104 kg rocket is

launched from the surface of the Earth to a point which is very far away from the Earth.


CHAPTER 2
Space exploration

Many factors have to be taken into account to achieve a successful rocket launch, maintain a stable orbit and return to Earth

Introduction

In Chapter 2, projectile motion and circular motion are studied. These concepts will provide the basis for understanding the launching, orbiting and safe return of satellites, space probes and spacecraft.

2.1 Projectile motion

■ Describe the trajectory of an object undergoing projectile motion within the Earth's gravitational field in terms of horizontal and vertical components

You will often see projectile motion in everyday life; for example, the motion described by a thrown tennis ball, or by a football kicked over a playing field.

Definition: A projectile motion is a motion that is under the influence of only one force: the weight force.

Time-lapse photography shows that the trajectory of (the path described by) a projectile is a concave downward-facing parabola, as shown in Figure 2.1.

Figure 2.1  The trajectory of a projectile: the positions of the projectile at different times trace out a parabolic pathway (displacement in metres on both axes)

Although projectile motion may seem complicated at first glance, there are only two simple rules that govern it:
1. The horizontal motion and vertical motion are independent. They can be analysed and calculated separately.
2. The horizontal velocity is always constant (neglecting air friction). The vertical motion has a constant acceleration (downward at 9.8 m s⁻² at the surface of the Earth), since gravity is the only force acting on the object.
However, before you can perform calculations for projectile motion, you need to understand the concept that a projectile motion can be considered to be composed of a horizontal and a vertical motion.

Resolving a vector

Consider an object that is fired at ground level at an angle of θ to the ground with velocity u. This velocity vector can be resolved into its horizontal and vertical components, as shown in Figures 2.2 and 2.3.

Figure 2.2  Resolving the vector: the horizontal component ux
Figure 2.3  Resolving the vector: the vertical component uy

Using trigonometry:
The horizontal velocity of the projection: ux = u cos θ
The vertical velocity of the projection: uy = u sin θ


Problem solving for projectile motion

physics skills: H14.1g, h; H14.2a, b, d; H14.3b, d

■ Solve problems and analyse information to calculate the actual velocity of a projectile from its horizontal and vertical components using:
vx² = ux²
v = u + at
vy² = uy² + 2ay y
x = ux t
y = uy t + ½ ay t²

Five equations are used to solve projectile motion problems. These formulae, along with a brief comment on each, are summarised in Table 2.1.

Table 2.1

Horizontal component of the projectile motion:
1.  vx² = ux²
2.  x = ux t
Variables (units): x: displacement (m); ux: initial velocity (m s⁻¹); t: time (s); vx: final velocity (m s⁻¹)
Comment: since the horizontal component of velocity is constant, the final horizontal speed vx is equal to ux, the initial horizontal component of velocity.

Vertical component of the projectile motion (the acceleration is constant):
3.  v = u + at
4.  vy² = uy² + 2ay y
5.  y = uy t + ½ ay t²
Variables (units): v: final velocity (m s⁻¹); u: initial velocity (m s⁻¹); a: acceleration (m s⁻²); t: time (s); y: displacement (m)
Comment: it is important to assign the correct sign to the acceleration. For example, if the upwards direction is given a positive sign, then ay is negative. Equation 4 can be derived by making t the subject in equation 3 and then substituting for t in equation 5.

It is standard practice to select positive horizontal and vertical directions as being to the right and upwards respectively. However, in examples where all vertical quantities are downwards, choosing downwards as the positive direction avoids negative signs in the calculation. This should be clearly shown in the working.

Maximum height, time of flight and range for a standard projectile

After finding the horizontal and vertical components of the initial velocity, the known quantities must be substituted into the correctly selected equation, in order to find information such as maximum height, time of flight and range of the projectile. Whenever a projectile motion question is presented, it is important to consider what is known and what is required. Given a projectile that is fired at ground level at an angle of θ to the horizontal with initial speed u, as shown in Figure 2.2, the following analyses can be undertaken.


Maximum height

Example

A golf ball is struck at 50 m s⁻¹ and leaves the ground at an angle of 30°. What is the maximum height it will reach? It is clear that maximum height relates to the vertical motion of the projectile.

Solution

uy = u sin θ = 50 × 0.5 = 25 m s⁻¹ (up)
ay = –9.8 m s⁻² (down)
vy = 0 (at the maximum height, the projectile is neither moving up nor down for an instant)
y = ? (maximum height is the vertical displacement of the projectile)

At this point, the suitable equation must be selected:
vy² = uy² + 2ay y
0 = 25² + 2 × (–9.8)y
19.6y = 625
y = 32 m

Time of flight

Over level ground, the time of flight will be twice the time required to reach the maximum height. This is because, if air resistance is neglected, the time for a projectile to reach its maximum height and the time for its return to earth will be the same. The motion is symmetrical: the upwards motion mirrors the downwards motion, and a vertical line passing through the position where the projectile reaches the maximum height is the axis of the parabolic path.

Using the above example again: time of flight = 2 × time to reach the maximum height.

To find the time taken to reach the maximum height:
uy = 25 m s⁻¹
ay = –9.8 m s⁻²
vy = 0
t = ?

The most appropriate equation is the only one that includes all four quantities:
vy = uy + ay t
0 = 25 – 9.8t
9.8t = 25
t = 2.55 s

Therefore the time of flight = 2 × 2.55 s = 5.1 s


Range (the horizontal displacement)

Range is related to the horizontal motion of the projectile:
range = initial horizontal velocity × time of flight
Using the equation: x = ux t = u cos θ × t = 50 cos 30° × 5.1 = 43.3 × 5.1 ≈ 221 m
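The three quantities just calculated (maximum height, time of flight and range) can be bundled into one routine. A minimal Python sketch for a launch from level ground, checked against the golf-ball example:

```python
import math

def projectile_stats(u, theta_deg, g=9.8):
    """Max height, time of flight and range for a launch from level ground."""
    theta = math.radians(theta_deg)
    ux, uy = u * math.cos(theta), u * math.sin(theta)
    t_flight = 2 * uy / g       # time up equals time down
    h_max = uy**2 / (2 * g)     # from vy^2 = uy^2 - 2*g*y with vy = 0
    return h_max, t_flight, ux * t_flight

# Golf ball struck at 50 m/s at 30 degrees (the worked example above)
print(projectile_stats(50, 30))  # ~ (31.9 m, 5.1 s, 221 m)
```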

Further examples

Example 1

A cricketer strikes a cricket ball and the ball flies off with a velocity of 14 m s⁻¹ at 63° to the ground, to the north of the player. Where should the fielder be in order to catch the ball on its return to the ground?

Solution

This question asks for the range of the projectile.
R = horizontal velocity × time of flight (tf)
Initial horizontal velocity = 14 × cos 63° = 6.4 m s⁻¹
Initial vertical velocity = 14 × sin 63° = 12.5 m s⁻¹
tf = 2 × time to reach maximum height
Time to reach maximum height: t = (vy – uy)/ay (since vy = uy + ay t)
t = (0 – 12.5)/(–9.8) = 1.28 s
∴ tf = 1.28 × 2 = 2.55 s
∴ R = 6.4 × 2.55 = 16.3 m
∴ The fielder has to be 16.3 m north of the batter in order to catch the ball.

Example 2

In an indoor soccer match, whenever the ball touches the ceiling of the court following a goal kick from the goalkeeper, it is considered a foul. If the ceiling of the court is 10 m high, and the keeper always kicks the ball at a speed of 20 m s⁻¹:
(a) What is the maximum angle at which the keeper can kick the ball so that a foul is not committed?
(b) Find the velocity of the ball 2.0 s after being kicked by the goalkeeper, assuming it is kicked at 20 m s⁻¹ at the angle found in part (a) above.


Solution

(a) To avoid committing a foul, the maximum height achieved by the ball should be less than the height of the ceiling.

vy² = uy² + 2ay y
vy = 0, ay = –9.8 m s⁻², y = 10 m (more strictly, y < 10 m)
uy² = –2 × (–9.8) × 10
uy = √(2 × 9.8 × 10)
uy = 14 m s⁻¹

Since uy = u sin θ and u = 20:
sin θ = uy/u
∴ θ < sin⁻¹(uy/u) = sin⁻¹(14/20)
θ < 44°
∴ The ball must be kicked at an angle of less than 44°.

(b) After two seconds:
Horizontal velocity is constant: ux = 20 × cos 44° ≈ 14.4 m s⁻¹
Vertical velocity: vy = uy + ay t = 14 + (–9.8) × 2.0 = –5.6 m s⁻¹, or 5.6 m s⁻¹ down
The overall velocity is the vector sum of these two components:
v = √(14.4² + 5.6²) ≈ 15.4 m s⁻¹
θ = tan⁻¹(5.6/14.4) ≈ 21°
∴ After 2.0 s, the velocity is about 15.4 m s⁻¹ at a depression angle of 21°.
Note: The technique of vector addition has been used here.

Example 3

An F-18 jet plane is given an order to bomb a warehouse. The plane is flying at a speed of 350 m s⁻¹, 3.50 km above the ground. Bombs will be released from the plane. If the air resistance is negligible, how far before the warehouse must the pilot release the bombs in order to hit the target?

Figure 2.4  Another example of projectile motion: the plane flies at 350 m s⁻¹ at an altitude of 3.5 km and must release the bombs ahead of the warehouse

Solution

Obviously, the bombs have to be dropped prior to reaching the warehouse in order to hit it.
The time taken for the bombs to land:

y = uy t + ½ ay t²
y = 3.50 km = 3500 m down
uy = 0
ay = 9.8 m s⁻² down
Note: ay is positive in this case because the downward direction is taken as positive.

∴ 3500 = ½ × 9.8 t²
t = 26.7 s

Horizontal distance travelled by the bombs within this time:
x = ux t = 350 m s⁻¹ × 26.7 s = 9354 m

∴ The bombs have to be dropped 9354 m ahead of the warehouse in order to hit it.
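A minimal Python sketch of this release-point calculation, assuming level flight and negligible air resistance as in the example:

```python
import math

def drop_ahead_distance(speed, altitude, g=9.8):
    """Horizontal release distance for a bomb dropped from level flight.

    Fall time comes from y = (1/2)*g*t^2; the bomb keeps the plane's
    horizontal speed, so x = speed * t.
    """
    t = math.sqrt(2 * altitude / g)
    return speed * t

# F-18 at 350 m/s, 3.50 km altitude (Example 3)
print(drop_ahead_distance(350, 3500))  # ~9354 m
```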


2.2 Galileo's analysis of projectile motion

■ Describe Galileo's analysis of projectile motion

PFA H1: 'Evaluates how major advances in scientific understanding and technology have changed the direction or nature of scientific thinking'

Why was this discovery a major advance in scientific understanding?

Galileo (1564–1642) was a truly remarkable scientist. Much of the accepted 'knowledge' of his time had not changed significantly since the days of Aristotle, some 2000 years previously. As well as studying astronomy, for which he is most remembered, Galileo furthered the study of the motion of falling objects, including projectiles, by taking meticulous measurements of the time taken for objects to roll down inclined planes. Galileo had found that the speed of objects falling directly was simply too fast for accurate measurements to be made. By using an inclined plane, this motion could be slowed down sufficiently for it to be timed using an ingenious device that relied on weighing the water that could flow into a container. (Accurate clocks or watches were still to be invented, in the early 1600s!) Galileo repeated his measurements hundreds of times to ensure that any errors were minimised. He came to the conclusion that balls rolling down inclined planes were accelerating. He was even able to put this mathematically in the equation s = ½at², that is, the distance travelled by the ball down the slope was proportional to the square of the time spent rolling.

How did it change the direction or nature of scientific thinking?

The Aristotelian view of projectile motion was that an object, once set in motion, retained its motion due to its impetus, a force from within the moving body. It was believed that an arrow, once fired, travelled in straight lines, and did so due to the continuing force on it from within. Galileo came to the conclusion that objects would retain their motion unless a force acted on them, due to their inertia. By analysing the motion of balls rolling down inclined planes after the balls were set in motion with a horizontal component, Galileo concluded that the motion of any projectile (arrows, balls, etc.) was the result of two separate types of motion: a uniform horizontal component that had no acceleration, and at the same time a vertical component that was being accelerated. The resulting path traced by the projectile is a curve, not a straight line. The type of curve had been named by earlier Greek mathematicians: the parabola (Galileo described the path as a 'semi-parabola').

(Diagram: the velocity V of a projectile at points along the parabola, resolved into a constant horizontal component Vx and a changing vertical component Vy)

Galileo

A sketch representing the Aristotelian view of how a cannon ball would be expected to move once set in motion due to its impetus

Evaluation of Galileo's analysis of projectile motion

Much of Galileo's work on motion and forces was formalised later by Newton (born in the year Galileo died), who proclaimed that he had seen further only because he had 'stood upon the shoulders of giants', in a reference to Galileo. Newton's work was to form the basis upon which classical physics developed for the next 300 years.

Useful websites

Some details of Galileo's study of projectile motion on inclined planes:
http://www.galileo.rice.edu/lib/student_work/experiment95/paraintr.html
Some history of projectile motion:
http://library.thinkquest.org/2779/History.html
To duplicate Galileo's experiment on an inclined plane:
http://www.anselm.edu/homepage/dbanach/h-galileo-projectdata.htm

■ Describe Galileo's analysis of projectile motion

Careful study of projectile motion did not begin recently; in fact, it began as early as the 17th century. Galileo was the first person to deduce, through his experiments, that the trajectory of projectiles is parabolic. The experiments involved projecting an inked ball from tabletops at various heights using an inclined plane placed at the edge of the table. He realised that the horizontal motion of the projectile was totally independent of its vertical motion. More importantly, he realised the importance of mathematics in analysing the motion of projectiles. In his experiments, Galileo did pioneering work in analysing projectile motion mathematically, similar to our approach in the previous section.

Galileo's work also provided him with strong evidence to support the heliocentric (Sun-centred) model of the Universe. He was able to offer a satisfactory explanation for why an object dropped from a height, for example from a building on the moving Earth, did not get left behind. Galileo pointed out that the reason the object did not fall away from the building was that both the building and the object shared the same horizontal velocity. The object when falling would have an extra vertical velocity that was independent of its horizontal velocity. Consequently, the object would fall down and at the same time travel with the tower, so relative to the tower, or to anyone on the Earth who was moving with the Earth, the ball fell straight down to the base of the tower. In fact, this demonstrated the fundamental concept of relativity: motion is seen differently from different frames of reference (the background against which the motions are compared).

2.3 Circular motion

■ Analyse the forces involved in uniform circular motion for a range of objects, including satellites orbiting the Earth

Circular motion can be found in systems as small as atoms, where electrons are orbiting nuclei. It can also be found in systems as big as the solar system, where planets are orbiting the Sun.

Note: In fact, the orbits of the planets are not perfect circles, but ellipses.


For the purposes of this course, only uniform circular motion is covered: that is, circular motion with constant linear orbital speed, as explained in Figure 2.5. Although the orbital speed of an object undergoing uniform circular motion is constant, its direction is always changing. Consequently its orbital velocity is always changing since velocity includes both the speed and direction. Hence: Uniform circular motion has constant orbital speed but a changing orbital velocity.

Centripetal acceleration

Any change in velocity involves acceleration. Acceleration in circular motion is referred to as centripetal acceleration (ac). This acceleration is related to the orbital speed of the (uniform) circular motion and its direction is always towards the centre of the circle, as shown in Figure 2.5. The magnitude of the acceleration can be expressed as:

ac = v²/r

A ferris wheel in circular motion

Where: ac = the centripetal acceleration, measured in m s⁻²
v = the linear orbital velocity or speed of the object in motion, measured in m s⁻¹
r = the radius of the circle, measured in m

Centripetal force

Any acceleration is the result of a net force; the force and the acceleration are always in the same direction. Thus the force that provides centripetal acceleration and sustains circular motion is referred to as the centripetal force (Fc), and its direction is also always towards the centre of the circle, as shown in Figure 2.5. For its magnitude:

Since F = ma
Fc = mac
Fc = mv²/r

Where: Fc = the centripetal force, measured in N
m = the mass of the object in motion, measured in kg
v = the linear orbital velocity or speed of the object in motion, measured in m s⁻¹
r = the radius of the circle, measured in m


Figure 2.5  An object undergoing uniform circular motion. Both the centripetal force and the centripetal acceleration are directed towards the centre of the circle; the velocities* shown are tangential to the circle.

* These velocities are referred to as linear orbital velocities, and are tangential to the circle at any instant. They have different directions but the same magnitude, thus the linear orbital speed is constant. We can define the linear orbital speed as how fast an object moves along the arc of a circle of a given size.


Centripetal force is essential to maintain circular motion. If it is insufficient (or has been removed) then circular motion cannot be sustained. For instance, when a car is making a turn, it is describing an arc of a circle; the centripetal force is provided by the friction between the tyres and the road. If the road is covered by ice, the friction between the tyres and the road is greatly reduced. Consequently, the centripetal force may not be sufficient to sustain the circular motion, and the car then skids off the road, continuing along its original linear velocity.

Problem solving in circular motion

■■ Solve problems and analyse information to calculate the centripetal force acting on a satellite undergoing uniform circular motion about the Earth using: F = mv²/r

Example 1

What is the centripetal force required to keep a particle with a mass of 2.0 × 10⁻⁸ kg moving at 2.0 × 10⁵ m s⁻¹ in a circular path with a radius of 50 cm?

Solution

Fc = mv²/r

m = 2.0 × 10⁻⁸ kg
v = 2.0 × 10⁵ m s⁻¹
r = 50 cm = 0.50 m

Fc = 2.0 × 10⁻⁸ × (2.0 × 10⁵)² ÷ 0.50

= 1.6 × 10³ N towards the centre of the circle
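As a quick check of this arithmetic, here is a minimal Python sketch using the values from the example above:

def centripetal_force(m, v, r):
    # Fc = m v^2 / r, with m in kg, v in m/s, r in m; returns newtons
    return m * v**2 / r

print(centripetal_force(2.0e-8, 2.0e5, 0.50))   # 1600.0 N, i.e. 1.6 x 10^3 N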


Example 2

A car (mass m) turning a corner describes an arc of a circle with a radius of r metres. The friction of the road is just enough for the car to turn at v km h⁻¹. Suddenly the car encounters an oil spill on the road that reduces the surface friction to half. What is the maximum speed at which the car can now turn without skidding?

Solution

Fc = mv²/r

The orbital speed needs to be in m s⁻¹, and v km h⁻¹ = v/3.6 m s⁻¹

∴ Fc = m(v/3.6)²/r

Fc is reduced by half; let the new maximum speed be v₂ km h⁻¹:

(1/2) × m(v/3.6)²/r = m(v₂/3.6)²/r

(v₂/3.6)² = (1/2) × (v/3.6)²

Square root both sides:

v₂ = v/√2

∴ The new maximum speed is v/√2 km h⁻¹, about 71% of the original speed.
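The 1/√2 scaling can be verified numerically. The sketch below uses arbitrary illustrative values for the mass, radius and friction force; the ratio of the two maximum speeds comes out as √2 regardless of those choices.

import math

def max_speed(F_max, m, r):
    # Largest speed the available friction can sustain, from F = m v^2 / r
    return math.sqrt(F_max * r / m)

m, r, F = 1000.0, 10.0, 5000.0    # assumed values (kg, m, N)
print(max_speed(F, m, r) / max_speed(F / 2, m, r))   # 1.414..., i.e. sqrt(2)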

Rotation of the Earth and the variation of g

An object that is undergoing circular motion also has an angular speed (ω). The angular speed of the object refers to how fast the angle of a line joining the object to the centre of the circle is changing. The angular speed is related to the linear orbital speed v by v = ωr. We can now explain how the rotational motion of the Earth affects the size of the Earth's g, as mentioned in Chapter 1. The Earth rotates on its own axis and all latitudes on the Earth have the same angular speed; that is, the rate of rotation is the same (one day is 24 hours long regardless of latitude). However, due to the nearly spherical (but slightly elliptical) shape of the Earth, the equator is further from the rotational axis of the Earth than anywhere else on the Earth, the poles being on the axis. Consequently, places on the equator have a larger linear speed than other places on Earth.


Having a larger linear speed also means that any object at this position experiences a greater tendency to be 'flung' off the Earth's surface. This effect counteracts the effect of gravity, and reduces the measured size of g. This apparent reduction in the size of g is greatest at the equator and becomes smaller closer to the poles. Analogy: If you are in a car that is making a right turn, the faster it turns the corner, the more you feel you are being thrown to the left.
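The size of this effect can be estimated with the relation v = ωr given above. The Python sketch below is illustrative only: it assumes a 24-hour rotation and computes the centripetal acceleration required at each latitude; at the equator this value (about 0.034 m s⁻²) is the amount by which the measured g is reduced.

import math

R = 6.378e6                      # equatorial radius of the Earth (m)
omega = 2 * math.pi / 86400      # angular speed for one rotation per 24 h (rad/s)

for lat in (0, 45, 90):
    r_axis = R * math.cos(math.radians(lat))   # distance from the rotation axis
    a_c = omega**2 * r_axis                    # centripetal acceleration, w^2 r
    print(f"latitude {lat:2d} deg: a_c = {a_c:.4f} m/s^2")

# At the equator the full a_c (~0.034 m/s^2) directly reduces the measured g;
# at other latitudes only part of it acts along the local vertical.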

2.4 Circular motion and satellites

n Analyse the forces involved in uniform circular motion for a range of objects, including satellites orbiting the Earth

An orbiting satellite

Astronauts 'floating' inside a spacecraft as a result of their free fall

Satellites orbiting the Earth, including the Moon, are examples of objects undergoing circular motion. What is special about this circular motion is that the centripetal force is provided by the gravitational attraction force, and the centripetal acceleration is the gravitational acceleration. Satellites orbiting the Earth are in a state of free fall. As a satellite orbits the Earth, it is pulled downwards by the Earth's gravitational field. If the satellite were stationary, it would fall vertically down, just as a ripe apple falls straight to the ground. What keeps the satellite from falling to the ground is its linear orbital velocity: at the same time as the satellite is falling, it is also moving forward tangentially, so the Earth's surface effectively curves away beneath it. This results in its path being circular, as shown in Figure 2.6. Since the horizontal and vertical motions are independent, while the satellite is describing a circle it can be considered to be in a state of free fall, just like an object undergoing projectile motion. Indeed, if the orbital speed is not fast enough, the satellite will describe a parabolic path, a projectile motion, and fall back to the Earth. But as the orbital speed increases, the length of the parabola increases. Eventually, when the orbital speed is sufficiently high that the rate of falling is matched by the rate of 'moving away', the satellite will describe a circular path (see Fig. 2.7).

The circular motion described by satellites can be said to have two components: one is a constant tangential speed, the other a free fall towards the Earth with the acceleration due to gravity.


Figure 2.6  A satellite describing a circle. The satellite's initial linear orbital speed is tangential while the satellite simultaneously falls towards the centre of the Earth; the linear orbital velocity is constant in magnitude as the object moves around the Earth, and the resultant motion describes a circular path.

Figure 2.7  An object with different orbital speeds. Small orbital speed: the object describes a parabolic pathway and falls back to the Earth. Medium orbital speed: a longer parabolic pathway, but the object still falls back to the Earth. Large orbital speed: eventually the satellite does not fall back to the Earth as it does in projectile motion; it describes a circular path and orbits around the Earth.

Example 1

Suppose a manned satellite is orbiting the Earth. An astronaut inside the satellite is fixing a communication device and accidentally lets go of a screwdriver.
(a) Describe the subsequent motion of the screwdriver relative to the astronaut.
(b) Explain why this is so.

Solution

(a) The screwdriver will continue to move with the astronaut, hovering beside the astronaut.
(b) When the satellite is orbiting the Earth, it is in a state of free fall. The astronaut inside the satellite shares the motion of the satellite. The released screwdriver also enters a state of free fall. However, the satellite, the astronaut and the screwdriver are all falling at the same rate, since the gravitational acceleration is independent of the mass of the falling object. (A feather and a rock dropped from a tower will land at the same time if there is no air resistance.) Consequently, relative to the astronaut or the satellite, the screwdriver appears to be stationary, 'floating' at the same position.


2.5 A quantitative description of Kepler's third law

n Define the term orbital velocity and the quantitative and qualitative relationship between orbital velocity, the gravitational constant, mass of the central body, mass of the satellite and the radius of the orbit using Kepler's law of periods

As previously explained, satellites orbiting the Earth and planets orbiting the Sun are all examples of circular motion where the centripetal force is provided by the gravitational attraction force. Hence the centripetal force Fc can be equated with the gravitational attraction force Fg:

Fc = Fg

Fc = mv²/r, where m and v are the mass and orbital velocity of the object that is undergoing the circular motion

Fg = GmM/r², where M is the central mass and r replaces d

mv²/r = GmM/r²

v² = GM/r  ⇒  v = √(GM/r)   (an equation for calculating orbital velocities)

Let T = the period of the orbit, that is, the time to complete one revolution.

v = total circumference ÷ period = 2πr/T

Substitute v = 2πr/T into v² = GM/r:

(2πr/T)² = GM/r

4π²r²/T² = GM/r

r³/T² = GM/4π² = k   (Kepler's third law)

Johannes Kepler


Note: k is a constant only for one particular orbital system. It is dependent on the size of the central mass M.


v = √(GM/r)

r³/T² = GM/4π²

Where: v = the linear orbital speed, measured in m s⁻¹
r = the radius of the orbit, measured in m (Note: r has to be measured from the centre of the central mass.)
T = the period of the orbit, measured in s
G = the gravitational constant = 6.67 × 10⁻¹¹ N m² kg⁻²
M = the central mass, measured in kg
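A quick way to check that the two boxed formulas are consistent is to evaluate them for one orbit. The Python sketch below (values assumed for illustration) computes v = √(GM/r) for a 300 km Earth orbit, derives the period from the circumference, and confirms that r³/T² equals GM/4π².

import math

G = 6.67e-11          # gravitational constant (N m^2 kg^-2)
M = 6.0e24            # mass of the Earth (kg)
r = 6.378e6 + 3.0e5   # orbital radius: Earth's radius plus 300 km altitude (m)

v = math.sqrt(G * M / r)    # orbital velocity
T = 2 * math.pi * r / v     # period = circumference / speed

print(f"v = {v:.0f} m/s, T = {T/60:.1f} min")
print(f"r^3/T^2    = {r**3 / T**2:.4e}")
print(f"GM/(4pi^2) = {G * M / (4 * math.pi**2):.4e}")   # same value, as expected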

Problem solving using Kepler's law of periods

■■ Solve problems and analyse information using: r³/T² = GM/4π²

Example 1

If a low Earth orbit satellite is to revolve about the Earth 16 times a day, what must be the altitude of its orbit? Take the mass of the Earth to be 6.0 × 10²⁴ kg and its radius to be 6378 km.


Solution

r³/T² = GM/4π²

r³ = (GM/4π²) × T²

T = total time ÷ total number of revolutions = (24 × 60 × 60) ÷ 16 = 5400 s
M = 6.0 × 10²⁴
G = 6.67 × 10⁻¹¹

∴ r = ∛[6.0 × 10²⁴ × 6.67 × 10⁻¹¹ × (5400)² ÷ 4π²]

≈ 6.66 × 10⁶ m

∴ Its height above the Earth = 6.66 × 10⁶ m − 6.378 × 10⁶ m = 2.83 × 10⁵ m
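The same calculation is easily automated. Here is a minimal Python sketch, using the values from this example, that inverts Kepler's law for r:

import math

G, M = 6.67e-11, 6.0e24
T = 24 * 60 * 60 / 16                           # 16 revolutions per day -> 5400 s
r = (G * M * T**2 / (4 * math.pi**2)) ** (1/3)  # from r^3/T^2 = GM/(4 pi^2)
altitude = r - 6.378e6                          # subtract the Earth's radius
print(f"r = {r:.3e} m, altitude = {altitude:.2e} m")   # ~6.66e6 m and ~2.8e5 m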


Example 2

Satellites X and Y are orbiting planet 'Physics'; their periods and orbital radii, measured from the centre of the planet, are shown in Table 2.2.

Table 2.2
Satellite    Period (units)    Radius (units)
X            2                 3
Y            4                 ?

Find the value for '?'

Solution

Since both satellites are orbiting the same central mass, GM/4π² can be considered a constant that is shared between the two satellites:

rx³/Tx² = ry³/Ty² = k

ry³/4² = 3³/2²

ry³ = 3³ × 4² ÷ 2² = 108

ry = ∛108 ≈ 4.76 units

∴ ? = 4.76

Example 3

Kepler’s third law (law of periods) can also be used to determine the mass of a planet by measuring the orbital radius and period of a moon of the planet. Io is a moon of Jupiter. It has an orbital period of 1.77 Earth days, and an orbital radius of 4.22 × 108 m. Based on this astronomical data, calculate the mass of Jupiter. Solution

r3 GM = 2 2 4π T M=



r = 4.22 × 108 m T = 1.77 × 24 × 60 × 60 = 152928 s G = 6.67 × 10–11

∴ M =

34

r 3 ⋅ 4π2 T2 ⋅ G



(4.22 × 108) × 4π2 (152928)2 × 6.67 × 10–11

≈ 1.90 × 1027 kg
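Rearranged for the central mass, Kepler's law becomes M = 4π²r³/(GT²). A minimal Python sketch with Io's data from this example:

import math

G = 6.67e-11
r = 4.22e8                        # Io's orbital radius (m)
T = 1.77 * 24 * 60 * 60           # Io's period converted to seconds
M = 4 * math.pi**2 * r**3 / (G * T**2)
print(f"mass of Jupiter = {M:.2e} kg")   # approximately 1.90e27 kg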


2.6 Geostationary satellites and low Earth orbit satellites

n Compare qualitatively low Earth and geo-stationary orbits

Satellite orbits can be classified into different types. Two types of orbit that have specific characteristics and properties are geostationary and low Earth orbits. Satellites placed in these orbits are designed to perform specific functions made possible by the nature of their orbits. There are thousands of satellites in Earth orbit, the majority of them no longer functioning. Each of these two very different types of orbit has its own advantages and disadvantages. Geostationary satellites appear to be stationary in the sky when viewed from the surface of the Earth, hence the name: geo (Earth) stationary. However, like other satellites, they are also revolving around the Earth. The reason they appear to stay at one position above the Earth is because:
1. They are situated above the equator.
2. They are orbiting the Earth at the same rate as the Earth rotates.
Thus the satellites and the Earth have the same period; they both complete one revolution or one rotation every 24 hours. Note: The time for the Earth to complete one rotation about its own axis is the length of one day: 24 hours.

Definition

Geostationary satellites are satellites that are situated above the Earth's equator and orbit the Earth with a period of 24 hours, remaining directly above a fixed point on the equator.

Example

What altitude must a geostationary satellite have so that it remains directly over a point on the equator? Take the mass of the Earth to be 6.0 × 10²⁴ kg and its radius to be 6378 km.

Solution

r³/T² = GM/4π²

r³ = (GM/4π²) × T²

G = 6.67 × 10⁻¹¹
M = 6.0 × 10²⁴ kg
T = 24 × 60 × 60 = 86400 s

r = ∛[6.67 × 10⁻¹¹ × 6.0 × 10²⁴ × (86400)² ÷ 4π²]

= 42297524 m ≈ 4.23 × 10⁷ m

∴ The height above the ground = 4.23 × 10⁷ m − 6.378 × 10⁶ m ≈ 3.59 × 10⁷ m, or about 35 900 km


Low Earth orbit satellites are satellites with orbital altitudes between about 200 and 2000 km (within the lower van Allen radiation belt); their orbital periods are less than those of geostationary satellites. They may orbit the Earth many times per day. They do not need to orbit above the equator. Low Earth orbits are often polar, so that the satellite obtains a view of the entire surface of the Earth after several orbits.

Definition

Low Earth orbit satellites are satellites with smaller orbital radii than those of geostationary satellites; their orbital periods are therefore smaller than those of geostationary satellites, and they orbit the Earth many times per day.

Example

If a satellite’s orbit is 430 km above the Earth’s surface, how many revolutions around the Earth will it make in one day? Given the mass of the Earth is 6.0 × 1024 kg, and its radius is 6378 km: Solution

r³/T² = GM/4π²

T² = 4π² × r³/GM

G = 6.67 × 10⁻¹¹
M = 6.0 × 10²⁴
r = 430000 + 6378000 = 6808000 m

T = √[4π² × (6808000)³ ÷ (6.67 × 10⁻¹¹ × 6.0 × 10²⁴)]

= 5579 s (÷ 60 ÷ 60) ≈ 1.55 h

∴ The number of revolutions per day = 24 ÷ 1.55 ≈ 15.5

That is, 15 and a half times per day.

Qualitative comparison between geostationary satellites and low Earth orbit satellites

Quantitative comparisons of geostationary and low Earth orbits involve the application of r³/T² = GM/4π² to solve specific problems. Qualitative comparisons, such as the key differences, advantages, disadvantages and main uses of each type of satellite, are summarised in Table 2.3.


Table 2.3

Key differences
Geostationary satellites:
– Stay at one position directly above a fixed point on the equator.
– Orbit with the Earth's rotation, so their periods are the same as that of the Earth.
– Situated at a very high altitude, approximately 35 900 km above the Earth's surface.
– Situated above the equator, with a limited view of the Earth's surface (approximately one-third).
Low Earth orbit satellites:
– Move above the Earth, so do not have a fixed position.
– Periods are much smaller than that of the Earth; may orbit the Earth many times a day.
– Much lower orbital altitude; can be made to pass above any point on Earth.

Advantages
Geostationary satellites:
– Easy to track, since each satellite stays at one position at all times.
– Do not experience orbital decay.
Low Earth orbit satellites:
– Able to view the Earth's entire surface over several orbits.
– Able to provide scans of different areas of the Earth many times a day, making geographical mapping possible.
– Low altitudes enable a closer view of the surface of the Earth.
– Low altitudes allow rapid information transmission with little delay.
– Low altitudes make launching easier and cheaper, as less fuel is required for the same satellite mass.

Disadvantages
Geostationary satellites:
– Delay in information transmission must be considered.
– Each satellite has a limited view of the Earth as it stays at one point above the Earth (each can 'see' about one-third of the Earth's surface). Therefore many geostationary satellites are required to provide coverage of the entire surface of the Earth, and even then polar regions may still not be properly covered.
– Their high altitude makes launching more difficult and expensive, as more fuel is needed.
– They suffer more damage from incoming energetic cosmic rays due to their high altitude.
Low Earth orbit satellites:
– Much effort is required to track these satellites, as they move rapidly above the Earth.
– Atmospheric drag is quite significant and orbital decay is inevitable.
– The orbital paths of the satellites have to be controlled carefully to avoid interference between one satellite and another.
– They are more severely affected by fluctuations in the Earth's van Allen radiation belts.

Main uses
Geostationary satellites:
– Information relay: information is sent up to one satellite and is bounced off to another place on the Earth.
– Communication satellites, e.g. Foxtel.
– Weather monitoring.
Low Earth orbit satellites:
– Geotopographic studies, including patterns of the growth of crops and the spreading of deserts.
– Remote sensing.
– Geoscanning and geomapping; studying weather patterns.


Orbital decay

n Account for the orbital decay of satellites in low Earth orbit

A satellite picture of Sydney CBD taken by a low Earth orbiting satellite (LEOS)

The atmosphere is a gaseous layer that surrounds the Earth. It allows us to breathe and prevents harmful radiation from reaching the Earth's surface. It extends to more than 300 km above the Earth's surface and its density decreases exponentially with distance from the surface. At the summit of Mount Everest (less than 9 km above sea level), the density of the atmosphere is reduced to approximately one-third of that at sea level. A low Earth orbit satellite is usually placed within the upper limits of the Earth's atmosphere. Although the density of the atmosphere is extremely low at such altitudes, friction is still generated, acting as a resistive force on the moving satellite. This resistive force slows the satellite's orbital velocity (see the circular motion section) and causes the satellite to drop to a lower orbit. (Since Fc = mv²/r: if v decreases, r decreases.) The lower altitude means that the satellite is now in a denser part of the atmosphere. This leads to an even greater resistive force acting on the satellite. The satellite then slows down further at a faster rate, and moves into an even lower orbit. As this process continues, eventually its orbital velocity becomes too small to sustain its circular orbital motion, and the satellite spirals back to the Earth. In the process, it is usually burnt up in the denser atmosphere, due to the heat caused by air friction, although pieces of residue can pass through the atmosphere and land on Earth. An unprotected satellite moving at around 7 km s⁻¹ will mostly vaporise; however, spacecraft such as the space shuttle use this same air friction to slow down in a controlled manner. The kinetic energy of the craft is transformed into heat energy, which is absorbed by the protective insulating tiles. Damage to these tiles may result in the shuttle burning up upon re-entry into the Earth's atmosphere. This topic is discussed further in Section 2.9.


2.7 Escape velocity

n Outline Newton's concept of escape velocity
n Explain the concept of escape velocity in terms of the:
– gravitational constant
– mass and radius of the planet

It has been seen that when the initial horizontal velocity of a projectile increases sufficiently, the object will not fall back to the Earth following a parabolic pathway, but will describe a circular path around the Earth (see Fig. 2.7). If the velocity is increased further, an elliptical path will follow. Eventually, with even greater horizontal speed, it will escape the Earth’s gravitational pull and never come back (see Fig. 2.8). This velocity is referred to as the escape velocity of the Earth. Sir Isaac Newton is considered to have been the first to think about such a situation, launching a projectile horizontally from a very high tower.

Definition

Escape velocity is the velocity at which an object is able to escape from the gravitational field of a planet.

Figure 2.8  An object with different projection speeds. As the initial projection speed increases, the object at first falls back to Earth following a parabolic pathway; at higher speeds it follows a circular orbit, then an elliptical orbit, and finally it escapes.

What factors affect the value of the escape velocity?

If an object is to escape the gravitational field of a planet, its kinetic energy due to its velocity (v) must exceed or at least equal the magnitude of its gravitational potential energy. Therefore:

Kinetic energy Ek = ½mv²

Ek ≥ Ep

½mv² ≥ GmM/r, where r is the radius of the planet, as the object of concern is launched from at or very close to the surface of the planet

v ≥ √(2GM/r)

Note that as m is cancelled from both sides of the equation, the escape velocity is not dependent on the mass of the escaping object. As G is also a constant, the only two variables are r, the radius of the planet (i.e. how far the planet's surface is from its centre), and M, the mass of the planet itself. This has two consequences:
1. Different planets have different escape velocities.
2. A massive body, such as a rocket, will have the same escape velocity as a small object like an atom.
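The boxed result is straightforward to evaluate. The Python sketch below defines v = √(2GM/r) and applies it to the Earth and to Mercury, using the masses and radii quoted in this chapter as example inputs.

import math

G = 6.67e-11

def escape_velocity(M, r):
    # Escape velocity (m/s) from the surface of a body of mass M (kg), radius r (m)
    return math.sqrt(2 * G * M / r)

print(escape_velocity(6.0e24, 6.378e6))   # Earth:   ~1.12e4 m/s (11.2 km/s)
print(escape_velocity(3.3e23, 2.439e6))   # Mercury: ~4.2e3 m/s (4.2 km/s)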


Example 1

Using the data provided in Table 1.1 in Chapter 1, calculate the escape velocity for Mercury. Hence explain why, unlike Earth, Mercury does not have an atmosphere.

Solution

i. ½mv² ≥ GmM/r

v ≥ √(2GM/r)

G = 6.67 × 10⁻¹¹
M = 3.3 × 10²³ kg
r = 2439000 m

v ≥ √(2 × 6.67 × 10⁻¹¹ × 3.3 × 10²³ ÷ 2439000)

v ≥ 4248 m s⁻¹, or v ≥ 4.2 km s⁻¹

ii. The escape velocity for Mercury as calculated above is quite small (compared with 11.2 km s⁻¹ on Earth). Since the escape velocity is independent of an object's mass, the velocity of gas molecules can easily exceed the escape velocity of Mercury due to their own thermal energies, especially as Mercury is close to the Sun and therefore very hot. Consequently, a small planet like Mercury is not able to retain an atmosphere, as the majority of its atmospheric gas molecules escape into space. For the same reason, the Moon cannot retain an atmosphere of its own, as gas molecules there can readily move faster than the Moon's escape velocity.

2.8 Leaving Earth

Launching a rocket and the conservation of momentum

n Analyse the changing acceleration of a rocket during launch in terms of the:
– law of conservation of momentum
– forces experienced by astronauts

Newton proposed three laws of mechanics. The third law states that when one object exerts a force on another, it will itself receive an equal force but opposite in direction. The law of conservation of momentum follows directly from Newton's third law: when two objects interact, the momentum gained by one equals the momentum lost by the other, so the total momentum is conserved. Rockets usually combust hydrogen in oxygen to produce thrust. These gases are liquefied for storage to reduce the storage volume.


In the engine, which is situated at the end of a rocket, the two gases mix and then burn. The combustion of these gases produces enormous amounts of energy and pushes the gases at the end of the rocket backwards or downwards with a very high velocity. As these gases are pushed backwards, the rocket also receives an equal but opposite force, in accordance with Newton’s third law. Hence the rocket receives a forward or upward force that lifts it off the ground. Analogy: When you are swimming, in order to move forward, you push the water backwards with your hands and feet.

Because the force pair is between the rocket and the expelled gases, rockets are able to accelerate even in a vacuum. Unlike aeroplane jet engines, the oxygen required for the combustion of the fuel (hydrogen) must be carried onboard the rocket itself so combustion can continue in space. Physics background: The vast majority of the mass of a spacecraft (such as the space shuttle) at lift-off is the fuel (hydrogen) and the oxygen. Storing these gases as liquids requires very low temperatures (as well as high pressure), which in itself causes problems. The fuel tanks cool down so much that ice forms on the exterior of the tanks, which can fall off during launch and damage the shuttle. The insulating foam designed to keep the tank cold (and to help prevent ice from building up) often falls off during launch. The shuttle Columbia had its heat-shielding tiles damaged so badly that the heat upon re-entry caused the spacecraft to burn up, killing all on board. Liquefied gases occupy less than 0.1% of the volume occupied at normal temperatures and pressure. The oxygen required weighs 16 times more than the fuel itself!
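The momentum bookkeeping behind this force pair can be made concrete with a short calculation. This is a minimal Python sketch, not from the text; the burn rate, exhaust speed and rocket mass are illustrative assumed values only.

# Momentum conservation applied to a rocket engine: the backward momentum
# given to the exhaust each second equals the forward impulse (thrust) on
# the rocket. All numbers below are assumed, illustrative values.
burn_rate = 2000.0     # exhaust mass expelled per second (kg/s)
v_exhaust = 4000.0     # exhaust speed relative to the rocket (m/s)

thrust = burn_rate * v_exhaust        # N; impulse = momentum change per second
print(f"thrust = {thrust:.0f} N")     # 8,000,000 N

m_rocket = 5.0e5                      # current rocket mass (kg), assumed
a = thrust / m_rocket - 9.8           # net upward acceleration near lift-off
print(f"initial acceleration = {a:.1f} m/s^2")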

Increase in acceleration during lift-off

The upward acceleration of a rocket as it ascends does not remain constant; rather, it increases gradually, for three reasons. The first reason is that as the rocket ascends, fuel is consumed, which results in a decrease in the mass of the rocket. The upward thrust force of the rocket remains constant (because the engine power does not change). Therefore, according to Newton's second law, F = ma: if the mass m decreases and the thrust F is unchanged, then a, the acceleration, gradually increases. If we plot acceleration versus time for a single-stage rocket as it ascends (most modern rockets are multistage rockets), the graph looks similar to Figure 2.9. The straight-line portion of the graph indicates that shortly after the engine is turned on there will be acceleration; the curved portion shows the acceleration increasing gradually over time. Note that for multistage rockets the graph is similar, except that the acceleration drops to −g at the end of each stage and rises as before when a new stage is ignited. The second reason is that the direction of the velocity changes from vertical to horizontal as the rocket goes into orbit, so its acceleration is no longer reduced by g. Third, as the rocket ascends it moves further and further away from the planet, resulting in a decrease in the force of gravity. Consequently, the net upward force acting on the rocket increases, hence its acceleration increases.

Figure 2.9  Acceleration of a single-stage rocket versus time

The magnitude of gravity at typical low Earth orbit altitudes is still around 90%–95% of its magnitude at the Earth's surface. However, at the altitude of geostationary satellites, the magnitude of gravity is about 1% of the value at the surface of the Earth.
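The first reason above (constant thrust, shrinking mass) is easy to see numerically. The following Python sketch holds the thrust and g constant and assumes a purely vertical ascent; all values are illustrative, not taken from a real rocket.

# Newton's second law with decreasing mass: a = F/m - g grows as fuel burns.
thrust = 8.0e6        # constant engine thrust (N) -- assumed
m0 = 5.0e5            # lift-off mass (kg) -- assumed
burn_rate = 2.0e3     # fuel consumed per second (kg/s) -- assumed
g = 9.8

for t in range(0, 121, 30):         # first two minutes of flight
    m = m0 - burn_rate * t          # rocket mass shrinks as fuel burns
    a = thrust / m - g              # acceleration increases over time
    print(f"t={t:3d} s  m={m:.2e} kg  a={a:.1f} m/s^2")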

A word about multistage rockets

A schematic drawing of a multistage rocket is shown in Figure 2.10. When the fuel in the first stage is used up, this stage is discarded and the engine of the second stage is turned on. A similar process takes place when the fuel of the second stage is depleted. The purpose of the multiple stages is to allow rockets to avoid carrying empty fuel tanks, which act as unnecessary mass and decrease the efficiency of the launching process.

An ascending multistage rocket

Figure 2.10  A schematic drawing of a multistage rocket, showing (from top to bottom) the instrument-carrying site, stage III and its engine, stage II and its engine, and stage I and its engine

Acceleration and g forces

n Identify why the term 'g forces' is used to explain the forces acting on an astronaut during launch

Acceleration during lift-off can also be assessed using 'g forces'.

Definition

g force is a measure of acceleration force using the Earth's gravitational acceleration as the unit.

As we stand, sit or walk we experience 1 g, due to the normal force that pushes upwards, and we feel our 'normal' weight. However, there are certain situations where the g forces we experience will deviate from 1, and the apparent weight we experience will be different. This is summarised in Figure 2.11 (a) to (e).

Note: The situation shown in Figure 2.11 (b) is commonly experienced during lift-offs. However, the size of the g force has to be carefully controlled, because large g forces can be lethal, as discussed below.

A positive g force is one that is directed from the feet to the head (upwards), whereas a negative g force is in the other direction (downwards). If a person experiences the sensation of feeling more weight than normal, the g force is positive. Feeling less weight (or even negative weight) is due to negative g forces.

Effects of large g forces

Although the human body can tolerate moving at any speed, it cannot withstand very high accelerations or g forces. The maximum g force on shuttle astronauts is limited to around 4 g, while astronauts in training may be subjected to 10 g. An enormous g force acting along the longitudinal axis of the astronaut can be fatal:
■■ Enormous positive g force: the extreme case of the situation shown in Figure 2.11 (b) tends to drain blood away from the head and brain, causing unconsciousness or 'blackout', and death if it is prolonged.
■■ Enormous negative g force: the extreme case of the situation shown in Figure 2.11 (e) causes blood to rush from the feet to the head and brain, which leads to excessive bleeding through orifices and brain damage. This is known as a 'red-out'. It too can be fatal.
To help astronauts withstand extremely large g forces during lift-off, the astronauts lie down so that the forces act along their back-to-front axis, and special cushions are used. Pilots of fighter planes, routinely subjected to large g forces during tight turns and manoeuvres, wear 'g suits' which apply pressure to the lower parts of the body, preventing blood from being pulled away from the brain. As astronauts move around inside the spacecraft once in orbit, and the g forces only act for the several minutes from launch until orbit is attained, g suits are impractical for them.

Astronaut inside a spacecraft

Note: During the re-entry of a rocket, the astronauts again experience positive g forces. This is because decelerating while moving downwards is equivalent to accelerating upwards.

Figure 2.11  Acceleration and g force
(a) At rest or moving with constant velocity: W = N; one experiences 1 g and feels his/her normal weight.
(b) Accelerating upwards (a > 0): W < N; one experiences more than 1 g and feels heavier.
(c) Accelerating downwards with a < g: W > N; one experiences less than 1 g and feels lighter.
(d) Accelerating downwards with a = g: N = 0; one experiences 0 g and feels weightless. This is the state of free fall; it is as if the floor has been removed.
(e) Accelerating downwards with a > g: the normal force is non-existent; one experiences less than 0 g and feels negative weight.
Note: W = weight force; N = normal force
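The panels of Figure 2.11 can be summarised in one small calculation. The sketch below assumes the simple relation N = m(g + a) for an upward acceleration a, so the g force felt is N/(mg) = (g + a)/g; a negative a means accelerating downwards.

# g force felt for a given upward acceleration a (m/s^2).
def g_force(a, g=9.8):
    return (g + a) / g

for a, label in [(0.0, "at rest or constant velocity"),
                 (9.8, "accelerating upwards at 9.8 m/s^2"),
                 (-9.8, "free fall")]:
    print(f"{label}: {g_force(a):.1f} g")   # 1.0 g, 2.0 g, 0.0 g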


Earth’s rotational motion and rocket launching n

A launch site at the equator

2.9

The Earth rotates on its axis from west to east at the quite fast speed of 465 m s–1 at the equator. At the same time it is also orbiting the Sun with a speed of 29.8 km s–1. These high speeds can contribute to rocket launchings. To launch a satellite that is to orbit the Earth, the rocket is usually launched in an eastward direction, the same direction as the Earth’s rotation. This is to take advantage of the rotational speed of the Earth so that the rocket can gain an additional velocity during its lift-off without requiring fuel. Also, this type of launch is usually located at places near the equator, where the linear orbital speed of the Earth’s spin is the greatest (see page 29 in this chapter). This reduces the amount of fuel required to achieve the same orbital velocity, thus making the launch more economical and efficient. Similarly, to launch a satellite that is to probe around the Sun, the rocket is launched in the same direction as the direction the Earth is orbiting the Sun. This again enables the rocket to pick up an additional velocity component during its lift-off, making the launching process more efficient. Most countries with satellite launching capabilities have their launch sites as close to the equator as possible. The United States of America launches its satellites and space shuttles from Florida. There have been plans to construct a new Australian launch facility somewhere in Cape York, as close to the equator as possible.

2.9 Coming back to Earth

n Discuss issues associated with safe re-entry into the Earth's atmosphere and landing on the Earth's surface
n Identify that there is an optimum angle for safe re-entry for a manned spacecraft into the Earth's atmosphere and the consequences of failing to achieve this angle

A spacecraft re-entering the Earth's atmosphere

Spacecraft or space shuttles on their return to Earth are purely under the influence of gravity, and only the atmosphere provides a frictional medium to slow them down. Advanced technology and precise planning are essential in order to overcome the difficulties experienced during the re-entry of manned spacecraft. For a spacecraft to return safely, it is most important to ensure the spacecraft enters the atmosphere at a certain angle, called the optimum re-entry angle, which lies within the very narrow range of about 5.2° to 7.2° to the atmosphere. If the re-entry angle is too big, that is, if it exceeds 7.2°, then the upward resistive force (the friction between the spacecraft and the atmosphere) experienced by the spacecraft will be


too large, so that the spacecraft is decelerated too rapidly. The disastrous consequences are that the g force experienced by the astronauts will be too large to tolerate and may be fatal; also, the huge friction produces an enormous amount of heat too rapidly, causing the spacecraft to melt or burn up. Analogy: If you throw a small piece of rock straight down into a lake, it is slowed down very rapidly, whereas if you throw it with the same force but at an angle, it is slowed down at a much slower rate.

If the re-entry angle is too small, that is, less than 5.2°, then the spacecraft will not enter the atmosphere but will bounce off it and return to space. Because a spacecraft that is about to land does not carry a large fuel reserve to actively guide its return, a spacecraft that bounces off the atmosphere may not have sufficient fuel to re-align itself for a second, controlled re-entry attempt, and may subsequently burn up in an uncontrolled re-entry. Analogy: When you throw a shell into the sea at a very shallow angle (almost horizontally), the shell does not sink into the sea, but rather skims off the surface of the water.

Even when the correct re-entry angle is achieved, the friction between the spacecraft and the atmosphere still produces an enormous amount of heat. Therefore special materials have to be employed to shield the spacecraft and astronauts from this heat blast. On a modern spacecraft, external porous silica tiles are employed to insulate against the heat, while internal reflective aluminium plates are used to reflect the excess heat back into space. Final adjustments are made by air conditioners, which keep the interior of the spacecraft at normal room temperature. The Apollo missions used an ablative shield that burned away, taking the heat energy with it. Another phenomenon that occurs during re-entry into the atmosphere is known as 'ionisation blackout'. This blackout refers to a loss of radio communications for between 30 seconds and several minutes during the high-speed initial phase of re-entry. The intense heat caused by the high speed of the vehicle produces a plasma layer (hot charged particles) that prevents radio waves being received by or transmitted from the spacecraft. This issue is being studied closely by scientists in an endeavour to minimise the effect, or to eliminate it entirely, by designing new antennas and spacecraft shapes.

The work of rocket scientists

n Identify data sources, gather, analyse and present information on the contribution of one of the following to the development of space exploration: Tsiolkovsky, Oberth, Goddard, Esnault-Pelterie, O'Neill or von Braun

Five rocket scientists who made great contributions to the development of rocketry and space exploration are:
n Konstantin Tsiolkovsky
n Robert H. Goddard
n Hermann Oberth
n Wernher von Braun
n Robert Esnault-Pelterie

secondary source investigation PFAs H1 physics skills H13.1a, c H14.1h

An outline of their contributions to the development of space exploration is given here. Further research into one chosen contributor should be undertaken using secondary sources such as the Internet, by typing appropriate keywords into a search engine.


Konstantin Tsiolkovsky

Robert Esnault-Pelterie

German rocket experts (L to R, foreground) Dr Ernst Stuhlinger, Professor Hermann Oberth, Wernher von Braun and Robert Lusser

Konstantin Tsiolkovsky: a Russian theorist (1857–1935)
n Built wind tunnels to study the aerodynamics of a variety of aircraft.
n Was the first to propose the use of liquid fuels instead of solid fuels for rockets.
n Proposed the idea of mounting one rocket engine on top of another, with the first discarded after it is used: the forerunner of modern multistage rockets.
n Proposed solutions to problems in navigation, heating as a consequence of air friction, and maintaining fuel supply for rockets.

Robert H. Goddard


Robert H. Goddard: an American experimentalist (1882–1945)
n Measured the fuel values for various rocket fuels, such as liquid hydrogen and oxygen.
n Launched the world's first liquid-fuel-powered instrument rocket.
n Launched the first liquid-fuelled supersonic rocket.
n Developed pumps for liquid fuels, as well as rocket engines with automatic cooling systems.

Hermann Oberth: an Austrian theorist (1894–1989)
n Simulated the effects of weightlessness.
n Calculated the velocity a rocket must achieve in order to escape the Earth's gravitational pull.
n In his book Ways to Spaceflight, for which he won the Robert Esnault-Pelterie–André Hirsch Prize, proposed the idea of ion and electric repulsion rocket engines.
n Employed von Braun to help him launch his first rocket near Berlin, and was thus responsible for influencing future generations in the area of rocketry.

Wernher von Braun: a German engineer (1912–1977)
n When still very young, helped Oberth to launch his first rocket near Berlin.
n Led the development of the V2 rockets for Germany, which caused massive destruction in other countries during World War II.
n Helped the US to develop advanced missiles for military purposes, and later rocketry for high-altitude studies and space exploration (after surrendering to US forces at the end of World War II).
n Led the development of the Saturn series of rockets, which enabled the first astronauts to walk on the Moon.

Robert Esnault-Pelterie: French aircraft engineer and spaceflight theorist (1881–1957)
n Designed and developed ailerons: moveable surfaces on the trailing edge of an aircraft wing. These became an essential element of most modern aircraft.


n Developed and trialled monoplanes. In 1908 his Pelterie II set a record, flying 1200 m at an altitude of 30 m.
n Invented the 'joy stick' flight control used in many planes built during World War I.
n Calculated in 1913 the energy required to reach the Moon and nearby planets.
n Proposed the use of atomic energy and nuclear power for interplanetary flight.

Projectile motion

first-hand investigation
physics skills H11.1a, b, e H11.2a, b, c, d, e H11.3a, b, c H12.1a, d H12.2a, b H12.4a, b, c, d, e H13.1d H14.1a, d, e, g H14.2a, d

n Perform a first-hand investigation, gather information and analyse data to calculate initial and final velocity, maximum height reached, range and time of flight of a projectile for a range of situations by using simulations, data loggers and computer analysis

A number of commercial packages offer computer simulation programs; data logging products are supplied with software capable of analysing projectile motion experiments.

Useful websites
A simulation with user inputs: http://galileo.phys.virginia.edu/classes/109N/more_stuff/Applets/ProjectileMotion/enapplet.html
A simulation showing component velocities as the projectile moves: http://www.walter-fendt.de/ph11e/projectile.htm
A straightforward projectile motion analysis: http://www.walter-fendt.de/ph11e/projectile.htm

Aim

To analyse an example of projectile motion and compare the calculated value for its horizontal displacement with the one measured experimentally.

Equipment/apparatus

Toy car track, ball bearing, sand tray (or similar), metre ruler

Procedure

1. Set up the experiment as shown in the diagram: a toy car track is fixed to a table, the ball bearing is released from a height h on the track so that it leaves the edge of the table horizontally, falls through the table height Δy, and lands in a sand tray a horizontal distance Δx from the edge of the table.

Set-up of an experiment to analyse an example of projectile motion


2. Calculate the ball bearing’s horizontal launch speed (ux) by using the conservation of

energy, that is, gravitational potential energy is converted to kinetic energy:

n ux = √2gh. (Note: the ball bearing’s rotational kinetic energy and friction results in ux being slightly less than the value given by this equation. A closer value is ux = √1.43gh). 3. Calculate the expected time of flight of the ball bearing, using ay = 9.8 m s−2 (down),

y is the height of the table and uy = 0.

4. Hence calculate the expected value for x, and compare this with the experimental value

of x obtained by performing the experiment (measuring the horizontal distance from the impression made on the sand tray to a point directly below the edge of the table). Be sure that all other variables are kept constant.

5. Present your results in a suitable format. 6. Assess the reliability of the experiment by repeating it several times. (If it is reliable, the

results of each trial will not differ significantly. The experiment is not made more reliable by simply repeating it!) 7. Write a suitable conclusion.

An excellent task to extend this first-hand investigation involves using a spreadsheet (Excel) to calculate the X,Y position of a projectile at small time increments; use this to produce a ‘strobe photograph’.
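In the same spirit, here is a minimal Python version of that spreadsheet task; the release height and table height are assumed example values.

import math

g = 9.8
h = 0.20                       # release height on the track (m) -- assumed
table = 0.80                   # table height, i.e. the fall distance (m) -- assumed
ux = math.sqrt(1.43 * g * h)   # launch speed, using the rolling-ball correction above

t, dt = 0.0, 0.05
while 0.5 * g * t**2 <= table:          # until the ball reaches the sand tray
    x = ux * t                          # horizontal position
    y = table - 0.5 * g * t**2          # height above the tray
    print(f"t={t:.2f} s  x={x:.3f} m  y={y:.3f} m")
    t += dt

Plotting the (x, y) pairs gives the 'strobe photograph' of the semi-parabola.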

CHAPTER revision questions

For all the questions in this chapter, take the mass of the Earth to be 6.0 × 10²⁴ kg, and the radius of the Earth to be 6378 km.

1. Luke is in a train that drives past a platform at a constant velocity of v m s⁻¹. He is tossing a pen vertically up and down. Luke's brother is standing on the platform. Describe the motion of this pen as seen by Luke's brother and justify your answer.

2. A young child is practising with his slingshot. He swings a 10 g rock on a 25 cm rope so that it makes a horizontal circle just above his head, and the rope develops a tension of 10 N. He then releases the rock, and the rock shoots towards a tree which is 1.2 m tall and 0.60 m away.
(a) At what speed will the rock be projected initially?
(b) Assuming this young child is 1.0 m tall, calculate the range of the slingshot.
(c) Will the rock hit the tree?
(d) Calculate the position on the tree where the rock will hit.

3. Sketch a velocity–time graph to describe the motion of a marble that is projected vertically upwards from ground level at 24 m s⁻¹ and then returns to its original position. Label your axes.

4. An Australian athlete is competing in the 2008 Olympic hammer-throw. Assume the string is 0.50 m long and the mass of the ball (hammer) is 1.0 kg.
(a) If the athlete exerts a force of 20 × 10² N while swinging the ball, how fast will the ball be released?
(b) Assuming the athlete is 1.8 m tall, and the ball leaves just above his head at an angle of 40° to the ground, calculate how high the ball will reach and how far away it will land.
(c) Justify why such athletes are usually quite tall and bulky in build.

5. A stone is thrown from the top of a cliff at 15 m s⁻¹, 35° to the horizontal. If the stone takes 5.2 seconds to land:
(a) determine the height of the cliff
(b) calculate the impact velocity

6. Outline Galileo's analysis of projectile motion.

7. Describe one of Galileo's contributions apart from his work on projectile motion.

8. On a normal day, a car is able to turn a corner whose arc has a radius of 10 m at 30 km h⁻¹. On a rainy day, the slipperiness of the road reduces the friction between the tyres and the road by half. If this car is still to turn safely without skidding, how fast should it turn?

9. A mass bob is undergoing conical pendulum motion as shown in the diagram: the string is 85 cm long, the tension in the string is 15 N, and the radius of the circle is 15 cm. Determine the period of this conical pendulum. Note: Period is defined as the time for one complete revolution or oscillation.

10. A student watched a TV documentary that seemed to show astronauts in a spacecraft orbiting the Earth 'floating' inside the spacecraft. The student explained this by saying that there is no gravity acting on these astronauts, thus they are not falling. What is the problem with this statement? How would you explain such a phenomenon?

11. Using equations, describe Newton's contributions to Kepler's third law.

12. Kepler's third law can be written as r³/T² = k, where k is a constant. Justify the following statement: k is only a constant if one is dealing with objects in the same orbital system.

13. The Moon orbits the Earth every 27.3 days. The Moon has a mass of 7.35 × 10²² kg and a radius of 1738 km.
(a) How long would it take a rocket with a constant velocity of 1.0 × 10⁶ m s⁻¹ to reach the Moon?
(b) Determine the orbital velocity of the Moon.
(c) Determine the size of the gravitational attraction force between the Moon and the Earth.

the centre of the planet. It completes one revolution around the planet in 4.2 hours. If the same satellite is to descend to a radius of 1800 m, how long will it take to orbit the planet now? 15. During your study, you have qualitatively evaluated the key features of low Earth orbit

satellites and geostationary satellites. List two advantages of low Earth orbit satellites and geostationary satellites respectively. 16. A low Earth orbit satellite that has a mass of 1.0 tonne is to be placed into an orbit

around the Earth such that it will go around the Earth exactly 12 times a day. (a) Calculate the height above the ground at which the satellite needs to be. (b) Describe one similarity and one difference of launching another satellite which has a mass of 2.0 tonnes into the same orbit. 17. Justify the fact that geostationary satellites do not experience orbital decay. 18. Critically evaluate one technical difficulty of launching a space probe from the surface of

Saturn. (Assume Saturn has a solid surface.) 19. The acceleration gradually increases during the launching of a modern rocket. Analyse

two factors that could contribute to this increase in acceleration. 20. Define the term ‘g force’, and discuss the relevance of g force during the early stage of

rocket launching. 21. A spacecraft is to be launched vertically, and an astronaut is standing upright in the

spacecraft. (a) If the astronaut weighs 68 kg, what is the size of the normal force acting on him or her before the spacecraft takes off? (b) Assuming a constant upward acceleration of 20.8 m s−2, what will be the normal force acting on the astronaut now? (c) What is the weight of this person if it is measured while the spacecraft is accelerating at 20.8 m s−2? (d) Name one possible health hazard that may be experienced by this astronaut. (e) Name two ways of minimising this health hazard.

50

chapter 2  space exploration

22. Suppose you are inside a lift that is going down from level 10 to the ground level at a

constant velocity of 6.00 m s−1. Three seconds before reaching the ground level, the lift starts to decelerate to prepare to stop at the ground level. If your mass is 70.0 kg: (a) What is the net force you experience when the lift is going down at a constant velocity of 6.00 m s? (b) What is the net force you experience in the last three seconds? (c) What will your apparent weight be during the last three seconds? 23. Define escape velocity. Calculate escape velocity for the planet Jupiter and comment on

its magnitude. The mass of Jupiter is 1.90 × 1027 kg and its radius is 71492 km. 24. (a) Discuss the safety issues involved in re-entries of satellites or spacecrafts and

evaluate the significance of the optimum re-entry angle(s). (b) Even if everything is done correctly during the re-entry, there are still technical difficulties. Name one of these difficulties and comment on how this can be controlled. 25. Identify the following statement as true or false. If false please provide a correction to

such a statement: (a) A geostationary satellite can be placed anywhere above the Earth as long as its altitude will allow it to have a period of exactly 24 hours as according to Kepler’s third law. (b) There is no net force acting on a satellite while it is orbiting the Earth. 26. Choose two scientists from the following list:



(a) Konstantin Tsiolkovsky (b) Robert H. Goddard (c) Hermann Oberth (d) Wernher von Braun (e) Robert Esnault-Pelterie Using point form, list three significant contributions these two scientists have made to rocketry and space exploration.

27. Design an experiment that allows you to measure one unknown horizontal projection

velocity of a projectile (assume friction is negligible). In this experiment, you may assume the gravitational acceleration is 9.8 m s−2.


CHAPTER 3

Gravity, orbits and space travel

The solar system is held together by gravity

3.1 Gravity: a revision

PFA H4: 'Assesses the impacts of applications of physics on society and the environment'

n Describe a gravitational field in the region surrounding a massive object in terms of its effects on other masses in it
n Define Newton's Law of Universal Gravitation: F = Gm1m2/d²
n Discuss the importance of Newton's Law of Universal Gravitation in understanding and calculating the motion of satellites

Sputnik 1, the first artificial satellite launched by the human race, sent a wave of fear through the western world, particularly the US. Its launch by the Soviet Union in October 1957 ignited the Space Race between the Soviet Union and the United States and heightened the tensions of the Cold War.

The possibility of satellite motion was first conceived by Sir Isaac Newton in the 1680s. Using the previous work of Galileo on projectile motion and of Kepler on the motion of the planets, Newton extended the idea of a gravitational force to the motion of the Moon around the Earth. That gravitational force obeys an inverse square law (i.e. the force is inversely proportional to the square of the distance between two objects, F ∝ 1/d²).

Sputnik 1

Newton's concept of how a satellite might be placed in Earth orbit uses the analogy of cannonballs shot from 'Newton's mountain'

The Russian scientists responsible for the planning of Sputnik 1 were able to use Newton's law of universal gravitation to calculate how fast the spacecraft needed to be propelled by the rockets so that it would maintain a stable orbit. All other satellites placed in orbit since Sputnik 1 (of which there have been thousands) have used the same calculations, based on Newton's gravity. In our modern society, satellites are relied upon for many uses, including pay TV and other communications, remote sensing for mineral exploration, detailed weather observations and measurements of atmospheric ozone and other pollutants, mapping and monitoring of land use, and modern GPS navigation systems used by aircraft, ships, cars and even missiles.

Useful websites
Newton's law of universal gravitation: http://www.glenbrook.k12.il.us/GBSSCI/PHYS/CLASS/circles/u6l3c.html

Projectile motion and satellites: http://www.glenbrook.k12.il.us/GBSSCI/PHYS/mmedia/vectors/sat.html
More information about Newton, his life and his work: http://www.galileoandeinstein.physics.virginia.edu/lectures/newton.html

A GPS satellite

The concepts dealt with in Chapter 3 are closely related to those in Chapter 1. Most of the Part 3 (refer to the syllabus) material is covered in Chapters 1 and 2 and is not repeated in detail in Chapter 3. Recall that any mass, regardless of its size, has a gravitational field around it. However, gravity, as one of the four forces in nature (the other three being the strong nuclear force, the weak nuclear force and the electromagnetic force), is extremely weak. Unlike the nuclear forces, it can act over very large distances, and it is responsible for keeping the atmosphere and oceans (and us) on Earth, and for holding the solar system and indeed our entire galaxy together. Without gravity, stars would not be held together, and would not have the necessary pressure within their cores to enable nuclear fusion. A very different Universe, dark and lifeless, would be the result. Isaac Newton defined quantitatively the size of the gravitational force in his law of universal gravitation, which states that the size of the attraction force between two objects is proportional to the product of their masses and inversely proportional to the square of their distance of separation (see Chapter 1); mathematically:

F = Gm1m2/d²

Where: m1 and m2 = the masses of the two objects, measured in kg
d = the distance between the two objects, measured from the centre of each object (m)
G = the universal gravitational constant = 6.67 × 10⁻¹¹ N m² kg⁻²


Lastly, recall that the motion of a satellite in orbit around the Earth or other planets is governed by the force of gravity. When space probes are placed in specific orbits, the utilisation of gravity is of paramount importance in determining how the orbit is calculated. These concepts have already been discussed in detail in Chapter 2. A further example is included in this section.

Example

Compare the orbital speed of satellites which have stable orbits with an altitude of 300 km: (a) Above the Earth. (b) Above the Moon’s surface. Solution

(

As the centripetal force required to keep the satellite in orbit Fc =

(

)

)

m v 2 is r

m1m2 , the two equations are equated, d2 the mass m1 (the mass of the satellite) is cancelled and the value for v, the orbital speed, is calculated. provided by the force of gravity FG = G

(a) For Earth (mEarth = 6.0 × 10²⁴ kg):

Fc = FG
mv²/r = G m₁m₂/d²
v² = G mEarth/d  (as r = d)

∴ v = √(6.67 × 10⁻¹¹ × 6.0 × 10²⁴ / ((6370 + 300) × 10³))
v = 7.7 × 10³ m s⁻¹

(b) For the Moon (mMoon = 7.4 × 10²² kg; radius = 1700 km):

v² = G mMoon/d
v = √(6.67 × 10⁻¹¹ × 7.4 × 10²² / ((1700 + 300) × 10³))
v = 1.6 × 10³ m s⁻¹

The calculations show that a spacecraft orbiting the Moon will have a much slower orbital speed than one orbiting the Earth at the same altitude.
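As a quick numerical check of the example above, the following minimal Python sketch (the function and variable names are ours, not from the text) evaluates v = √(GM/d) for both cases:

import math

G = 6.67e-11  # universal gravitational constant (N m^2 kg^-2)

def orbital_speed(central_mass, radius, altitude):
    """Circular orbital speed v = sqrt(G*M/d), where d = radius + altitude."""
    d = radius + altitude
    return math.sqrt(G * central_mass / d)

# Earth: M = 6.0e24 kg, radius 6370 km, altitude 300 km
print(orbital_speed(6.0e24, 6370e3, 300e3))  # ~7.7e3 m/s
# Moon: M = 7.4e22 kg, radius 1700 km, altitude 300 km
print(orbital_speed(7.4e22, 1700e3, 300e3))  # ~1.6e3 m/s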



Note: In Chapter 1, there are three formulae which students often regard as being similar:

F = G m₁m₂/d²    g = G M/d²    Ep = −G mM/r

To remember these formulae efficiently and correctly, you should have a sound understanding of the relationships among them and of how each is derived. This will also help you to be clear about which formula to use for a specific exam question.

Factors that affect the size of gravitational attraction

secondary source investigation
physics skills: H14.1a, c, d, f, g, h; H14.2a, d; H14.3b

■ Present information and use available evidence to discuss the factors affecting the strength of the gravitational force
■ Solve problems and analyse information using: F = G m₁m₂/d²

The law of universal gravitation states that the size of the attraction force is proportional to the product of the masses and inversely proportional to the square of the distance; therefore, the bigger the masses and the smaller the separation, the larger the gravitational force. Due to gravity's weakness (the small value of G), only very massive bodies produce noticeable gravitational fields. For example, a 50 kg person on Earth's surface, approximately 6370 km from the centre of the Earth (which has a mass of 6.0 × 10²⁴ kg), experiences an attractive force of only 490 N. This force may be thought of either as a force on the person (the weight of the person against the Earth) or as an equal but opposite force of the Earth against the person. (Both views are correct.) What will be the size of the attraction force if this person is on Jupiter?

Example 1

Two large asteroids, both with a mass of 1.5 × 10⁹ kg, pass within 300 km of each other. What force of attraction would there be between the asteroids?

Solution

F = G m₁m₂/d²
F = 6.67 × 10⁻¹¹ × (1.5 × 10⁹ × 1.5 × 10⁹) / (300 × 10³)²
F = 1.7 × 10⁻³ N

This result is equivalent to the weight of a small grain of sand!
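The same arithmetic can be checked with a short Python sketch (a minimal illustration; the function name is ours, not the text's):

G = 6.67e-11  # universal gravitational constant (N m^2 kg^-2)

def gravitational_force(m1, m2, d):
    """Newton's law of universal gravitation: F = G*m1*m2/d^2."""
    return G * m1 * m2 / d**2

# Example 1: two 1.5e9 kg asteroids, 300 km apart
print(gravitational_force(1.5e9, 1.5e9, 300e3))  # ~1.7e-3 N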



Example 2

Jupiter has a mass of 1.9 × 10²⁷ kg, more than 300 times the mass of the Earth. Some astronomers believe that Jupiter has saved Earth from being bombarded by many more asteroids than it actually has been in the past. Why is this?

Solution

The answer stems from the way in which Jupiter's gravity attracts asteroids that stray too close to the planet, pulling them in and thus preventing them from continuing in orbits that might pass close to, or collide with, Earth.

Example 3

Tides on Earth are caused by a combination of the Moon's and the Sun's gravity acting on the oceans. Compare the strength of the Moon's gravity at the Moon's surface to its strength at the Earth's surface (approximately 400 000 km from the Moon's centre). The radius of the Moon is 1700 km.

Solution

Newton's law of universal gravitation is an example of an inverse square law: the strength of the gravitational field is inversely proportional to the square of the distance, so the strength of the Moon's gravity at the Moon's surface is

(400 000 / 1700)² ≈ 5.5 × 10⁴

times greater than that exerted at the Earth's surface.

3.2  The slingshot effect

■ Identify that a slingshot effect can be provided by planets for space probes

The principle of the slingshot effect is to use a planet’s gravitational field and orbital speed to help a space probe gain extra speed by flying past the planet.

A space probe

This allows a long-distance space probe to increase its velocity every time it passes a planet without spending any fuel (other than a small amount for manoeuvring). This consequently reduces both the duration of the trip and the fuel requirement.

How does the slingshot effect work? When a space probe passes close to a planet, say Jupiter, the probe accelerates (speeds up) due to the force of the planet's gravitational field. However, when the space probe moves away from Jupiter, it decelerates (slows down) for the same reason. Effectively, the incoming speed of the probe is about the same as its receding speed, except that the trajectory or pathway of the probe has now been changed (see Fig. 3.1). Jupiter, however, is not stationary: it is orbiting the Sun. Therefore, as the probe is pulled along and swung around the planet by the gravitational field of Jupiter, it will also have the speed of Jupiter around the Sun added to its original speed; hence, the probe speeds up with respect to the Sun (see Fig. 3.2). The maximum speed the probe can gain is twice the speed of the planet around the Sun; however, this is not often achieved. Nevertheless, by allowing the probe to bypass many planets during its trip in a specific pattern, the probe can travel towards its destination (not necessarily in a straight path) at a significantly higher speed. This also means that in order to use this type of slingshot effect, all the planets have to be at the right place at the right time. This very narrow launch window requires careful calculations and planning.

The extra velocity gained by the probe does not come for 'free'. When the slingshot effect takes place, momentum must still be conserved; that is, the total initial (angular) momentum of the probe and planet must equal the total final momentum. Since the probe has sped up and gained momentum, the planet must lose an equal amount of momentum and slow down. Because momentum is the product of mass and velocity, and the mass of the planet is much larger than that of the probe, the speed lost by the planet is insignificant compared to the speed gained by the probe.

Figure 3.1  A probe passing Jupiter: (1) the incoming velocity of the probe; (2) the probe speeds up as it is pulled in by the planet; (3) the receding velocity: the probe slows down as it is pulled back by the planet

Figure 3.2  The slingshot effect: the velocity of Jupiter in relation to the Sun adds to the probe's motion, so the overall velocity of the probe in relation to the Sun is faster than its incoming velocity (1, 2, 3 are the same as in Figure 3.1)
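The 'up to twice the planet's speed' gain can be illustrated with a deliberately simplified one-dimensional model, in which the flyby behaves like an elastic collision with a planet of effectively infinite mass. This is a sketch under those assumptions only; the function name and the numbers are illustrative, not from the text:

def slingshot_exit_speed(v_probe, v_planet):
    """Idealised head-on 1-D gravity assist: in the planet's frame the probe's
    speed is unchanged (only reversed), so in the Sun's frame the probe leaves
    with its incoming speed plus twice the planet's orbital speed."""
    return v_probe + 2 * v_planet

# Probe approaching at 10 km/s against a planet orbiting at 13 km/s (Sun frame)
print(slingshot_exit_speed(10e3, 13e3))  # 3.6e4 m/s: a gain of 2 x 13 km/s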

chapter revision questions

For all the questions in this chapter, take the mass of the Earth to be 6.0 × 10²⁴ kg, and the radius of the Earth to be 6378 km.

1. Define the term 'gravitational field'.

2. (a) With the aid of a diagram, briefly describe the meaning of the 'slingshot effect' and the physics principle behind it.
(b) With respect to the slingshot effect, explain the meaning of 'conservation of angular momentum'.


3. The Earth revolves around the Sun once every 365 days. The Sun has a mass of 1.99 × 10³⁰ kg and a radius of 696 000 km.
(a) Determine the distance between the Earth and the Sun.
(b) Calculate the force of attraction between the Earth and the Sun.

Answers to chapter revision questions



CHAPTER 4

Special relativity

Current and emerging understanding about time and space has been dependent upon earlier models of the transmission of light

Introduction

Time runs slower and becomes relative, length contracts and mass becomes greater for fast-moving objects or frames of reference. These strange phenomena may seem to contradict common sense, but they are consequences of the special theory of relativity, proposed by the German/US scientist Albert Einstein (1879–1955) in 1905. In this chapter we will analyse in detail the origin, make-up and impacts of the theory of special relativity. The chapter also evaluates the impacts of the theory in redirecting scientific thinking in the 20th century, as well as on space travel and exploration.

Albert Einstein

4.1  The aether model

■ Outline the features of the aether model for the transmission of light

The aether was once proposed to be an undetectable (by touch, smell or vision), extremely thin, elastic material that surrounded all matter and at the same time permeated all matter on Earth.

The role of the aether

The aether was thought to be the medium through which light propagates. It was always thought that energy required a medium for transmission and propagation. Just as sound needs a medium such as air to propagate, the medium for light to propagate was the aether. It was logical to suppose that as all other types of waves need a medium for their propagation (e.g. water waves, sound waves, earthquake waves, waves in springs and strings), so too does light. Light had been shown to have a definite wave nature due to its ability to display interference and diffraction effects, as well as reflection, refraction and dispersion. Maxwell's equations for electromagnetic radiation also showed mathematically how light was a wave. The difficulty for scientists in the late 1800s was to explain how a wave (light) could travel through a vacuum, unsupported by any medium.




The aether was also thought to be the absolute frame of reference to which all motion was compared. Long ago, scientists like Galileo realised that all motion is relative (see Chapter 2). However, they believed ultimately that there should be an absolute frame of reference to which all motion could be compared. This absolute frame of reference was the aether. Even Newton suggested the need for an absolute frame of reference when studying motion. To illustrate this ‘absolute frame of reference’ concept, consider how the motion of aircraft, ships and cars is measured. The frame of reference used is almost always the Earth’s surface, despite the fact that the Earth’s surface is subject to the motion of the rotation of the Earth and the orbital speed of the Earth around the Sun. The Earth’s surface is very good to use as a frame of reference for the motion of ordinary transport, but becomes less useful to measure the motion of spacecraft or space probes.


Other features of the aether

Not only was the aether extremely thin and transparent, it was also proposed to have a very high elasticity. This property was proposed because the aether had to behave as a solid to transmit transverse waves (light is a transverse wave, and mechanical transverse waves only pass through solids), but at the same time be sufficiently thin to let the planets move through it unimpeded. Having a high elasticity meant the aether behaved like a solid when subjected to instantaneous and varying forces, like those of transverse waves, yet could be distorted infinitely when under a continuous uni-directional force, such as the motion of the planets.

4.2  The Michelson–Morley experiment

■ Describe and evaluate the Michelson–Morley attempt to measure the relative velocity of the Earth through the aether
■ Discuss the role of the Michelson–Morley experiments in making determinations about competing theories
■ Gather and process information to interpret the results of the Michelson–Morley experiment

James Clerk Maxwell had modelled the nature of light (and other, yet to be discovered, electromagnetic radiation) mathematically. Maxwell believed that there was a need for light to travel through a medium, the 'luminiferous aether'. This would mean the measured speed of light should change if the aether itself was moving, or was being moved through (as the Earth does, at about 30 km s⁻¹ as it orbits the Sun), when measured against an absolute frame of reference. Such a requirement would result in the equations themselves changing. The Michelson–Morley experiment, performed in 1887 and repeated many times since, marked a watershed in modern physics. The experiment attempted to show the existence of the aether by detecting its effect on the speed of light (see page 60). The 'null' result (no change in the speed of light was detected) was met with scepticism by many theoretical physicists at the time.

TR  Mapping the PFAs; PFA scaffolds H2

PFA H1: 'Evaluates how major advances in scientific understanding and technology have changed the direction or nature of scientific thinking'
PFA H2: 'Analyses the ways in which models, theories and laws in physics have been tested and validated'



A. A. Michelson (left) and E. W. Morley

WWW>

The first implication of this 'null' result was published in 1889, when it was proposed that the length of bodies might in fact vary as they moved through the aether. Other explanations, using the physics of the time (based on Newton's and Galileo's works), did not stand up to logical argument or made little sense. Not until Einstein's publication of his special theory of relativity in 1905 did a satisfactory explanation arise. By making space and time relative, the need for the aether dissolved. Einstein's theory explained the results of the Michelson–Morley experiment by saying that the measured speed of light will be the same regardless of the relative velocity of the source of light and the observer. Michelson himself did not have confidence in the results of the experiment, and many years later one of Michelson's colleagues was able to produce a result that appeared to show that the speed of light did vary by up to 10 km s⁻¹, depending on the direction in which it was travelling. This result would seem to go against the special theory of relativity. However, Einstein's relativity, despite its effects not then being observable and measurable, became widely accepted among the scientific community. The publication and subsequent widespread acceptance of Einstein's special theory of relativity could not have occurred (or may have been delayed) had it not been for the Michelson–Morley experimental results being debated as widely as they were.

Useful websites

An overview of the Michelson–Morley experiment:
http://galileoandeinstein.physics.virginia.edu/lectures/michelson.html
http://www1.umn.edu/ships/updates/m-morley.htm
http://www.redsofts.com/articles/read/251/7297/The_Invisible_Ether_and_Michelson_Morley.html
http://www.phys.unsw.edu.au/einsteinlight/jw/module3_M&M.htm
Detailed page for Michelson–Morley and special relativity, with animated explanations:
http://www.upscale.utoronto.ca/GeneralInterest/Harrison/SpecRel/SpecRel.html

The Michelson–Morley experiment

secondary source investigation
PFAs: H1, H2
physics skills: H13.1a, b, c; H14.1f; H14.3c, d

■ Gather and process information to interpret the results of the Michelson–Morley experiment

Hypotheses need experimental evidence, and the aether hypothesis was no exception. In 1887, A. A. Michelson and E. W. Morley set up an experiment in the US, now known as the Michelson–Morley experiment. The aim of the experiment was to prove the existence of the aether by detecting the velocity of the Earth through the aether (the 'aether wind'), using light as a tool. The apparatus and the set-up of the experiment are schematically represented in Figure 4.1.

Procedure and principle of the experiment

Note: For a better understanding of this experiment, we will follow the path of the light from label '1' to '5' in Figure 4.1.

1. A light beam is emitted at the light source and travels towards the half-silver mirror.



Figure 4.1  The set-up of the Michelson–Morley experiment: (1) light source; (2) half-silver mirror; (3) mirror A; (4) mirror B; (5) observer (interferometer); the direction of the aether wind is marked. One split beam travels across the aether wind to mirror A and is reflected back, also across the aether wind; the other first travels with the aether wind to mirror B and is then reflected back against it; the re-united beam travels to the interferometer.

2. The half-silver mirror is a device that splits the light beam into two paths: one that will travel with and against the aether wind, and one that will travel across the aether wind. Note that the aether wind is created as a result of the Earth moving through the stationary background aether. The wind 'blows' in the opposite direction to the motion of the Earth.

Analogy: When you drive a car through the air, 'wind' is generated. The direction of the wind is opposite to the motion of the car.

3. One light beam travels across the aether wind to reach mirror A, and then reflects back, also across the aether wind.
4. The other light beam travels first with the aether wind to reach mirror B, and then reflects back to travel against the aether wind. (If the aether wind 'blows' in the opposite direction, this beam will first travel against the aether wind, then with it.)
5. The two light beams re-unite at the half-silver mirror and are reflected to the interferometer.

If the aether exists, then the velocity of the light beam that has travelled with, then against, the aether wind will be affected, while the velocity of the light beam that has travelled across the aether wind will be affected to a different extent. Therefore, when the two beams re-join, they should be out of phase with each other, and an interference pattern should be observed at the interferometer. However, this alone is not enough to prove the existence of the aether, as the phase difference might be caused by the difference in the lengths of the two pathways the light beams have taken. Only when the entire apparatus is rotated 90° and a change in the interference pattern is observed can the existence of the aether be proven, since the beam that was once travelling across the aether wind is



now travelling along and against the direction of the aether wind, and vice versa, showing definitively that the phase difference is due to the effect of the aether on the velocity of light.

Results of the experiment

When the entire apparatus was rotated 90°, no change in the interference pattern was observed. The experiment was carried out at different places and at different times; still a null result was recorded. The null result meant that the motion of the Earth through the aether (the aether wind) could not be detected. Consequently, the existence of the aether could not be proven, and the aether model was still lacking experimental evidence.

Simulation: Michelson interferometer

Impact of the experiment (evaluation)

The inability to provide evidence for the existence of the aether suggested the aether model was an invalid physics theory. However, because it formed the basis of many physics theories and laws, scientists at the time found it hard to discard the model. Many proposals were put forward to try to explain the negative or null result of the experiment. For example, it was suggested that the Earth carried the aether with it, and thus there was no relative motion (hence no aether wind) to detect; the Michelson–Morley experiment was also criticised as not being accurate enough to detect the slight change in the interference pattern. However, these suggestions were simply 'creative', and no physical theory supported them. Then, in 1905, Albert Einstein proposed a revolutionary theory of his own, special relativity, in which he completely abandoned the aether model. The theory not only successfully accounted for the null result of the Michelson–Morley experiment, but at the same time offered an entirely new perspective on the physical world, as discussed later in this chapter.

Note: The best way of explaining the inability to detect something is to say that it does not exist.

An example is shown below to illustrate how a light ray moving directly into and then against the aether wind would have a different average velocity to a light ray moving perpendicularly to the direction of the aether wind.

Two aeroplanes have a race. Both planes can fly through the air at 200 km h⁻¹. Plane 'A' will fly from the start to a point 'X', 400 km to the north, and back. Plane 'B' will fly from the start to a point 'Y', 400 km to the east, and back. The finish is the same point as the start. If there is no wind blowing, the planes will return at the same time. However, on the day of the race the wind is blowing at a constant 50 km h⁻¹ from the north.

Using time = distance/speed for plane 'A':
time north (into the wind) = 400 km / (200 − 50) km h⁻¹ = 2 h 40 min
time south (with the wind) = 400 km / (200 + 50) km h⁻¹ = 1 h 36 min



Plane 'A' takes two hours 40 minutes to get to 'X', and another one hour 36 minutes to return (with the wind behind it): a total of four hours 16 minutes. Plane 'B' must head slightly into the wind so that it gets blown back on course, making a speed in both directions of √(200² − 50²) ≈ 193.65 km h⁻¹. It takes a total of four hours eight minutes (400 km / 193.65 km h⁻¹ × 2, to the nearest minute). Plane 'B' wins the race by a significant margin.

In the Michelson–Morley experiment, the light ray moving perpendicularly to the aether wind likewise returns faster than the light ray moving directly into and then with the aether wind. As the apparatus is rotated through 90°, this effect would vary, and the interference pattern being observed would appear to change. See the associated PFA information on pages 60–62 for the implications of the 'null' result.
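The race arithmetic can be reproduced with a few lines of Python (a minimal sketch; the function names are ours):

import math

def round_trip_parallel(leg, airspeed, wind):
    """Out against the wind and back with it: t = L/(v - w) + L/(v + w)."""
    return leg / (airspeed - wind) + leg / (airspeed + wind)

def round_trip_across(leg, airspeed, wind):
    """Across the wind both ways: effective ground speed sqrt(v^2 - w^2)."""
    return 2 * leg / math.sqrt(airspeed**2 - wind**2)

# 400 km legs, 200 km/h airspeed, 50 km/h wind
print(round_trip_parallel(400, 200, 50))  # ~4.27 h (4 h 16 min, plane 'A')
print(round_trip_across(400, 200, 50))    # ~4.13 h (4 h 8 min, plane 'B')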

4.3  Frames of reference

■ Outline the nature of inertial frames of reference

Definition

A frame of reference is anything with respect to which we describe motion and take measurements. Galileo and Newton had long ago realised the importance of frames of reference and devised their own theory of relativity. They realised that motion is different when described in relation to different frames of reference. Galileo’s analysis of projectile motion is an example of this. They also realised measurements such as velocity are relative depending on the frames of reference used. For example, when the motion of a car is being described, our frame of reference is usually the ground. Its speed is then measured with respect to the ground, say 60 km h–1. However, if another frame of reference is used, say another car that is travelling in the same direction and at the same speed, then we can say the first car is stationary and its speed is zero with respect to that frame of reference. Another example would be a person reading a book at their desk: the person is stationary with respect to the desk, but with respect to the Sun as a frame of reference, the person is orbiting the Sun (being on Earth) with a speed of 30 km s–1.

Types of frames of reference

Inertial

Definition

An inertial frame of reference is one that is either stationary or moving with a constant velocity. In an inertial frame of reference, the laws of motion are always valid. No imaginary force needs to be ‘made up’ in order to explain the motion of objects within inertial frames of reference. Since the laws of motion are true for both a stationary frame of reference and for one that is moving with a constant velocity, there is no physical experiment that can be done within an inertial frame of reference to distinguish



whether such a frame is stationary or moving at a constant velocity. To illustrate this, imagine being on an aeroplane that is flying smoothly at a steady speed. The flight steward serves tea and coffee as easily as in a restaurant; a person can walk up and down the aisle as they would in a cinema; and dropping a ball results in it falling vertically under the influence of gravity and no other force. With ear muffs on and the blinds closed, it is not possible to tell whether the plane is in flight or stationary on the ground.

Non-inertial

Definition

A non-inertial frame of reference is one that is undergoing acceleration.

In a non-inertial frame of reference, the laws of motion do not hold true. For example, if a tennis ball is placed on the floor of a bus, when the bus accelerates forward the ball rolls backwards. For an observer inside this non-inertial frame of reference, no force is observed acting on the ball, yet the ball does not remain stationary. This violates the laws of motion, since Newton's first law states that an object will remain stationary or moving with a constant velocity in the same direction unless acted upon by an unbalanced (net) force. Here a fictitious (fake) backward force needs to be introduced in order to maintain the validity of the laws of mechanics. The existence of a fictitious force is one of the most distinctive features of a non-inertial frame of reference and allows it to be distinguished from an inertial frame of reference. Such forces are also known as 'inertial forces', centrifugal force being an example.

Non-inertial and inertial frames of reference

first-hand investigation
physics skills: H11.1b; H11.2e; H11.3a, b; H12.1a, d; H12.2b; H12.3a; H13.1e; H14.1b, e; H14.3b, c, d

■ Perform an investigation to help distinguish between inertial and non-inertial frames of reference

As an activity, undertake the following procedure:
1. While walking in a straight line at a constant speed along level ground in the open, throw a tennis ball vertically above you and catch it again. Observe the motion of the ball relative to you, while a stationary observer also observes the motion of the ball. It may be possible to digitally record your observations.

2. Compare and contrast the observations of the motion of the ball made by you and by the stationary observer.
3. Next, again while walking steadily as in step 1, throw a ball vertically above you. This time, while the ball is in the air, stop walking, start running forward or make a sudden turn. Again, observe the relative motion of the ball with respect to you, and have a nearby stationary observer make their own observation of the motion of the ball relative to them.
4. Compare and contrast the ball's motion as observed by the two observers on the two different occasions. Discuss the results in the context of frames of reference.
5. While in a bus or train that is travelling along a straight road or track at a steady speed, drop a ball. Observe the ball's motion and then repeat the experiment while the bus or train is taking off, stopping or stopped. Compare the observations made.



6. Before performing this experiment, carry out a risk assessment. This means that: (a) all potential hazards are identified; (b) ways in which these hazards may be minimised or avoided are written down. As an example, walking over level ground in step 1 is potentially hazardous if there is a ditch, rock or pole which could cause injury. The ground or path should be checked beforehand for such hazards.
7. As a final task, imagine that you are inside a locked shipping container. Devise an experiment you could perform to show you are in an inertial frame of reference. Explain your experiment to the class.

4.4  Principles of special relativity

■ Discuss the principle of relativity
■ Describe the significance of Einstein's assumption of the constancy of the speed of light

The negative result of the Michelson–Morley experiment was the major inspiration and incentive for Einstein to propose the special theory of relativity. In his theory, Einstein completely abandoned the aether model because he saw it as totally unnecessary. In the absence of this absolute frame of reference, all inertial frames of reference became relative, and no one frame was truer or more correct than another. The absence of the aether (if the results of the Michelson–Morley experiment to detect the effect of the aether wind on the speed of light could be interpreted as being due to the lack of the aether itself) also meant the velocity of light was constant in all directions and under all circumstances. This successfully accounted for the null result of the Michelson–Morley experiment, as the constancy of the speed of light led to no change in the interference pattern when the experimental apparatus was rotated through 90°. The principles of the theory can be summarised as two major ideas:
1. The velocity of light has a constant value of c, regardless of the relative motion of the source and the observer.
2. All inertial frames of reference are equal, and no inertial frame of reference is truer than any other.

Note: The velocity of light and other electromagnetic radiation (EMR) is c, which is approximately equal to 3 × 10⁸ m s⁻¹. It is the highest velocity any matter can achieve; in fact, except for light and other EMR, nothing can reach the velocity c.

It is also important to note here that the special theory of relativity applies only to inertial frames of reference; it is invalid for non-inertial frames of reference. Relativity involving non-inertial frames of reference and gravity is dealt with in Einstein's general theory of relativity (not required by the syllabus).

Example

A star is moving away from the Earth at v m s–1. What will be the velocity of the star’s light when it is measured upon reaching the Earth?



(a) c
(b) c + v
(c) c − v
(d) More information needed.

Solution

The answer is (a). The velocity of all EMR, including light, is constant at c to all observers. It is independent of the relative motion of the source and observer.

4.5  Impacts of special relativity

■ Identify that if c is constant then space and time become relative
■ Explain qualitatively and quantitatively the consequences of special relativity in relation to:
– the relativity of simultaneity
– the equivalence between mass and energy
– length contraction
– time dilation
– mass dilation
■ Solve problems and analyse information using:
E = mc²
lv = lo √(1 − v²/c²)
tv = to / √(1 − v²/c²)
mv = mo / √(1 − v²/c²)

physics skills: H14.1d, f, g, h; H14.2a, b, c; H14.3a, c, d

Worked examples 10, 11, 12, 13

When Einstein’s special theory of relativity is referred to by non-scientists, many regard it as something far too difficult to comprehend. However, with a little thought, the theory and its consequences can be seen as logical outcomes of the way in which Einstein regarded the speed of light: that is, the speed of light is a constant no matter how it is measured. In other words, because the speed of light is constant, it follows that many quantities, such as time, length and mass which were once thought to be absolute are now relative. Because of the very fast speed of light, Einstein used thought experiments to help explain the special theory of relativity. Such ‘experiments’ can be carried out in the mind of an experimentalist; however, they are not possible to conduct in reality.



Relative simultaneity

Events which are observed to occur at the same time are considered to be simultaneous. However, two observers moving relative to each other at a relativistic speed (i.e. a speed more than about 10% of the speed of light) may not agree that the events are simultaneous: one observer may regard the events as simultaneous while the other may not.

Definition

Two events that are simultaneous to one observer may not necessarily appear simultaneous to another observer who is in a frame that is moving at a relativistic speed. This is known as relative simultaneity.

Consider the following situation (a thought experiment). Suppose that in a very long train at rest, fireworks are launched at both ends of the train at the same time. For an observer standing at the middle of the train, as shown in Figure 4.2 (a), the light from both ends of the train has to travel the same distance (d′ = d) to reach the observer. The time for the light from the head of the train to reach the observer is t′ = d′/c (using s = vt), and the time for the light from the tail to reach the observer is t = d/c. Hence t′ = t: light from both ends reaches the observer at the same time, and the observer consequently concludes that the launching of the fireworks was simultaneous.

Now suppose there is another observer in the same situation as the one described above (the second observer also stands at the middle, and the fireworks are still launched at the same time at both ends), except that the train is moving at a relativistic speed v; that is, the second observer is in a different frame of reference. This is shown in Figure 4.2 (b). In this situation, by the time the light from the front of the train reaches the observer, the observer will have travelled some distance towards the point where that light was emitted. By the same argument, by the time the light from the rear of the train reaches the observer, the observer will have travelled some distance away from the point where that light was emitted. Consequently the time for the light from the front of the train to reach the observer is given by ct′ = d′ − vt′, and the time for the light from the rear is given by ct = d + vt. Thus t > t′ (since d′ is still equal to d).

Figure 4.2 (a)  The thought experiment for the relativity of simultaneity: an observer at the middle of a stationary train, with fireworks launched at the head and tail; light travels at c from each end over the equal distances d′ and d



Figure 4.2 (b)  The thought experiment for the relativity of simultaneity (different frame of reference): the train moves at v m s⁻¹, so by the time each flash reaches the observer, the observer has moved a distance vt towards the head of the train

The conclusion is that the light from the rear end of the train takes longer to reach the observer than the light from the front of the train. The result is that the observer concludes that the firework at the front of the train was launched first, and the one at the rear later. The two simultaneous events no longer appear to be simultaneous.

Note: The speed of light is always constant despite the two light sources moving in different directions; this is the key to the result of relative simultaneity.

This concept of ‘relative simultaneity’ demonstrates that even absolutes are now relative and there is no ‘better’ inertial frame of reference in Einstein’s world of special relativity. Students usually find ‘relative simultaneity’ an abstract concept to appreciate. However, we cannot rely on our common sense when we are dealing with ‘relative simultaneity’ or more generally, special relativity, simply because they do not occur in everyday life. In order to observe a noticeable effect of relative simultaneity, the velocity of the observer has to approach c and the distance light has to travel needs to be close to infinite. Of course, neither of these happens in everyday situations.
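The arrival times in the moving-train case can be evaluated numerically. The sketch below (with illustrative numbers, not values from the text) solves ct′ = d′ − vt′ and ct = d + vt as set out above:

C = 3.0e8  # speed of light (m/s)

def flash_arrival_times(d, v):
    """Arrival times at the mid-train observer, as analysed in the text:
    front flash: c*t = d - v*t  ->  t_front = d / (c + v)
    rear flash:  c*t = d + v*t  ->  t_rear  = d / (c - v)"""
    return d / (C + v), d / (C - v)

# Illustrative: half-train distance of 3e8 m (1 light-second), v = 0.5c
t_front, t_rear = flash_arrival_times(3.0e8, 0.5 * C)
print(t_front, t_rear)  # ~0.67 s vs 2.0 s: the rear flash arrives later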

Figure 4.3  The thought experiment for time dilation: a light source on the floor and a mirror on the ceiling of a train moving at v m s⁻¹; Observer A (inside, measuring to) sees the light follow a vertical path of length d, while Observer B (outside, measuring tv) sees it follow a longer diagonal path of length d′

Time dilation

Time in Einstein's special relativity also loses its absolute nature and becomes relative. Consider the following situation (a thought experiment). In Figure 4.3, a train is moving to the right at a constant velocity v that is close to the speed of light. A light source is placed on the floor of the train while a mirror is fixed onto the ceiling. A light beam is emitted at the light source. Observer A, who is inside the train, will see the light beam going straight up and down. Thus the distance the light has travelled will be twice the vertical distance between the floor and


the roof of the train, say 2d. (Assume the light source and mirror have no thickness, for simplicity.)

Analogy: If you throw a pen straight up inside a train moving at constant velocity, you will see the pen go straight up and drop straight down into your hand.

On the other hand, Observer B, who is outside the train, will see the light beam travelling forward-upward and then forward-downward, as shown in Figure 4.3. The light has to travel diagonally in order to 'catch up' with the moving light source and mirror. Say the light now has to cover a distance 2d′, with d′ being the hypotenuse of the imaginary right-angled triangle whose perpendicular height is the distance between the roof and floor, d. By Pythagoras' theorem, the hypotenuse of a right-angled triangle is always longer than either of the other two sides, hence d′ > d. Since light travels at the constant speed c:

■ to = 2d/c, where to is the time Observer A measures for the light to travel from the light source to the ceiling and back. Observer A is stationary relative to the train (the frame of reference); hence, Observer A's time (to) is referred to as the rest or original time.
■ tv = 2d′/c, where tv is the time Observer B measures for the light to travel from the light source to the ceiling and back. Observer B is outside the train and moving relative to it; hence, the time Observer B measures (tv) is referred to as the moving time.
■ Since d′ > d, it follows that tv > to.

The above phenomenon can be generalised to all time measurements, and is known as 'time dilation' in the theory of special relativity:

Definition

Time dilation can be summarised as 'a moving clock appears to run slower'.

Note: Since the time is running slower, the time interval is lengthened; hence, time is dilated.

Time dilation is governed by the equation:

tv = to / √(1 − v²/c²)

Where:
to = the rest ('original') time, measured by the observer who is stationary relative to the moving frame
tv = the moving time, measured by the observer who is moving relative to the frame of concern
v = the velocity of the frame of concern, measured in m s⁻¹
c = the speed of light, approximately 3 × 10⁸ m s⁻¹
to and tv can take any time units as long as they are consistent.



Example 1

Suppose that in the thought experiment above, shown in Figure 4.3, the train is travelling at 3.0 × 10⁷ m s⁻¹. If one hour has passed for the stationary observer outside the train, how long has passed for the person on the train?

Note: In order to answer time dilation questions correctly, it is very important to identify tv and to correctly.

Solution

For the stationary observer who watches the moving train from outside, there is relative motion between the observer and the train, so this observer's time is the moving time, tv. The observer on the moving train is stationary relative to the train, so this observer's time is the rest time, to.

tv = to / √(1 − v²/c²)
tv = 1 hour = 60 minutes
v = 3.0 × 10⁷ m s⁻¹
to = ?

∴ to = tv × √(1 − (v/c)²)
to = 60 × √(1 − (3.0 × 10⁷ / 3.0 × 10⁸)²)
to = 60 × √(1 − (0.1)²)
to = 59.70 min = 59 min 42 s

To make sure we have done the calculation correctly, we can always check: one hour has passed for the stationary observer, but only 59.70 minutes have passed for the observer on the train. Hence time for the observer in the moving train (frame) runs slower. This agrees with our definition of time dilation!
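Example 1 can be verified with a short Python function (a minimal sketch; the function name is ours, not the text's):

import math

C = 3.0e8  # speed of light (m/s)

def rest_time(t_moving, v):
    """Rearranged time dilation: to = tv * sqrt(1 - (v/c)^2)."""
    return t_moving * math.sqrt(1 - (v / C)**2)

print(rest_time(60.0, 3.0e7))  # ~59.70 minutes, as in Example 1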

Example 2

A super rocket is launched to travel to a star that is 10 light-years away from Earth. If the rocket travels at a velocity of 0.94c on average:
(a) How long will a single journey be, as measured by scientists on Earth?
(b) How long will a single journey be, as measured by the astronauts on board?



Note: The light-year is a distance unit. One light-year is equal to the distance covered by light in one year: 3 × 10⁸ × 3600 × 24 × 365 ≈ 9.46 × 10¹⁵ m.

Solution

(a) s = vt ⇒ t = s/v

t = (10 × (3 × 10⁸) × (3600 × 24 × 365)) / (0.94 × (3 × 10⁸)) seconds
t = 10/0.94 years
t ≈ 10.64 years

(b) tv = to / √(1 − v²/c²)

tv = the time measured by the scientists on Earth, since they are moving relative to the rocket = 10.64 years
to = the time measured by the astronauts, as they are at rest relative to the rocket = ?
v = 0.94c

∴ to = tv × √(1 − (v/c)²)
to = 10.64 × √(1 − (0.94c/c)²)
to = 10.64 × √(1 − 0.94²)
to = 3.63 years

∴ according to the astronauts, the journey will take 3.63 years.

Simulation: time dilation

Length contraction

Definition

Length contraction is when the length of a moving object appears shorter than the length of the same object measured at rest. Length contraction occurs according to the formula:





lv = lo √(1 − v²/c²)

Where:
lo = the rest length; the length of an object observed when it is stationary, or by an observer who is stationary relative to the object
lv = the moving length; the length observed when the object is in motion, or by an observer who is in motion relative to the object
v = the relative velocity of the object, measured in m s⁻¹
c = the speed of light
Again, lo and lv can take any length units as long as they are consistent.

Note: The phenomenon of length contraction can be demonstrated using a thought experiment similar to the one used for time dilation.

Example 1

A pen is measured to be 15 cm long when placed on a table. How long would it be if it were moving at a velocity of:
(a) 340 m s⁻¹?
(b) 2.7 × 10⁸ m s⁻¹?

Solution

(a) Since the pen is 15 cm long at rest, this is the rest length lo. When it is moving, its length is the moving length, lv.

lv = lo √(1 − v²/c²)
lo = 15 cm
v = 340 m s⁻¹
lv = ?

lv = 15 × √(1 − (340 / 3 × 10⁸)²)
lv ≈ 15 cm

∴ the length of the pen when it is moving at 340 m s⁻¹ is still 15 cm.

(b) Similarly:

lv = lo √(1 − v²/c²)
lo = 15 cm
v = 2.7 × 10⁸ m s⁻¹
lv = ?

∴ lv = 15 × √(1 − (2.7 × 10⁸ / 3.0 × 10⁸)²)
lv = 15 × √(1 − (0.9)²)
lv ≈ 6.54 cm

∴ when the pen is moving at 2.7 × 10⁸ m s⁻¹, its length is measured as 6.54 cm.
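Both parts of Example 1 can be checked numerically with a minimal Python sketch (the function name is ours):

import math

C = 3.0e8  # speed of light (m/s)

def contracted_length(l_rest, v):
    """Length contraction: lv = lo * sqrt(1 - (v/c)^2)."""
    return l_rest * math.sqrt(1 - (v / C)**2)

print(contracted_length(15.0, 340.0))  # ~15 cm: no measurable change
print(contracted_length(15.0, 2.7e8))  # ~6.54 cm at 0.9c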

Note: The example above illustrates a very important principle. All effects of special relativity (including time dilation and mass dilation) only become apparent when the speed is a significant proportion of the speed of light, that is, a relativistic velocity.

Also, length contraction only occurs in the dimension of the direction of the motion. The following example illustrates this.

Example 2

For the rectangle below, the length and width are measured to be 15 cm and 10 cm respectively when it is at rest. If it is now moving in the direction shown by the arrow at v = (√5/3)c m s⁻¹, what are its length and width?

Figure 4.4  A 15 cm × 10 cm rectangle moving in the direction of its length at v = (√5/3)c m s⁻¹



Solution

Only the length lies along the direction of the motion; hence, only it will experience length contraction.

lv = lo √(1 − v²/c²)
lo = the length of the rectangle at rest = 15 cm
lv = the length of the rectangle in motion = ?
v = (√5/3)c m s⁻¹

∴ lv = 15 × √(1 − ((√5/3)c / c)²)
lv = 15 × √(1 − (√5/3)²)
lv = 15 × √(1 − 5/9)
lv = 15 × 2/3
lv = 10 cm

∴ the length of the rectangle when it is in motion is 10 cm.

The width does not lie along the direction of the motion, so length contraction does not apply: the width will still be 10 cm. Hence the rectangle will become a square, not a smaller rectangle.

Example 3

A UFO flying at 4.5 × 10⁷ m s⁻¹ is measured to have a length of 30 m by a stationary observer on the ground. What will be its length as measured by a pilot on the UFO?

Solution

The UFO is moving relative to the observer on the ground, so that the length of 30 m measured by the observer is the moving length, hence lv. The pilot on the UFO is stationary relative to the UFO so that the length the pilot measures will be the rest length, lo (even though the pilot is moving).



lv = lo √(1 − v²/c²)

∴ lo = lv / √(1 − (v/c)²)

lv = 30 m
v = 4.5 × 10⁷ m s⁻¹
lo = ?

lo = 30 / √(1 − (4.5 × 10⁷ / 3.0 × 10⁸)²)
lo = 30 / √(1 − (0.15)²)
lo = 30.34 m

∴ the length of the UFO as measured by its pilot is 30.34 m.

Mass dilation

Definition

Mass dilation is when the mass of a moving object appears greater than the same object's mass at rest. Mass dilation occurs according to the formula:

mv = mo / √(1 − v²/c²)

Where:
mo = the rest mass; the mass of an object measured when it is stationary, or by an observer who is stationary relative to the object
mv = the moving mass; the mass measured when the object is in motion, or by an observer who is in motion relative to the object
v = the relative velocity of the object, measured in m s⁻¹
c = the speed of light
Once again, mo and mv can take any mass units as long as they are consistent.

Example 1

Electrons have a rest mass of 9.109 × 10⁻³¹ kg. If an electron is moving at 6.50 × 10⁶ m s⁻¹, what will be its mass?

Solution



mv = mo / √(1 − v²/c²)

mo = 9.109 × 10⁻³¹ kg
v = 6.50 × 10⁶ m s⁻¹
mv = ?

∴ mv = 9.109 × 10⁻³¹ / √(1 − (6.50 × 10⁶ / 3.00 × 10⁸)²)
mv = 9.111 × 10⁻³¹ kg

∴ this moving electron has a mass of 9.111 × 10⁻³¹ kg.
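The same calculation can be checked with a minimal Python sketch (the function name is ours):

import math

C = 3.0e8  # speed of light (m/s)

def moving_mass(m_rest, v):
    """Mass dilation: mv = mo / sqrt(1 - (v/c)^2)."""
    return m_rest / math.sqrt(1 - (v / C)**2)

print(moving_mass(9.109e-31, 6.50e6))  # ~9.111e-31 kg, as in Example 1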

Example 2

(a) The engine of a spacecraft can provide a constant thrust of 2.0 × 10⁵ N. If the spacecraft has a mass of 1.00 × 10⁵ kg at rest, what is its initial acceleration?
(b) What will be its acceleration when its velocity reaches 0.999 999 9c?

Solution

(a) F = ma
F = 2.0 × 10⁵ N
m = 1.00 × 10⁵ kg
a = ?
a = F/m
a = 2.0 m s⁻², in the direction of the thrust

(b) When the spacecraft is at rest, its mass is 1.00 × 10⁵ kg, therefore mo = 1.00 × 10⁵ kg. When it is moving at 0.999 999 9c, its mass is the 'moving mass', mv.

mv = mo / √(1 − v²/c²)
mo = 1.00 × 10⁵ kg
v = 0.999 999 9c
mv = ?

mv = 1.00 × 10⁵ / √(1 − (0.999 999 9c/c)²)
mv ≈ 2.24 × 10⁸ kg

Then:
a = F/m
a = 2.0 × 10⁵ / 2.24 × 10⁸
a = 8.9 × 10⁻⁴ m s⁻², in the direction of the thrust

∴ the acceleration when the velocity reaches 0.999 999 9c is as little as 8.9 × 10⁻⁴ m s⁻².

Therefore, as a consequence of mass dilation, as the speed of an object increases, its mass increases; as a result its acceleration decreases if the acting net force remains constant. If this trend continues, then as the speed of the object approaches the speed of light, its mass approaches infinity and its acceleration approaches zero, as shown in Example 2. It follows that under this circumstance its velocity can no longer increase; hence, the velocity of any object in the Universe cannot exceed that of light. This agrees with all the special relativity formulae: they all contain the factor √(1 − v²/c²), and if v > c, then 1 − v²/c² < 0 and the equations become meaningless.


Useful website

The Poul Anderson novel Tau Zero (also published as To Outlive Eternity) is a good fictional story illustrating relativity when v is almost c. It can be downloaded from:
http://www.webscription.net/

The equivalence between mass and energy

In the special theory of relativity, Einstein proposed:

Definition

Energy–mass equivalence is where energy and mass are equivalent and interconvertible. This relationship is governed by the equation:

E = mc²

Where:
E = the energy, measured in J
m = the mass, measured in kg
c = the speed of light, equal to 3 × 10⁸ m s⁻¹



This simple equation links mass and energy together, so that mass and energy are no longer independent. It also brings revolutionary changes to the laws of physics. In particular, the law of conservation of energy and the law of conservation of mass are challenged. These laws state that neither mass nor energy can be created or destroyed. However, by Einstein's equation, mass can be created by sacrificing energy, and energy can be created by destroying mass. The consequence is that these laws need to be modified in order to accommodate the relationship between mass and energy. The new law becomes the law of conservation of mass and energy, which states: matter and energy cannot be destroyed or created; they can only be transformed.

A nuclear explosion at a testing facility in Mururoa, French Polynesia, 1970—energy derived from the loss of mass

This energy and mass relationship also opens up a whole new realm of energy study. Since energy is related to mass by a factor of c², it follows that if we could convert any mass into energy, the Earth would in effect hold an almost infinite reserve of energy. This idea has been extended into nuclear technologies, and into matter and anti-matter interactions, where a small amount of mass is destroyed to create energy. (Nuclear reactions will be discussed in detail in From quanta to quarks.)

Example 1

Assume one small loaf of bread weighs 250 g. If we were able to convert all of its mass into energy, how much energy could we produce?

Solution

E = mc²
m = 0.25 kg
E = ?
E = 0.25 × (3 × 10⁸)²
E = 2.25 × 10¹⁶ J

Note: This is approximately the amount of energy a huge power station would deliver in one year; the amount of energy released is enormous.



Example 2

The anti-matter counterpart of the electron is called the positron. In simple terms, it is equivalent to an electron carrying a positive charge. When a positron meets an electron, they completely annihilate each other and all of their mass is converted into energy. Calculate the energy released when a positron meets an electron and their masses are totally annihilated.

Solution

Mass of the electron = 9.109 × 10⁻³¹ kg
Mass of the positron = 9.109 × 10⁻³¹ kg (since they are identical except for the signs of their charges)
Total mass annihilated = 1.8218 × 10⁻³⁰ kg
E = mc²
E = (1.8218 × 10⁻³⁰) × (3 × 10⁸)²
E ≈ 1.64 × 10⁻¹³ J
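Both worked examples follow from a one-line function (a minimal Python sketch; the function name is ours):

C = 3.0e8  # speed of light (m/s)

def mass_energy(m):
    """Energy-mass equivalence: E = m*c^2 (m in kg, E in J)."""
    return m * C**2

print(mass_energy(0.25))           # 2.25e16 J: the loaf of bread in Example 1
print(mass_energy(2 * 9.109e-31))  # ~1.64e-13 J: electron-positron annihilation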

4.6  The modern standard of length

■ Discuss the concept that length standards are defined in terms of time, in contrast to the original metre standard

In 1793 the French government decreed that the unit of length would be one ten-millionth (i.e. 10⁻⁷) of the distance from the North Pole to the equator, passing through Paris. This distance was to be called the metre. Three platinum bars were made based on the survey. Although it was found that the surveyors had made an error in their measurements, these bars served as the standard for length rather than the original definition. A platinum-iridium alloy bar replaced the original standard bars in 1889.

The need for a more accurate standard of length led to the formal adoption in 1960 of a definition of the metre based on a wavelength of radiation from krypton-86, specifically 1 650 763.73 wavelengths of a particular emission line measured in a vacuum. This change made the standard more precise than previously, but the need for an even more precise standard saw it changed again in 1983. The new standard, still in use today, is based on the definition of time, a very precisely known standard: the second is defined by 9 192 631 770 oscillations of the Cs-133 atom. Using this precise definition, the present definition of the metre is the length of the path travelled by light in a vacuum during the time interval of 1/299 792 458th of a second. This definition itself assumes that the speed of light is exactly 299 792 458 metres per second. Thus the metre can be determined experimentally.

It may be that a further redefinition of the metre will be required at some point in the future. The present definition does not take into account certain relativistic phenomena, including time dilation, as well as how the speed of light is affected by the strength of the gravitational field through which it is travelling. The need for an even more precise standard for length will drive any changes.



Evidence for special relativity

secondary source investigation
PFAs: H1, H2
physics skills: 11.1, 12.3, 12.4, 13.1, 14.1, 14.5

■ Analyse information to discuss the relationship between theory and the evidence supporting it, using Einstein's predictions based on relativity that were made many years before evidence was available to support it

Just like the aether model, Einstein's special theory of relativity also needed supporting experimental evidence. Unfortunately, highly advanced instruments were required in order to detect accurately the minute changes in time, length and mass at low speeds, while speeds close to the speed of light, which would otherwise produce obvious relativistic changes, were impossible to achieve (and this remains true even with current technology). This made verifying the theory an extremely difficult task. In fact, Einstein proposed his special theory of relativity as early as 1905, but it was not until a few decades later that strong and convincing evidence became available to prove the validity of the theory. Some experimental evidence for special relativity is described below.

Atomic clocks

Atomic clock

Atomic clocks are extremely accurate and sensitive clocks that can measure time down to, and more precisely than, one billionth of a second. The experiment involves two synchronised atomic clocks. One of the atomic clocks is put in a jet plane, which is sent off to fly at a very high speed for a period of time, while the other is left (stationary) on Earth. When the jet plane returns, the times of the two atomic clocks are compared. It is found that the two clocks are no longer synchronised: the clock placed in the jet plane runs slower, as predicted by Einstein; hence, time is dilated.

A muon detector

Muons are a type of subatomic particle. They are formed in the upper atmosphere as a consequence of the interaction between the atmosphere and cosmic radiation. Muons are unstable and subject to natural decay, with a very short half-life of about two microseconds. Although they travel at a very high velocity, with this short half-life it should be impossible for muons to reach the Earth's surface, since the distance from the upper atmosphere to the Earth's surface is so large that almost all muons would decay before reaching it. However, with the aid of advanced instruments, scientific laboratories have detected the presence of muons at the surface. This comes as no surprise according to the special theory of relativity. Since muons travel at a relativistic velocity, their time is dilated as measured by a stationary laboratory; that is, the muons' extremely short half-life of two microseconds (as measured at rest) is lengthened significantly, long enough for them to reach the surface of the Earth. The fact that muons are detected at the surface of the Earth provides very strong evidence for the validity of special relativity.

high velocity, the distance before them appears to move at a relativistic velocity relative to them. Consequently, the distance they have to travel will contract, short enough for the muons to reach the Earth’s surface within the limit of their lifetime.



4.7  Limitations of special relativity and the twin paradox

What is the twin paradox?

Consider two identical twins who have just celebrated their 21st birthday. One of the twins is put on a spacecraft and takes a long return trip to a distant star at a speed close to c, while the other remains on Earth. The Earth twin sees her own state as stationary while the space twin rockets off; thus she sees the time in the spacecraft ticking more slowly than time on Earth (observing time dilation). Hence the Earth twin concludes that the space twin will age less and will be younger upon her return. However, from the space twin's perspective, she is at rest while the Earth twin moves away at a relativistic speed. Thus the space twin concludes that the Earth twin's clock is running slower, and will therefore expect the Earth twin to age less and be younger upon their reunion. However, they cannot both be younger than each other; hence, the paradox emerges!

Analogy: If you are in a moving train, you may think you are stationary while everything outside is moving.

How is the paradox solved?

The paradox is resolved when we take into account the limitation of special relativity: special relativity is valid only for inertial frames of reference. The Earth twin, who is at rest, remains in the same inertial frame of reference throughout, so her conclusion is always valid. The space twin, however, is not always in an inertial frame of reference, as she has to accelerate to move away from Earth, make turns, and decelerate to come back. During these periods of acceleration the space twin occupies a non-inertial frame of reference; consequently, the space twin's conclusion is fallacious.

Note: To produce a correct analysis of the timing of events from the point of view of the space twin, we must incorporate the effects of acceleration on time. As it turns out, if we apply the time dilation formula from the theory of general relativity (which students do not need to know in this course), the answer shows that the space twin returns younger, in agreement with the conclusion made by the Earth twin using special relativity.

What does the ‘twin paradox’ show?

Since only the statement made by the Earth twin is valid, whereas the one made by the space twin must be disregarded, the ‘twin paradox’ is not a true paradox, but rather a stimulus that illustrates the limitation of special relativity.

Thought experiments and reality

secondary source investigation: PFAs H1, H2; physics skills 11.1, 12.3, 12.4, 13.1, 14.3

■ Analyse and interpret some of Einstein’s thought experiments involving mirrors and trains and discuss the relationship between thought and reality

Most of Einstein’s theories are based on abstract thought experiments. For instance, the ideas of time dilation and relative simultaneity are deduced from thought experiments involving trains, light and mirrors. Although the thought experiments are logically sound, they deviate considerably from reality because:


■■ Trains or spacecraft cannot travel at relativistic velocities.
■■ Even if a train could travel this fast, it would be impossible for an observer outside the train to observe anything or make accurate measurements of the events happening inside it.
■■ In the time dilation thought experiment, it is impossible in reality to see the single light beam travelling up from the light source and then reflecting back from the mirror. We would just see the whole train light up.
■■ In the relative simultaneity thought experiment, the train would need to be infinitely long for any noticeable effect to be observed. This cannot be achieved in reality at two levels. First, it is simply impossible to construct such a train. Second, even if such a train were made, the observer would not be able to see the light flashes from the sources placed at both ends, because over an infinitely large distance the light intensity drops to zero before reaching the observer.

Note that only the logic is valid in thought experiments—the experiments themselves cannot be reproduced in reality. Nevertheless, thought experiments are significant, as sometimes they are the only way we can deduce important scientific theories. Can you come up with some thought experiments?

4.8 Implications of special relativity for future space travel

■ Discuss the implications of mass increase, time dilation and length contraction for space travel

Impact of mass dilation

As we have discussed earlier, the increase in mass as the speed of a spacecraft approaches c means that it becomes more and more difficult to accelerate the spacecraft further once its velocity is relativistic. This limits the speed of the spacecraft: even in an ideal situation (e.g. a very powerful engine), the maximum speed is slightly under the speed of light. Unfortunately, compared to the vastness of the Universe, this speed is extremely small, so space trips will take an extremely long time. For example, a trip to our closest star, Alpha Centauri C, will take 4.3 years even when travelling close to the speed of light.

Impact of time dilation

As we have seen in the time dilation section, time in the moving frame appears to run slower. Hence if a spacecraft can travel at a relativistic velocity, its pilots’ time will run significantly slower, and they will age much less than people on Earth. This means the extremely lengthy space travel as observed by the people on Earth (say, 4.3 years) is reduced considerably according to the pilots (0.61 years if v = 0.99c). This allows the pilots to make prolonged space journeys within their lifetime. Also, when they return, they will find their children and probably grandchildren older than they are: more years have passed on Earth than they have experienced. They are able to ‘see the future’.

Impact of length contraction

When a spacecraft is moving through space, relative to the pilots on the spacecraft the space in front of the spacecraft is moving towards them. This means that the distance of the journey will appear shorter to the pilots than that measured by people on Earth. Consequently, it will take the pilots a shorter time to reach their destination. It should be noted, however, that the pilots’ time is at the same time running slower, as measured by an external ‘stationary’ observer.

Note: Once again, this shows length is related to time.
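The 4.3-year and 0.61-year figures above, and the matching contracted distance, can be verified with a minimal Python sketch (assuming, as the text does, a 4.3 light-year journey at v = 0.99c):

```python
import math

v_frac = 0.99        # speed as a fraction of c (from the text)
distance_ly = 4.3    # Earth-frame distance of the journey (light years)

gamma = 1 / math.sqrt(1 - v_frac ** 2)

earth_time = distance_ly / v_frac      # years, Earth frame
pilot_time = earth_time / gamma        # years, pilots' frame (time dilation)
pilot_distance = distance_ly / gamma   # light years (length contraction)

print(f"gamma          = {gamma:.2f}")                        # ~7.09
print(f"Earth time     = {earth_time:.2f} years")             # ~4.34
print(f"pilot time     = {pilot_time:.2f} years")             # ~0.61
print(f"pilot distance = {pilot_distance:.2f} light years")   # ~0.61
```

Notice that the pilots’ shortened travel time and the contracted distance agree (0.61 light years covered in 0.61 years at almost the speed of light): the two effects are two descriptions of the same physics.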

chapter revision questions

1. Aether was once a very significant part of most physics theories.
(a) Outline why aether was important for physics theories.
(b) List three properties of aether.

2. Michelson and Morley were determined to find evidence for the existence of aether.
(a) With the help of a diagram, outline the method they used to try to detect the presence of aether. Specifically comment on why they had to rotate their apparatus by 90°.
(b) What was the result of their experiment, and how could this be interpreted?
(c) What were the consequences of the results of their experiment?

3. A student decides to carry out a pendulum experiment that she did in a school laboratory in a train that is travelling steadily at 20 m s⁻¹. How would the results compare to those obtained in the school laboratory? Explain your answer.

4. (a) Define the term ‘non-inertial frame of reference’.
(b) Give two examples of non-inertial frames of reference.
(c) You are going to perform a simple experiment to confirm that the two examples named in part (b) are non-inertial frames of reference; briefly describe the procedure and results of the experiment.

5. Johnny is running at 5 m s⁻¹.
(a) If he throws a tennis ball at 10 m s⁻¹ in the same direction as he is running, what will be the velocity of the ball with respect to the ground?
(b) What will be the velocity of the ball relative to him?
(c) If he is carrying a torch and shines a beam in the same direction as he throws the ball, what will be the velocity of this beam of light relative to the ground?
(d) What will be the velocity of the light relative to the torch?

6. Would a person in a rocket that is accelerating upwards be able to use special relativity to predict length contractions?

7. Discuss the concept of relative simultaneity. Briefly comment on its consequences.

8. As mentioned in the chapter, muons have a half-life of approximately two microseconds. What will be their half-life as measured by an Earth laboratory, if the muons are travelling at 0.99c?

9. Suppose a super plane is to make a journey from town A to town B, 3400 km apart. If the plane travels at 0.3c throughout the journey:
(a) How long will the journey be according to a resident in town B?
(b) How long will the journey be according to the pilot of the plane?
(c) What is the distance between towns A and B according to the pilot?
(d) If the pilot is able to call the resident in town B, they will disagree with each other both about the time elapsed and about the distance of the journey. Who is correct? Justify your answer.

10. A spherical UFO with a radius of 50.0 m flies past the Earth at a velocity of 2.7 × 10⁸ m s⁻¹.
(a) Determine its height and width as measured by observers on the Earth.
(b) If a human being can live for 100 years on the surface of the Earth, how long will they live for, according to their relatives on the Earth, when they travel inside this UFO?

11. Determine the mass of a moving hydrogen ion travelling at 4.0 × 10⁷ m s⁻¹.

12. A super aircraft has a mass of 30 tonnes (including a full load of fuel) when it is parked at the airport. When it is flying at its maximum speed, its mass is 31 tonnes (including the fuel). Calculate its maximum speed.

13. It is true that all scientific theories need to be proven or validated by experiments. Describe one experiment which was conducted in an attempt to validate Einstein’s special theory of relativity.

14. There are many obvious links between the relativistic effects described in special relativity.
(a) How can the concept of equivalence between mass and energy be linked to the concept of mass dilation?
(b) How can the concept of time dilation be linked with the concept of length contraction?

15. What are thought experiments? How do they differ from reality?

16. Evaluate the implications of special relativity for space travel.

SR  Answers to chapter revision questions
SR  Mind map

motors and generators

CHAPTER 5
The motor effect
Motors use the effect of forces on current-carrying conductors in magnetic fields

Introduction

As the module title ‘Motors and generators’ suggests, this module consists of two main physics topics—electric motors and electric generators. Motors are electric devices that convert electrical energy into mechanical rotation. They have extensive applications in all industries and are also the functional component of many household devices, such as fans and drills. Electric generators work in the opposite way, converting mechanical rotation into electricity; they form the heart of many power stations. This chapter analyses in detail the physics principles behind the functioning of an electric motor.

5.1 Some facts about charges and charged particles

■■ A stationary charge produces an electric field.
■■ A moving charge with constant velocity produces an electric field as well as a magnetic field.
■■ An accelerating charge produces electromagnetic radiation.
■■ A stationary charge will experience a force in an external electric field. This is because the field produced by the stationary charge interacts with the external field, and this interaction results in a force.
■■ Similarly, a moving charge with constant velocity will experience a force in both an external electric field and an external magnetic field.

5.2 The motor effect

■ Identify that the motor effect is due to the force acting on a current-carrying conductor in a magnetic field
■ Discuss the effect on the magnitude of the force on a current-carrying conductor of variations in:
– the strength of the magnetic field in which it is located
– the magnitude of the current in the conductor
– the length of the conductor in the external magnetic field
– the angle between the direction of the external magnetic field and the direction of the length of the conductor
■ Solve problems and analyse information about the force on current-carrying conductors in magnetic fields using F = BIl sin θ

Definition

SR  Worked example 16

The phenomenon that a current-carrying conductor experiences a force in a magnetic field is known as the motor effect. This of course comes as no surprise since we know that current is created by a stream of moving electrons. Since electrons are moving charges, and we know that moving charges experience a force in a magnetic field, it follows that a current-carrying wire will experience a force in a magnetic field.

A quantitative description of the motor effect

It can be shown through experiments (such as using a current balance) that the force on a current-carrying wire placed in a magnetic field depends on the following factors:
1. As the strength of the magnetic field increases, the force increases.
2. As the current in the wire increases, the force increases.
3. If the length of the wire inside the magnetic field increases, the force increases.
4. The size of the force is a maximum when the wire is placed perpendicular to the field lines. It reduces in magnitude and eventually becomes zero when the wire is rotated to a position where it is parallel to the magnetic field.

Mathematically, the force is related to the above factors by the equation:

F = BIl sin θ

Where:
F = force due to the motor effect, measured in N
B = magnetic field strength, measured in tesla (T)
I = current, measured in amperes (A)
l = length of the wire in the external magnetic field, measured in m
θ = the angle between the magnetic field lines and the current-carrying wire, as shown in Figure 5.1

When the wire is perpendicular to the magnetic field, the force is a maximum: θ = 90°, hence F = BIl (Figure 5.1a). When the wire is parallel to the magnetic field, θ = 0° and the force is zero (Figure 5.1b).

Figure 5.1 (a) Forces on a wire that is perpendicular to a magnetic field; (b) forces on a wire that is parallel to a magnetic field; (c) forces on a wire at an angle within a magnetic field


In Figure 5.1(c) the angle θ is measured between the magnetic field lines and the wire carrying the current.

Note: It is easy to make mistakes when choosing which angle to use in the equation F = BIl sin θ. In the situation shown in Figure 5.1(c), it is the angle labelled θ1, NOT the angle labelled θ2. The use of sin θ1 resolves the wire to obtain the component of its length that is perpendicular to the magnetic field. The parallel component does not contribute to the force, so it can be disregarded.

Example 1

[Figure: a current-carrying wire perpendicular to a magnetic field]

A 20 cm long current-carrying wire is placed inside a magnetic field, as shown in the diagram. The magnetic field has a strength of 1.0 T, and the current in the wire is 1.5 A. Calculate the magnitude of the force on the wire when:
(a) It is at the position shown in the diagram (perpendicular to the field).
(b) It makes an angle of 30° to the magnetic field.
(c) It is parallel to the magnetic field.

Solution

(a) F = BIl sin θ, with B = 1.0 T, I = 1.5 A, l = 20 cm = 0.20 m, θ = 90°
F = 1.0 × 1.5 × 0.20 × sin 90° = 0.30 N
(b) F = BIl sin θ, with θ = 30°
F = 1.0 × 1.5 × 0.20 × sin 30° = 0.15 N
(c) F = BIl sin θ, with θ = 0°
F = 1.0 × 1.5 × 0.20 × sin 0° = 0 N
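All three parts of this example can be checked with a few lines of Python. This is a minimal sketch, not from the textbook; the function name is ours.

```python
import math

def motor_force(B, I, l, theta_deg):
    """Force on a current-carrying wire in a magnetic field: F = B I l sin(theta)."""
    return B * I * l * math.sin(math.radians(theta_deg))

# The three cases of Example 1: B = 1.0 T, I = 1.5 A, l = 0.20 m
for theta in (90, 30, 0):
    F = motor_force(B=1.0, I=1.5, l=0.20, theta_deg=theta)
    print(f"theta = {theta:3d} deg -> F = {F:.2f} N")   # 0.30, 0.15, 0.00 N
```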

Example 2

[Figure: a current-carrying wire at an angle to a magnetic field]

The current-carrying wire shown in the diagram experiences a force of 0.5 N. If the strength of the magnetic field is halved and the current quadrupled, what is the new force acting on the wire?

Solution

The original force F = BIl sin θ. For the new force F′, B′ = ½B and I′ = 4I:
F′ = ½B × 4I × l sin θ = 2(BIl sin θ) = 2F
F′ = 2 × 0.5 = 1 N
Therefore, the new force is 1 N.


The direction of the force

Force is a vector quantity, which means that it must have both magnitude (size) and direction. In order to determine the direction of the force acting on a current-carrying wire, a rule called the ‘right-hand palm rule’ can be adopted.

Figure 5.2 The right-hand palm rule

The right-hand palm rule

Using the right hand: when the fingers point in the direction of the magnetic field and the thumb points in the direction of the conventional current, the palm points in the direction of the force.

Example 1

Determine the direction of the force acting on the wire in Examples 1 and 2 above.

Solution

From Example 1:
(a) Fingers point up, thumb points to the right, and the palm (hence the force) points out of the page.
(b) Fingers point up, thumb points to the right, and the palm (hence the force) points out of the page.
(c) Zero force, hence no direction is applicable.
From Example 2: fingers point up, thumb points to the left, and the palm (hence the force) points into the page.

Note: Vector quantities are not complete without directions. When you are asked to calculate vectors, find the magnitude but never leave out the direction!

Example 2

Find the size and the direction of the force acting on the current-carrying wire in each scenario:

(a) [Figure: a wire at 45° to the magnetic field]
Magnetic field strength: 2.0 T; current: 1.0 A; length of wire: 1.2 m


Solution (a)

F = BIl sin θ = 2.0 × 1.0 × 1.2 × sin 45° ≈ 1.7 N, into the page

(b) [Figure: a wire at 75° to the magnetic field]
Magnetic field strength: 3.0 T; current: 0.20 A; length of wire: 35 cm

Solution (b)

F = BIl sin θ = 3.0 × 0.20 × 0.35 × sin 75° ≈ 0.20 N, into the page

(c) [Figure: a wire carrying current to the right in a magnetic field directed into the page]
Magnetic field strength: 5.0 T; current: 10 A; length of wire: 0.20 m

Solution (c)

F = BIl sin θ. Since the current is to the right and the magnetic field is into the page, the angle between them is 90°.
F = 5.0 × 10 × 0.20 × sin 90° = 10 N, up the page

(d) [Figure: a wire parallel to the magnetic field]
Magnetic field strength: 0.30 T; current: 2.0 mA; length of wire: 35 m

Solution (d)

F = BIl sin θ. Since the wire is parallel to the magnetic field, the force the wire experiences is zero.

(e) [Figure: a wire inclined at 45° in a magnetic field directed out of the page]
Magnetic field strength: 0.50 T; current: 3.0 A; length of wire: 0.47 m

Solution (e)

F = BIl sin θ
Note: It is important to realise that the angle between the wire and the magnetic field is still 90°, despite the wire being inclined at 45° as shown.
F = 0.50 × 3.0 × 0.47 × sin 90° ≈ 0.71 N
Direction: fingers point out of the page, thumb points 45° down-left, hence the palm (force) points 45° up-left (or NW).

(f) [Figure: a wire perpendicular to the magnetic field]
Magnetic field strength: 2.5 T; current: 4.0 A; length of wire: 300 mm

Solution (f)

F = BIl sin θ = 2.5 × 4.0 × 0.30 × sin 90° = 3.0 N, to the right
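The right-hand palm rule is equivalent to the vector cross product F = I (l × B), where l points along the conventional current. A minimal sketch (the coordinate convention is ours: x to the right, y up the page, z out of the page) reproduces part (c) above:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Part (c): current of 10 A to the right (+x), field of 5.0 T into the page (-z),
# 0.20 m of wire in the field; F = I (l x B)
I = 10.0
l_vec = (0.20, 0.0, 0.0)    # wire length vector along the current
B_vec = (0.0, 0.0, -5.0)    # magnetic field into the page

F = tuple(I * component for component in cross(l_vec, B_vec))
print(F)   # (0.0, 10.0, 0.0): 10 N up the page, as found above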

Forces on electrons

Electrons moving through a magnetic field will also experience a force; this is true for all moving charges. The direction of the force can also be determined by the right-hand palm rule, with one modification:


Using the right hand: when the thumb points opposite to the direction in which the electron or negative charge is moving, and the fingers point in the direction of the magnetic field, the palm points in the direction of the force.

Note: Negative charges moving in one direction are equivalent to positive charges moving in the opposite direction.

The magnitude of the force acting on charged particles will be covered in Chapter 10.

5.3 Force between two parallel current-carrying wires

■ Describe qualitatively and quantitatively the force between long parallel current-carrying conductors: F/l = k I1I2/d

Magnetic field around a current-carrying wire

A current-carrying wire produces a magnetic field around it. The field forms planar (two-dimensional) concentric rings around the wire, as shown in Figure 5.3. The direction of the magnetic field can be determined using the ‘right-hand grip rule’.

Figure 5.3 (a) Magnetic field around a current-carrying wire; (b) the right-hand grip rule


The right-hand grip rule

When the thumb points in the direction of the conventional current, the fingers curl in the direction of the magnetic field.

The intensity of the magnetic field is proportional to the size of the current and inversely proportional to the distance from the wire. Mathematically:

B = kI/d

where B is the magnetic field strength; k is a constant with a value of 2 × 10⁻⁷; I is the current; and d is the distance from the wire. (This equation is not required by the syllabus.)

Note: The intensity of the magnetic field around a straight current-carrying wire does not follow the inverse square law, as the intensity is inversely proportional to the distance, not to its square.

A quantitative description of the force between two parallel current-carrying wires

It follows that if two straight current-carrying wires are placed parallel to each other, one wire produces a magnetic field and the other wire experiences a force due to this magnetic field. Hence there must be a force between two current-carrying wires.

Magnitude of the force

Consider two equal-length wires separated by a distance d. Wire 1 carries a current I1 and wire 2 carries a current I2. Wire 1 produces a magnetic field, and its strength at distance d is given by:

B = k I1/d    (1)

Wire 2 experiences a force given by F = BI2l sin θ. Since the two wires are parallel, one must be perpendicular to the magnetic field produced by the other wire (recall that the magnetic field forms planar concentric rings perpendicular to the wire; see Fig. 5.3a). Therefore θ is 90° and sin θ = 1. Hence:

F = BI2l    (2)

Substituting equation (1) into equation (2), we obtain the equation that quantitatively describes the force between two parallel current-carrying wires:

F = (k I1/d) × I2l

F = k I1I2l/d   or   F/l = k I1I2/d


F/l = k I1I2/d

Where:
F = the magnitude of the force between the two current-carrying wires, measured in N
I1 and I2 = the currents in the respective wires, measured in A
d = the distance of separation between the wires, measured in m
l = the length of the wires that are parallel to each other, measured in m

Note: Take care when you apply the above equation:
■■ Unlike the gravitational attraction equation, this equation involves d, not d².
■■ The value for l has to be the common (shared) length of the two wires.

Direction of the force

As we have discussed earlier, force is a vector, so calculations of a force are never complete without stating the direction. Consider two wires carrying currents in the same direction; their end view is shown in Figure 5.4 (a). Using the right-hand grip rule, wire 1 produces a magnetic field that is anticlockwise. At the position of wire 2, the magnetic field produced by wire 1 has an upward component.

Figure 5.4 (a) Two wires carrying currents in the same direction (out of the page); (b) two wires carrying currents in opposite directions, wire 1 into the page, wire 2 out of the page


Applying the right-hand palm rule to wire 2, the force acting on it is to the left. By the same argument, the field produced by wire 2 at the position of wire 1 is downwards, so the force acting on wire 1 is towards the right. Therefore the force is an attraction.

Now consider two wires carrying currents in opposite directions; their end view is shown in Figure 5.4 (b). According to the right-hand grip rule, wire 1 produces a magnetic field that is clockwise. At the position of wire 2, the field has a downward direction. Applying the right-hand palm rule, the force acting on wire 2 is to the right. By the same argument, the force acting on wire 1 is to the left. Therefore, the force is a repulsion.

To conclude: two parallel wires carrying currents in the same direction attract each other, whereas wires carrying currents in opposite directions repel each other.

Note: To remember this, just think that it is the opposite of magnets, since two like magnetic poles repel and two opposite magnetic poles attract.

■ Solve problems using: F/l = k I1I2/d

physics skills: H14.1a; H14.2a; H14.3a, d

Example 1

Two wires that carry electricity into a factory (in the same direction) are separated by a distance of 0.20 m. If each carries a current of 6.0 × 10³ A, state the force acting per unit length of the wires. Explain why these wires have to be held firmly to the ground.

SR  Worked examples 14, 15

Solution

The force per unit length F/l = k I1I2/d
= (2 × 10⁻⁷ × 6.0 × 10³ × 6.0 × 10³) ÷ 0.20
= 36 N m⁻¹, attraction

There will be 36 N of force acting on each metre of the wires. This force will pull the wires together and may short-circuit them—they need to be held firmly to avoid this.

Example 2

In the laboratory, a student set up apparatus to determine the size of the currents passing through two parallel wires, each 1.0 m long. The student measured the force between the two wires to be 2.0 × 10⁻³ N when they were separated by 1.0 cm in air. If both wires carried the same current, determine its size.


Solution

F = k I1I2l/d
2.0 × 10⁻³ = (2 × 10⁻⁷ × I² × 1.0) ÷ 0.010
I² = 100
I = 10 A

Note: It is easy to see that the force between two current-carrying wires is an extension of the motor effect.
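Both worked examples can be checked with the same few lines of Python (a minimal sketch, not from the textbook; k = 2 × 10⁻⁷ as given earlier):

```python
import math

k = 2e-7  # constant in F/l = k I1 I2 / d

# Example 1: force per metre between the two factory wires
f_per_l = k * 6.0e3 * 6.0e3 / 0.20
print(f"force per unit length = {f_per_l:.0f} N/m")   # 36 N/m

# Example 2: solve F = k I^2 l / d for the common current I
F, l, d = 2.0e-3, 1.0, 0.010
I = math.sqrt(F * d / (k * l))
print(f"current = {I:.0f} A")                         # 10 A
```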

5.4 Torque: the turning effect of a force

■ Define torque as the turning moment of a force using: τ = Fd

Think about when you want to shut a door with just one push. If you push on the door near the hinges, the push is probably not enough to shut the door. However, if you push at the edge of the door with the same amount of force, you will probably slam the door. Thus, in order to describe the turning effect of a force, we need to introduce the term torque.

Torque is demonstrated in everyday activities such as turning a nut on a bolt with a spanner.

Definition

Torque is the turning effect of a force. Quantitatively, it is the product of the distance from the pivot of turning to where the force is applied and the size of the force perpendicular to that distance. Mathematically:

τ = Fp × d

Where:
Fp = the perpendicular force, measured in N
d = the distance from the pivot, measured in m
τ = the torque, measured in N m

See Figure 5.5 (a) and (b) for how to apply the equation. If the force F is perpendicular to the arm joining the pivot to its point of application, then Fp = F. If F is not perpendicular, we resolve the vector and take only the perpendicular component: Fp = F × cos θ (or F × sin φ, depending on which angle is given).


Figure 5.5 (a) Force F perpendicular to the arm; (b) force F not perpendicular to the arm

Example

Calculate the torque for the following situations. The force is shown by the arrow and the dot represents the pivot point:

(a) [Figure: a 20 N force applied perpendicular to a rod, 2 m from the pivot, with a further 1 m of rod beyond the force]
(b) [Figure: a 5.0 N force applied at 140° to a rod, 4.5 m from the pivot]

Solution (a)

τ = Fp × d. Since the force is perpendicular to the rod, Fp = F = 20 N. The distance is measured from the pivot, which is 2 m, not 3 m.
τ = 20 × 2 = 40 N m; the object will turn clockwise.

Solution (b)

τ = Fp × d. Since the force is not perpendicular to the rod, we resolve the vector to obtain the perpendicular component: Fp = F × sin (180° − 140°) = 5.0 sin 40° ≈ 3.21 N. The distance is 4.5 m.
τ = 3.21 × 4.5 ≈ 14.5 N m; the object will turn clockwise.

Note: Understanding the concept of torque is essential for understanding the functional principle of electric motors.
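Both solutions above can be verified numerically. This is a minimal Python sketch of our own; it resolves the force exactly as in the solutions:

```python
import math

def torque(F, d, angle_deg=90.0):
    """Torque = F_perpendicular x d, where F_p = F sin(angle between F and the rod)."""
    return F * math.sin(math.radians(angle_deg)) * d

print(f"(a) {torque(20, 2.0):.1f} N m")        # 40.0 N m
print(f"(b) {torque(5.0, 4.5, 40):.1f} N m")   # ~14.5 N m
```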

SR  Simulation: beam balance
SR  Simulation: torque puzzle


5.5 Motor effect and electric motors

■ Describe the forces experienced by a current-carrying loop in a magnetic field and describe the net result of the forces

As we have seen earlier, a current-carrying wire will experience a force inside a magnetic field. Not surprisingly, with the correct design we can utilise this property to do useful work for us. The device that does this is the electric motor, which is employed in a wide range of applications.

Definition

An electric motor is a device which converts electrical energy into useful mechanical energy (usually rotation).

A simple DC motor

Electric motors can be classified, according to the current they use to run, into:
■■ direct current (DC) motors
■■ alternating current (AC) motors
They share some common functional principles; however, there are subtle differences in their structures. In this chapter, all explanations will be based on DC motors. AC motors and induction motors will be discussed in Chapter 9.

Functional principle of a simple DC motor

Figure 5.6  A simple DC motor

To begin our study of DC motors, let us first look at a very simple DC motor that you can easily construct at home; Figure 5.6 is a schematic drawing. A loop of wire is placed between the poles of two magnets. The coil is mounted on a central axis that allows it to rotate freely. The ends of the coil are connected to an external power source. When the power source is switched on, the current runs in a clockwise direction, that is, from a to b to c and then to d.

Note: Conventional current always leaves the positive terminal of a power source and enters the negative terminal.

Now let us examine the force acting on each of the four sides of the coil:
■■ On side ab: the current runs into the page; therefore, applying the right-hand palm rule, the force acts downwards. Hence there is a torque acting on side ab which causes it to rotate anti-clockwise about the central axis.
■■ On side cd: the current runs out of the page, so the force is upwards. There is also a torque on side cd which causes it to rotate in the anti-clockwise direction.
Consequently, the two torques act as a couple—a pair of torques on either side of a pivot that both cause the same rotation—on the coil, and the coil spins in an anti-clockwise direction. So now we have a functional motor.

Note: It can also be seen here that sides ab and cd will always be perpendicular to the magnetic field regardless of the position of the coil as it spins.

Side bc is initially parallel to the magnetic field, so it does not experience any force. However, as the coil starts to turn, the current in bc acquires an upward component; applying the right-hand palm rule, the force acting on side bc is into the page. Side ad is in a similar situation to side bc, except that as the coil spins its current acquires a downward component, so the force on it is out of the page. Neither force contributes to the torque. The forces acting on sides bc and ad merely stretch the coil; since the coil is usually rigid, this stretching effect is not seen, and the contribution of these sides to the function of the motor is neglected in the descriptions in this chapter. Only the two sides which are (always) perpendicular to the magnetic field (ab and cd) are considered to be the functioning part of the motor, as they are responsible for creating the motor’s torque and rotation.

A quantitative description of the torque of a DC motor

The torque acting on a coil (loop) of current-carrying wire inside a magnetic field is caused by the forces acting on the sides that are always perpendicular to the magnetic field, such as sides ab and cd in Figure 5.6. It can be quantitatively determined by the equation:

τ = nBIA cos θ

Where:
τ = the torque, measured in N m
n = the number of turns of the coil
B = the magnetic field strength, measured in T
I = the current in the coil, measured in A
A = the area of the coil, measured in square metres (m²)
θ = the angle between the plane of the coil and the magnetic field, measured in degrees. See Figure 5.7 (a) to (c).


Note: This equation can easily be derived by combining the equations τ = Fp × d and F = BIl sin θ. This is not required by the syllabus, but you should try!
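For interest, here is a sketch of one such derivation, for a rectangular coil of length l and width w (so A = lw) whose plane makes an angle θ with the field. The symbols l and w here are our own labels, not from the syllabus.

```latex
% Each of the two sides of length l is always perpendicular to B,
% so each experiences a force F = nBIl (n turns, each carrying current I).
% Each force acts at a moment arm of (w/2)cos(theta) about the axis, so:
\tau = 2 \, nBIl \cdot \frac{w}{2}\cos\theta
     = nBI(lw)\cos\theta
     = nBIA\cos\theta
```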

Consider the following side views of a DC motor, illustrating the size of the torque where the coil is at different angles to the magnetic field. Note that the current runs out of the page at A and into the page at B.

Figure 5.7 (a) The plane of the coil is parallel to the magnetic field, hence θ is 0° and cos θ is 1; the torque is a maximum. (b) The coil is at 60° to the magnetic field, and cos 60° is ½, hence the torque on the coil is half the maximum. (c) The coil is now perpendicular to the magnetic field, so θ is 90° and cos 90° is 0, hence no torque acts on the coil; this comes as no surprise, as the forces now stretch the coil rather than turn it.

physics skills: H14.1a, d, g; H14.2a, b; H14.3c, d

■ Solve problems and analyse information about simple motors using: τ = nBIA cos θ

Example

In a DC motor, a circular loop of coil with a radius of 50 cm is placed between the poles of two magnets which provide a uniform magnetic field of 0.30 T. The coil is situated at 55° to the magnetic field.
(a) Calculate the size of the torque acting on the coil when a current of 2.3 A passes through it.
(b) If the number of turns is increased to 150, what is the new torque?

Solutions

(a) τ = nBIA cos θ, with n = 1, B = 0.30 T, I = 2.3 A, A = πr² = π × 0.50² m², θ = 55°
τ = 1 × 0.30 × 2.3 × π × 0.50² × cos 55° ≈ 0.31 N m
(b) Similarly, when n = 150:
τ = 150 × 0.30 × 2.3 × π × 0.50² × cos 55° ≈ 46.6 N m
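Both answers can be checked with a minimal Python sketch using the numbers above (the function name is ours):

```python
import math

def coil_torque(n, B, I, A, theta_deg):
    """Torque on a current-carrying coil: tau = n B I A cos(theta)."""
    return n * B * I * A * math.cos(math.radians(theta_deg))

A = math.pi * 0.50 ** 2    # area of the circular coil, r = 0.50 m

print(f"(a) {coil_torque(1,   0.30, 2.3, A, 55):.2f} N m")   # ~0.31 N m
print(f"(b) {coil_torque(150, 0.30, 2.3, A, 55):.1f} N m")   # ~46.6 N m
```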


Problem!

The coil in a motor rotates correctly when it is parallel or inclined to the magnetic field. However, when the coil is at the vertical position, the torque acting on it is zero, hence there is no turning effect. As pointed out before, at the vertical position the forces acting on the coil try to stretch the coil rather than turn it. It follows that in order to make a DC motor spin continuously, a split ring commutator is needed.

5.6 The need for a split ring commutator in DC motors

What happens to the coil of a DC motor without a split ring commutator?

As we have seen in the previous section, the torque on the coil is a maximum when the plane of the coil is parallel to the magnetic field, and becomes smaller as the coil rotates. The torque becomes zero when the coil is at the vertical position, that is, perpendicular to the magnetic field. At this point, despite the absence of torque, the coil will still rotate through the vertical position due to its momentum or inertia. However, the forces acting on sides A and B will still be up and down respectively. Hence, as soon as the coil passes through the vertical position, it is pushed back towards the vertical line and carried over to the initial side, as shown in Figure 5.8. When the coil reaches the initial side, it is pushed past the vertical line once again and the process repeats. Each time, however, the coil loses some of its momentum to friction, so each time it moves a smaller distance past the vertical line. The subsequent motion of the coil is an oscillation about the vertical plane that eventually comes to rest. Thus, in order to ensure that DC motors spin continuously and smoothly, a device called a ‘split ring’ commutator must be employed. In a simple DC motor, a split ring consists of two halves of a metal (usually copper) cylindrical ring, electrically insulated from each other (see Fig. 5.9).

Figure 5.8 A DC motor without a split ring commutator

Figure 5.9 (a) A DC split ring commutator; (b) end view of a split ring commutator, in firm contact with carbon brushes that are held in position by springs

A split ring commutator

Figure 5.10 (a)–(d) End views of a split ring commutator at four successive positions during the rotation

How does a split ring commutator work?

When the coil is parallel to the direction of the magnetic field, the commutator is horizontal, as shown in Figure 5.10 (a). As the coil rotates, say anti-clockwise, the commutator also rotates between the brushes; however, each half of the commutator is still in contact with the brush of the same polarity (Fig. 5.10b). When the coil reaches the vertical position, the halves of the commutator are not in contact with the brushes, and the current flowing through the coil drops to zero (Fig. 5.10c). This has no net effect, as the current at the vertical position only stretches the coil anyway. The critical part comes when the coil swings past the vertical position: each half of the commutator changes the brush it is contacting to the one of opposite polarity (Fig. 5.10d). This effectively reverses the current direction in the coil. Consequently, the direction of the force acting on the two sides of the coil is also reversed, allowing the coil to continue to turn in the same direction rather than being pushed back (shown in Fig. 5.11). When the coil reaches the vertical position again after another half cycle, the halves of the commutator change their contacts again, reversing the current direction, and consequently the force direction, to maintain the rotation in the one direction. The repetition of this process via the action of the split ring commutator allows a DC motor to rotate continuously in one direction.

In summary

A split ring commutator helps a DC motor to spin in a constant direction by reversing the direction of the current every half cycle at the vertical positions, through the change in contact of each half of the split ring with the carbon brushes. This allows the force, and hence the torque, acting on the two sides of the coil which are perpendicular to the magnetic field to change direction every half revolution at the vertical positions, consequently allowing the coil of the DC motor to spin continuously.

Figure 5.11 The reversal of the current direction, and hence the force direction, due to the action of a split ring commutator

5.7 Features of DC motors

■ Describe the main features of a DC electric motor and the role of each feature
■ Identify that the required magnetic fields in DC motors can be produced either by current-carrying coils or permanent magnets

The main features of a DC motor are those that are essential for the function of the motor. They are:
■■ a magnetic field
■■ a commutator
■■ an armature
■■ carbon brushes


Magnetic field

As the motor effect states, a current-carrying wire in a magnetic field experiences a force, so a magnetic field is essential for a functional motor. Recall that a magnetic field can be represented by magnetic field lines. The direction of the field lines shows the direction of the magnetic field, and the density of the lines shows its strength: where the field lines are closer together the field is stronger, and where they are spread out the field is weaker. Magnetic fields can be provided by either permanent magnets or electromagnets. Furthermore, as we have seen in this chapter, the structure which provides the magnetic field usually remains stationary and is called the stator, whereas the coil usually rotates inside the field and is called the rotor. However, many industrial motors work with the coil held stationary and the magnets rotating.

Magnets

Permanent magnets

Permanent magnets are ferromagnetic metals which retain their magnetic properties at all times. They can be made in different shapes, the most common being horseshoe magnets and bar magnets like those shown in Figure 5.12. Permanent magnets, like all magnets, have two poles: a north and a south pole. Using a bar magnet as an example, the magnetic field lines leave the north pole and return into the south pole. The magnetic field is much stronger at the poles than at the sides.

Figure 5.12 Magnetic field around a bar magnet

Electromagnets

An electromagnet consists of a coil of current-carrying wire, called a solenoid, wound around a soft iron core. The electromagnet, as its name suggests, only possesses magnetic properties when a current passes through the coil. Electromagnets are usually able to provide stronger magnetic fields than permanent magnets, and have the major advantages that their strength is adjustable and they can be switched on and off as desired. The magnetic field around an electromagnet is similar to that around a bar magnet, as shown in Figure 5.13 (a). To determine the poles of the electromagnet, we can use another ‘right-hand grip’ rule, as shown in Figure 5.13 (b).

Radial magnetic field

As we have discussed, the torque acting on the coil varies as it rotates to different positions with respect to the magnetic field lines. The variation in torque results in a varying speed of rotation of the motor: faster when the coil is parallel to the magnetic field and slower when it is perpendicular. Fortunately, motors usually spin at speeds as high as 3000 revolutions per minute (rpm), hence the variation in rotational speed is undetectable by sight and usually has negligible impact. Nevertheless, when DC motors are used in applications that require a constant rotation speed, a radial magnetic field must be introduced to overcome this technical difficulty. A radial magnetic field can be created by shaping the pole pieces of the magnets into curves, hence the word ‘radial’. The radial magnetic field ensures that the plane of the coil is parallel to the magnetic field over a greater range of positions, so that the angle between the magnetic field and the plane of the coil remains zero for longer. Consequently, the torque acting on the coil is kept at its maximum for longer (since cos θ is always 1, τ = nBIA), as shown in Figure 5.14. With the torque maintained at the maximum for longer, the motor is more efficient.

Figure 5.13 (a) Magnetic field around an electromagnet; (b) the right-hand grip rule for a solenoid: do not confuse this with the other right-hand grip rule mentioned earlier in this chapter

Figure 5.14 The coil is between the poles of a set of radial magnets: note that the plane of the coil is always parallel to the magnetic field lines over this range of rotation angles

An armature with a soft iron core. Note also the three sets of coils used.

Armature

‘Armature’ refers to the coil of wire that is placed inside the magnetic field. The coil is almost always wound around a soft iron core in order to maximise the performance of the motor; a justification is given in Chapter 6. In this chapter, all armatures are represented as if they have only a single loop. In real DC motors, armatures have numerous loops to maximise the torque acting on them, since the torque τ is directly proportional to the number of turns of the coil, n. Also, in real DC motors, three coils are usually used instead of a single coil, with the coils aligned at 120° to each other. This maximises the torque that can act on the coils, making the motor run more efficiently. Finally, as we discussed in the magnetic field section, the coils can be either the rotor or the stator, depending on the design of the DC motor.


Split ring commutator

The features and significance of the split ring commutator have already been discussed in detail earlier in this chapter (Section 5.6). Although all parts of a DC motor are important to its function, the split ring commutator is the most subtle part of the DC motor’s design.

Carbon brushes

Working with the commutator are the carbon brushes. They are responsible for conducting current into and out of the coil, and for contacting different parts of the commutator every half cycle in order to reverse the current and ensure the continuous spin of the motor. Carbon brushes are usually made of graphite (a form of carbon), and are pressed firmly onto the split ring commutator by a spring system. Brushes are preferred to wires and soldering for obvious reasons: if wires were soldered to the armature, then as the armature rotated the wires would eventually tangle and break. Brushes serve the same role as wires in conducting electricity, but they do not rotate along with the moving armature. Graphite in particular is used to make brushes because:
1. Graphite is a lubricant; it reduces the friction between the brushes and the commutator. Reduced friction enables the motor to run more efficiently and minimises wear and tear.
2. Graphite is a very good conductor of electricity.
3. Graphite is able to withstand the very high temperatures generated by the friction between the brushes and the commutator.
Over time, carbon brushes gradually wear out. Motors are designed so that they can be replaced very easily.

Carbon brushes of a DC motor

SR  Simulation 20.3: Electrical motor

Applications of the motor effect

■ Identify data sources, gather and process information to qualitatively describe the application of the motor effect in:
– the galvanometer
– the loudspeaker

secondary source investigation: physics skills H13.1a, b, c, e; H14.1g, h; H14.3d

Galvanometer

Definition

A galvanometer is a very sensitive device that can measure small currents. It is the basic form of an ammeter or a voltmeter.

A galvanometer works on the principle of the motor effect. A simple galvanometer consists of a fine coil with many turns wound around a soft iron core; the coil is placed inside a radial magnetic field produced by permanent magnets with shaped pole pieces. It also has a torsional spring attached to the axis of rotation of the coil. A pointer is attached to the coil and a scale is developed. All these are shown in Figure 5.15.

When the galvanometer is used to measure a small current, the current is passed through the coil. The coil experiences a force inside the radial magnetic field (the motor effect) and starts to rotate, similarly to the rotation of the DC motors described earlier. (In the case of the galvanometer shown in Fig. 5.15, the coil rotates clockwise.) However, as the coil rotates, it stretches the spring, which then exerts a torque that counteracts the initial torque created by the motor effect. As the coil rotates further, the spring is stretched more and the opposing torque it exerts increases. Eventually, when the opposing torque is equal to the forward torque, the coil stops rotating. The degree of movement of the coil is indicated by the pointer on the scale (in this case, to the right of the zero); the bigger the forward torque, the further the coil moves. Since the magnetic field strength and the area of the coil are both kept constant, the forward torque depends only on, and in fact is directly proportional to, the size of the current (τ = nBIA, since a radial magnetic field is used). Hence, the appropriately developed scale gives a reading of the measured current.

The role of the radial magnetic field is again significant to the function of the galvanometer. As discussed before, it is used to produce a constant maximum torque. This ensures that the size of the forward torque depends only on the current and is independent of the position of the coil. Thus, the conversion from the angle of rotation, which is a direct measure of the size of the forward torque, to current is made possible.

Figure 5.15  A moving coil galvanometer
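Because the field is radial, the forward torque is simply τ = nBIA, so at balance the current can be recovered directly from the spring torque: I = τ/(nBA). The Python sketch below is illustrative only; the numbers are our own assumptions, loosely modelled on revision question 15, not measurements from a real instrument.

```python
# At balance, spring torque = forward torque = n B I A, so I = tau / (n B A).
n = 30            # number of turns (assumed)
B = 0.2           # T, radial magnetic field (assumed)
A = 0.020 ** 2    # m^2, square coil with 2.0 cm sides (assumed)
tau = 4.8e-4      # N m, balancing torque exerted by the spring (assumed)

I = tau / (n * B * A)
print(f"I = {I:.2f} A")   # 0.20 A for these assumed values
```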

Loudspeaker

A loudspeaker

A loudspeaker also works on the basis of the motor effect: it converts electrical energy into sound energy. A simple loudspeaker consists of a coil of wire between the pole pieces of the magnets which form the core. A paper diaphragm is attached to this coil–magnet unit. Electrical signal inputs to the loudspeaker are in the form of alternating currents whose waves vary in frequency and amplitude. When signals are fed into the coil inside the loudspeaker, the coil experiences a force as a consequence of the motor effect. As we can see in Figure 5.16, when the current flows from A to B, applying the right-hand palm rule shows that the force acting on the coil pushes it out; when the current flows from B to A, the force pulls the coil in. Since AC signals change direction very rapidly, the coil tends to move in and out very rapidly as well. However, because the coil is wound very tightly on the magnetic pole piece, it cannot move freely, but it can vibrate. The vibration of the coil causes the paper diaphragm to vibrate with it, and the vibrating diaphragm causes the air to vibrate, producing sound waves.


The nature of the sound waves depends purely on the characteristics of the AC signal inputs. The pitch of the sound is dictated by the frequency of the sound waves, which is directly related to how fast the coil vibrates, which in turn depends on the frequency of the AC signals. The loudness of the sound is determined by the amplitude of the sound waves; this in turn depends on the strength of the vibration, which depends on the amplitude of the input AC signals.

Figure 5.16 A loudspeaker

Demonstrating the motor effect

■ Perform a first-hand investigation to demonstrate the motor effect

The syllabus requires you to perform a first-hand investigation to demonstrate the motor effect. The investigation could include both demonstrating the factors that affect the size of the motor effect—that is, the formula F = BIl sin θ—and applying the right-hand palm rule.

To demonstrate the formula F = BIl sin θ, experiments can be set up to show that the size of the motor effect is directly related to the magnetic field strength (B), the strength of the current (I), the length of the wire (l), or the sine of the angle between the magnetic field and the current-carrying wire. Since there are four variables that can change the size of the force, it is essential to test only one at a time. For example, if you wish to demonstrate the effect of the current on the size of the motor effect, change only the current and keep the other three factors constant as controls. An example of such an apparatus—a current balance—is shown in the exercise questions.

To demonstrate the right-hand palm rule, you can choose to change either the direction of the current or the direction of the magnetic field, and observe the resultant movement of the conductor, which indicates the direction of the force. Again, since there are two variables that can affect the direction of the force, change only one at a time and keep the other constant as the control.

first-hand investigation: PFAs H1, H2; physics skills H11.1e; H11.2e; H11.3a, b; H12.1d; H12.2b; H14.1d, g, h

TR  Risk assessment matrix

A current balance from a school laboratory


If the school does not have access to a current balance, there are a number of simpler ways to demonstrate the motor effect, such as the method outlined here.
1. Link approximately 2 m of connecting wires together, and plug the ends into the DC output of a power pack.
2. With the connecting wires lying loosely on a desk, place either a strong horseshoe magnet or the opposing poles of a pair of strong bar magnets so that the wire passes through the magnetic field.
3. With the power pack set to 6 V, turn the current on and off again quickly (to prevent the power pack from overheating).
4. If the wire does not jump up, repeat the experiment with the poles of the magnet reversed. (The movement of the wire while the current is flowing is due to the motor effect.)
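When analysing results from such a demonstration, it helps to know roughly what F = BIl sin θ predicts. The Python sketch below tabulates expected forces for several trial currents; B and l are assumed illustrative values of our own, not measurements from this experiment. The doubling of F whenever I doubles is the proportionality F ∝ I you are testing.

```python
import math

B = 0.050      # T, assumed field strength of the horseshoe magnet
l = 0.050      # m, assumed length of wire inside the field
theta = 90     # wire assumed perpendicular to the field

for I in (0.5, 1.0, 1.5, 2.0):
    F = B * I * l * math.sin(math.radians(theta))
    print(f"I = {I:.1f} A -> expected F = {F * 1000:.2f} mN")
```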

chapter revision questions

1. Draw the magnetic field pattern for the following bar magnets: (a), (b) [diagrams of two bar-magnet arrangements].

2. The diagram below shows a solenoid, with points X, Y and Z marked and the direction of the current indicated. [diagram]
(a) State the direction of the magnetic field at points X, Y and Z respectively.
(b) At which point is the magnetic field strongest?
(c) List two factors that will affect the strength of the magnetic field produced by this solenoid.

3. Determine the force acting on each of the following current-carrying wires:
(a)–(c) [diagrams: field strengths 0.05 T, 0.001 T and 0.50 T; currents 2.0 A, 3.00 A and 9.0 A; lengths 50 cm, 90 cm and 3.50 m; angles 45° and 30.0°]
(d)–(f) [diagrams: field strengths 3.4 T, 1.0 T and 0.12 T; currents 0.30 A (left to right), 5.0 mA and 0.60 A; dimensions 2.8 m, 1.3 m, a 5.0 cm semicircle and 2.0 cm segments; angle 30°]

4. When a current-carrying wire is placed inside an external magnetic field, it will experience a force.
(a) Use appropriate axes to plot the relationship between the angle inscribed by the wire and the magnetic field and the force experienced by the wire, such that a straight line can be obtained.
(b) Suppose the gradient of this straight line is 1, the length of the wire is 20 cm and the current flowing through it is 0.45 A; determine the strength of the external magnetic field.

5. A slope is set up with one conducting rail on each side, and a power source is connected to the rails. A rod is placed on the slope perpendicular to both rails so that a current runs through it. If the rod has a mass of 30 g and is 25 cm long inside a magnetic field of strength 0.6 T, and the current is 1.3 A, determine the acceleration of the rod. [diagram: slope inclined at 20°]

6. A child is sitting on a swing in a modern fun park that is operated electronically. As shown in the figure, the swing seat hangs off the support at points A and B, and is immersed in a uniform magnetic field of 0.50 T directed up the page. To start the swing moving, a current is passed into the swing in the direction of AXYB. [diagram]
(a) Which part(s) is/are responsible for the swing moving—AX, XY or BY? Justify your answer.
(b) In which direction will the swing move initially?
(c) Calculate the magnetic force that is developed to move the swing, if side XY (the seat) is 4.0 m long and the current passing through it is 50 A.
Extension
(d) If sides AX and BY have negligible mass, and side XY plus the child weighs 45 kg altogether, calculate the angle the swing will move through initially. The magnetic field is then turned off to allow the swing to pursue a simple pendulum motion.
(e) If the sides AX and BY are 3 m long, how many cycles will the swing make in a 1 minute interval? (Assume there is no friction.)

7. Wires A, B and C are three parallel current-carrying wires, all 3.3 m long and 1.0 m apart from each other. [diagram: A carries 10 A, B carries 5.0 A, C carries 20 A] Determine the resultant force acting on wire B.

8. Two massless parallel current-carrying wires are supported by strings, as shown in the diagram (end view). If each wire is 1 m long and carries a current of 50 mA, and the supporting strings are 85 cm long, calculate the force acting on each of the wires when the angle between the strings is 60°.

9. Determine the size of the torque acting on the following rods: [diagrams: forces of 4.0 N and 2.4 N; angles 120° and 40°; distances 2.0 m, 60 cm and 20 cm from the pivots]

10. The armature of a DC motor consists of 40 turns of circular coil with a diameter of 20 cm. A voltage of 6.0 V is supplied to the coil. If the coil has a total resistance of 0.50 Ω, and the magnetic field has a strength of 0.20 T:
(a) Determine the size of the torque when the coil is in the horizontal position (its starting position).
(b) How would this torque affect the motion of the coil?
(c) Determine the size of the torque when the coil has moved through an angle of 30°.
(d) Determine the size of the torque when the coil has reached the vertical position.
(e) Comment on the magnitude of the net torque once the motor has reached a constant speed.

11. The coil of a DC motor inclined at 30° to a magnetic field experiences a torque of 3.2 N m. If the coil has now turned so that it is inclined at 45° to the magnetic field and the current through the coil is suddenly tripled, determine the new torque acting on this coil.

12. A motor system is designed to lift a mass, as shown in the diagram [side view: coil on a pivot, with the mass suspended from one side]. If the maximum magnetic field is 2.0 T, the maximum current is 40 A, and the coil is square with sides of 2.4 m, what is the maximum mass the system is able to lift?

13. Assess the role of the split ring commutator in allowing DC motors to spin continuously in one direction.

14. Simple motors used at the domestic level use copper for their commutators. State two advantages and two disadvantages of copper commutators compared with graphite commutators.

15. The spring inside a galvanometer is essential for its function.
(a) What is the role of this spring?
(b) The radial magnetic field of this galvanometer is 0.2 T: what is the significance of using the radial magnetic field?
(c) If the galvanometer contains 30 turns of a square coil, with sides each measuring 2.0 cm, calculate the torque exerted by the spring when the needle is pointing at 0.20 A.

16. Evaluate the importance of the motor effect to the functioning of a loudspeaker.

17. Design a first-hand investigation to assess the relationship between the current in a straight wire (conductor) and the force it experiences when it is placed inside a magnetic field.

SR  Answers to chapter revision questions


CHAPTER 6

Electromagnetic induction

The relative motion between a conductor and a magnetic field is used to generate an electrical voltage

Introduction

British scientist Michael Faraday was the first person to realise that a changing magnetic field was able to produce electricity, a phenomenon known as electromagnetic induction. In 1831, after a series of experiments, Faraday stated his law of electromagnetic induction, which quantitatively described the process. The understanding of electromagnetic induction was further improved by Lenz's law, proposed by the Russian physicist Heinrich Friedrich Emil Lenz (1804–65) in 1834. Electromagnetic induction provides a means of converting mechanical energy into electrical energy; this later became the principle behind electric generators. Electromagnetic induction also has other useful applications, and these will be described in this chapter.

Definition

Electromagnetic induction is the interaction between magnetic fields and conductors to generate electricity.

6.1 Michael Faraday's discovery of electromagnetic induction

■ Outline Michael Faraday's discovery of the generation of an electric current by a moving magnet

Hans Christian Oersted, a Danish physicist and chemist, was the first person to observe the deflection of a compass needle placed near a current-carrying wire. He deduced that a current-carrying wire must produce a magnetic field around it. This discovery established the link between electricity and magnetic fields, and initiated the study of electromagnetism. Not surprisingly, thoughtful scientists wondered: if electricity could produce magnetic fields, why couldn't a magnetic field produce electricity? Many scientists started to investigate ways of using magnetic fields to produce electricity, but none of them succeeded. The problem lay in their failure to realise that a changing magnetic field is necessary for the generation of electricity. The British chemist and physicist Michael Faraday was the first scientist to solve this riddle (although it is believed that Joseph Henry made the discovery earlier). He successfully demonstrated the generation of electricity using a changing magnetic field, and proposed the theory of electromagnetic induction. So for now, remember: in order to produce electricity, a changing magnetic field is essential.


PFA H1: 'Evaluates how major advances in scientific understanding and technology have changed the direction or nature of scientific thinking'

■ Outline Michael Faraday's discovery of the generation of an electric current by a moving magnet

Why was this discovery a major advance in scientific understanding?

In 1819, Oersted had shown that an electric current produces a magnetic field. Faraday mapped the shape of this field in 1821, and in 1831 was finally able to induce a current in a conductor when the conductor was subjected to a changing magnetic field. Being able to induce a current using a change in a magnetic field was the missing link in the evidence that tied electricity and magnetism together, a phenomenon now known broadly as electromagnetism.

How did it change the direction or nature of scientific thinking?

In the years following his discovery of electromagnetic induction, Faraday changed his thinking on the nature of electricity. Rather than being a fluid, Faraday envisaged (wrongly) that electricity was a force that could be passed between particles of matter. In 1864, James Clerk Maxwell published his equations showing light to be a form of electromagnetism, and that it was the changing electric and magnetic fields that caused the propagation of light.

Evaluation of Faraday's discovery of electromagnetic induction

Faraday's discovery was extremely valuable. It led to further work and discoveries in the field of electromagnetism. It also caused a change in the understanding of the nature and behaviour of electricity and magnetism in general, enabling the development of electric generators, which subsequently led to the widespread use of electricity for lighting through the 1880s and beyond.

Useful websites

Details and Java animation of Faraday's experiment:
http://micro.magnet.fsu.edu/electromag/java/faraday/
The Royal Institution of Great Britain's resource on Faraday (and others):
http://www.rigb.org/rimain/heritage/faradaypage.jsp

Faraday's experiment on electromagnetic induction

[Figure 6.1: Faraday's early experiments on electromagnetic induction: primary and secondary coils wound on a wood block (later replaced by steel in a glass tube), with the secondary coil connected to a galvanometer (G) and the primary coil to a DC power source]

In Faraday's early experiments, he wound a copper wire around a piece of wood and connected the ends to a DC power source; this coil of wire was termed the primary coil. He then wound another copper wire in between the turns of the primary coil, as shown in Figure 6.1. The ends of the second wire were connected to a galvanometer, and this coil is referred to as the secondary coil. When he switched on the power source attached to the primary coil, the meter registered a very small and momentary reading of electricity, then dropped to zero. It remained at zero for the rest of the period while the current was on. When the current was switched off, the meter again registered a reading


then dropped to and remained at zero; however, this time, the needle of the meter swung in the opposite direction to the initial movement. The wooden block was later replaced by a glass tube with a steel needle inserted inside. A similar procedure was carried out and similar results were observed, except that the current registered was slightly bigger.

Later, in order to reinforce the concept of electromagnetic induction, Faraday conducted a different experiment using an iron ring. A primary copper coil was wound on one side of the ring and a secondary coil was wound on the other side, as shown in Figure 6.2. When the DC power source was switched on and then off, as in the previous experiment, a similar pattern of readings was observed in the galvanometer. However, this time, the current registered was much greater.

[Figure 6.2: Mutual induction experiment: a primary coil and a secondary coil wound on opposite sides of an iron ring]

Definition

This phenomenon, where the current in one circuit will induce a current in another circuit nearby, is called mutual induction.

The importance of a changing magnetic field to electromagnetic induction

Why was there a momentary flow of current in the secondary coil only when the DC power was switched on or off in the primary coil, but no current registered while the power source was left on? When the DC power source is switched on, the current does not reach its maximum value in the primary circuit instantaneously; instead, the current builds up to this value over a very short time. Hence there is a brief period of increasing current, and since the strength of the magnetic field is proportional to the current, this is also a momentary period of increasing magnetic field strength. This increasing, and therefore changing, magnetic field induces an EMF or current in the secondary coil.

Note: EMF stands for electromotive force. For the purposes of this chapter, it has the same meaning and unit as voltage. An EMF in a closed circuit will cause a current to flow.

However, the current quickly reaches its maximum value in the primary circuit and remains at this value as long as there is a supply of DC power. This is a period of constant current and hence constant magnetic field, and consequently no electricity will be induced. Therefore, after a brief moment, the reading on the meter drops to zero. When the DC power source is switched off, there is no longer a supply of voltage to the primary coil. However, the current takes a brief moment to drop back to zero. This decrease in current also produces a decrease in the strength of the magnetic field. Again, this changing magnetic field causes another momentary flow of electricity in the secondary coil, but in the opposite direction.
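To see this switch-on/switch-off behaviour numerically, the sketch below (not from the text) models the secondary EMF as proportional to the rate of change of the primary current, ε = −M ΔI/Δt; the mutual inductance M and the current profile are assumed, illustrative values.

# A minimal numerical sketch of Faraday's switch-on/switch-off observation.
# The mutual inductance M and the current profile are illustrative assumptions.
import numpy as np

M = 0.05                        # assumed mutual inductance (H)
t = np.linspace(0, 1.0, 1001)   # time (s)

# Primary current: ramps up at switch-on, holds steady, ramps down at switch-off.
I = np.interp(t, [0, 0.1, 0.6, 0.7, 1.0], [0, 2.0, 2.0, 0, 0])

# The EMF induced in the secondary coil tracks the rate of change of current.
emf = -M * np.gradient(I, t)

print("mean EMF during ramp-up  :", round(emf[t < 0.1].mean(), 3))                 # non-zero
print("mean EMF while steady    :", round(emf[(t > 0.2) & (t < 0.5)].mean(), 3))   # ~0
print("mean EMF during ramp-down:", round(emf[(t > 0.62) & (t < 0.68)].mean(), 3)) # opposite sign

The steady-current window gives essentially zero EMF, while the two ramps give EMFs of opposite sign, matching the two opposite swings of Faraday's galvanometer.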


Faraday’s electromagnetic induction experiment with moving magnets

There are many ways we can create a changing magnetic field. One common way is to have relative motion between the coil or conductor and the magnetic field (or the object creating the magnetic field). In one of Faraday's later experiments, he demonstrated the effect of moving magnets on inducing electricity in a coil. He wound a coil with many turns and connected the coil to a galvanometer. He then pushed a magnet towards the coil and observed a reading on the meter (see Fig. 6.3a). However, when he stopped moving the magnet, the reading of the meter dropped to zero. As he withdrew the magnet, the meter again registered a reading; however, this time, the needle swung to the other side, showing the induced electricity was flowing in the opposite direction (see Fig. 6.3b). When the magnet was pushed in and withdrawn more quickly, a similar pattern was observed on the meter, except the size of the induced EMF or current was much bigger.

The theory behind the experiment is as follows: as the magnet is pushed closer to the coil, the magnetic field strength experienced by the coil increases, hence this changing magnetic field will induce an EMF in the coil. However, when the magnet is not moving, there is no change in the magnetic field and no EMF is induced. Finally, when the magnet is withdrawn from the coil, there is again a changing magnetic field, thus an EMF will be induced. However, since now the magnetic field is decreasing in strength, the EMF induced is in the opposite direction. When the magnet is moved in and out more quickly, there is simply a greater rate of change of magnetic field; therefore, the induced EMF or current is bigger (see Faraday's law). The direction in which the induced electricity flows will be covered in detail in the section on Lenz's law.

[Figure 6.3: (a) When the magnet is pushed into the coil; (b) when the magnet is withdrawn from the coil: a bar magnet, a solenoid and a galvanometer]

6.2 Magnetic field lines, magnetic flux and magnetic flux density

■ Define magnetic field strength B as magnetic flux density
■ Describe the concept of magnetic flux in terms of magnetic flux density and surface area

Magnetic field lines As we saw in Chapter 5, magnetic fields can be represented by lines, called magnetic field lines, and we saw the magnetic field lines around a bar magnet and a solenoid.


Magnetic flux

Definition

Magnetic flux is defined as the number of magnetic field lines passing through an imaginary area. Mathematically, if the magnetic field is perpendicular to the area, then the magnetic flux is equal to the product of the strength of the magnetic field and the size of the area, as shown in Figure 6.4 (a):

ϕ = BA

where ϕ is the magnetic flux in webers (Wb), B is the magnetic field strength in tesla (T), and A is the area in m². However, if the magnetic field lines are not perpendicular to the area, then only the perpendicular component of the magnetic field is taken into consideration. In these cases:

ϕ = BA cos θ

where θ is the angle between the magnetic field lines and the normal to the area (see Figure 6.4b).

Note: θ is not the angle between the magnetic field line and the area.

[Figure 6.4: (a) Magnetic flux, field lines perpendicular to the area: θ = 0°, cos θ = 1, ϕ is a maximum. (b) Magnetic flux, field lines inclined to the plane: 0° < θ < 90°, cos θ < 1, ϕ is below the maximum. (c) Magnetic flux, field lines parallel to the area: θ = 90°, cos θ = 0, ϕ is a minimum (zero).]

chapter 6  electromagnetic induction

Analogy: The magnetic flux can be compared to the number of arrows that can be collected by a wooden shield, which of course depends on the number of arrows being fired (the magnetic field strength) and the area of the shield. It also depends on the angle at which the shield faces the arrows. If the shield is held perpendicular to the arrows (in which case θ is zero), then the number of arrows collected (the flux) is a maximum. If the shield is held at an angle to the arrows, then the number collected will be below the maximum. Finally, if the shield is held parallel to the arrows, then the shield cannot collect any arrows.

Example

A loop of wire is circular in shape, and has a radius of 4.0 cm. (a) If there is a magnetic field with a strength of 0.50 T passing through the loop perpendicular to its plane, calculate the magnetic flux for this loop. (b) If the loop is rotated through an angle of 30°, what is the new magnetic flux?

Solution

(a) ϕ = BA cos θ, with B = 0.50 T, A = π × 0.040² m², θ = 0°
    ϕ = 0.50 × π × 0.040² × cos 0° ≈ 2.51 × 10⁻³ Wb
(b) ϕ = BA cos θ, with B = 0.50 T, A = π × 0.040² m², θ = 30°
    ϕ = 0.50 × π × 0.040² × cos 30° ≈ 2.18 × 10⁻³ Wb
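The arithmetic can be checked with a few lines of Python (the values are those given in the question):

# Verifying the worked example: phi = B * A * cos(theta)
import math

B = 0.50                    # magnetic field strength (T)
A = math.pi * 0.040**2      # area of the circular loop (m^2), radius 4.0 cm

phi_a = B * A * math.cos(math.radians(0))    # (a) loop perpendicular to the field
phi_b = B * A * math.cos(math.radians(30))   # (b) loop rotated through 30 degrees

print(f"(a) phi = {phi_a:.2e} Wb")   # ~2.51e-03 Wb
print(f"(b) phi = {phi_b:.2e} Wb")   # ~2.18e-03 Wb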

Magnetic flux density

Since we know that ϕ = BA, if we rearrange this formula, then:

B = ϕ/A

According to this equation, we can define the magnetic field strength B in another way:

Definition

A magnetic field, B, can be defined as the amount of magnetic flux per unit area, or simply the magnetic flux density. It follows that if the unit for ϕ is the weber (Wb), and the unit for A is the metre squared, B must have the unit weber per metre squared (Wb m⁻²). Hence the unit of magnetic field can be either T (tesla) or Wb m⁻².

6.3 Faraday's law: a quantitative description of electromagnetic induction

■ Describe generated potential difference as the rate of change of magnetic flux through a circuit

Faraday's law states: The size of an induced EMF is directly proportional to the rate of change in magnetic flux.


Mathematically:

ε = Δϕ/Δt (this form is not required by the syllabus)

When there are n turns in the coil, the formula becomes:

ε = n Δϕ/Δt

where n is the number of turns, ϕ is the magnetic flux in webers, t is the time in seconds, and Δϕ/Δt is the rate of change in flux.

Consequently, we need to modify our earlier statement: instead of 'In order to induce electricity, a changing magnetic field is essential', we need: 'In order to induce an EMF, a changing magnetic flux is essential.'

Note: A changing magnetic field will result in a changing magnetic flux; however, there

are other ways to create changes in magnetic flux.

Factors that determine the size of the induced EMF

From the equation above, we can clearly see the factors that will affect the size of the induced EMF:
1. The size of the change in the magnetic field. As the size of the change in the magnetic field increases, the size of the induced EMF increases.
2. The speed of the relative motion between the magnetic field and the conductor. As the speed increases, the rate of change in flux increases, hence the size of the induced EMF increases.
3. The number of turns of the coil or conductors. Increasing the number of turns in the coil will increase the size of the induced EMF.
4. The change in area that the magnetic field passes through. The greater the change in area, the greater the change in the flux value, so the size of the induced EMF increases.

Note: Sometimes, for a single moving conductor, the idea of 'cutting' the field lines is used as the conductor moves through the magnetic field, to denote that the conductor is linking with the magnetic field and so there exists a changing magnetic flux. A maximum 'cut' by convention means a maximum change in flux.

The negative sign

It is more correct for Faraday's law of electromagnetic induction to have a negative sign, that is:

ε = –n Δϕ/Δt

The reason for the negative sign, and its significance for the results of electromagnetic induction, will be discussed in the section 'Lenz's law'.


Example

Consider a rectangular coil with dimensions 5.0 cm × 4.0 cm placed between the poles of two bar magnets that generate a field strength of 0.60 T. The coil has 200 turns, and is initially parallel to the field lines. If the coil is made to rotate anticlockwise to reach the vertical position in 0.010 second, calculate the EMF generated in the coil.

Solution

Note that when the coil is parallel to the magnetic field, the flux is a minimum (θ = 90°), whereas when the coil is at the vertical position, the flux is at its maximum (θ = 0°).

ε = –n Δϕ/Δt

n = 200
Δϕ = final ϕ – initial ϕ
   = 0.60 × 0.050 × 0.040 × cos 0° – 0.60 × 0.050 × 0.040 × cos 90°
   = 1.2 × 10⁻³ Wb
Δt = 0.010 s

ε = –(200 × 1.2 × 10⁻³)/0.010 = –24 V

Note: This type of calculation is not required by the syllabus. It is shown here to demonstrate the application of Faraday's law.
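The same result can be reproduced numerically (values as given in the example):

# Verifying the Faraday's law example: eps = -n * (delta phi) / (delta t)
import math

n = 200                  # number of turns
B = 0.60                 # magnetic field strength (T)
A = 0.050 * 0.040        # coil area (m^2)
dt = 0.010               # time taken to rotate (s)

phi_initial = B * A * math.cos(math.radians(90))   # coil parallel to field: zero flux
phi_final   = B * A * math.cos(math.radians(0))    # coil vertical: maximum flux

emf = -n * (phi_final - phi_initial) / dt
print(f"Induced EMF = {emf:.0f} V")                # -24 V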

6.4 Lenz's law

■ Account for Lenz's law in terms of conservation of energy and relate it to the production of back EMF in motors

SR Simulation: Faraday's law

Lenz's law states:

Whenever an EMF is induced in a conductor as a result of a changing magnetic flux, the direction of the induced EMF will be such that the current it produces will give rise to a magnetic field that always opposes the change, and hence opposes the cause of induction.

Therefore, the negative sign in Faraday's law assigns the direction of the induced EMF: it opposes the cause of induction.

Example 1

Determine the direction of the induced EMF and hence the current in the coil for the following situations:


[Figure 6.5: Lenz's law examples. (a) A bar magnet's north pole moving towards a solenoid; (b) a bar magnet's south pole moving away from a solenoid]

Solutions

(a) The magnet is moving relative to the coil, hence there is a changing magnetic field, therefore induction. The induced EMF or current will flow to oppose the cause of induction, that is, the north pole moving towards the coil. To oppose this approaching north pole, the current will flow in a direction that will result in a north pole on the left hand side of the coil. Note: North and north repel.

Use the right-hand grip rule: the thumb points to the left, therefore the current flows into the coil on the right and out on the left. (b) To oppose the receding south pole, a north pole needs to be created on the left hand side of the coil. Note: To attract the south pole back.

Hence, using the right-hand grip rule, the current flows into the coil on the right and out on the left.

Example 2

A circular loop of wire is situated in a magnetic field as shown in the diagram below:

[Figure 6.6: Lenz's law example 2: a circular loop of wire in a magnetic field directed into the page]

(a) At this instant, will there be a current flowing in the loop?
(b) If the magnetic field increases in strength in the direction shown in the diagram, what will happen in the loop?

Solutions

(a) No current will be flowing in the loop, since there is no change of magnetic field or flux. (b) There will be a current flowing in the loop. The direction of current is anti-clockwise. To oppose an increase in the strength of the magnetic field


into the page, the induced current must flow in such a way as to produce a magnetic field out of the page. This means that the current should produce a north pole pointing out of the page. By applying the right-hand grip rule, the thumb points out of the page and the fingers curl anti-clockwise; hence, the current flows anti-clockwise.

Example 3

Determine the direction of the induced current in a moving wire for the following cases (assume the circuit is completed externally):

[Figure 6.7 (a): four cases, (i) to (iv), each showing a wire moving with velocity v through a magnetic field]

Solution

Since the cause of the changing magnetic flux is the velocity of the wire (the relative motion between the wire and the magnetic field), the induced current will flow in such a way that it will oppose this velocity. Hence the force created by the induced current will have the opposite direction to that of the velocity. Note: The force is trying to stop the movement.

Hence by applying the ‘right-hand palm rule’, if we point the palm (the force) in the opposite direction to the velocity of the wire, and align the fingers with the magnetic field, the thumb will point to the direction of the induced current. Note: Also note that for all the cases, the conductor is ‘cutting’ the magnetic field lines maximally (perpendicularly), so the change in flux is maximal. This results in a maximum induced EMF or current.

Therefore:

[Figure 6.7 (b): the four cases of Figure 6.7 (a), with the direction of the induced current I marked on each wire]


6.5 Lenz's law and the conservation of energy

[Figure 6.8: Lenz's law and the conservation of energy: a wire moving with velocity v through a magnetic field B directed into the page]

Consider a wire that is moving through a magnetic field directed into the page in a vacuum. The wire initially has a constant velocity v. As the wire moves through the magnetic field, it experiences a changing magnetic flux. (The wire, as it moves, carves out a changing area that links with the magnetic field; therefore the magnetic flux changes.) Hence there will be an EMF induced, and if we assume there is a complete external circuit, a current will flow. Suppose the current did not flow in the direction that opposes the cause of induction (the velocity of the wire, as stated by Lenz's law), but rather flowed in the opposite direction. Then the wire would speed up rather than slow down. This would in turn create a greater changing magnetic flux, and in turn induce a greater EMF or current. This induced current would further speed up the wire, inducing an even greater EMF or current. As this cycle went on, eventually the wire would be moving at an extremely high speed (limited only by c), and there would be an enormous current flowing in the wire. Since no energy input is required in this case, energy is being created! This of course cannot happen, since the law of conservation of energy states: Energy cannot be destroyed or created, it can only be transformed or transferred. Hence, to obey the law of conservation of energy, the current must flow to oppose the cause. Therefore, Lenz's law is just an example of the conservation of energy.

It follows that as the wire moves through the magnetic field, current will flow, the motion of the wire will be opposed due to Lenz's law, and if there is no external force to move it, it will slow down. As the wire slows down and eventually comes to rest, the induced current will reduce in size and eventually drop to zero. Hence, to maintain a constant production of current, a force must be applied to maintain the motion of the wire. Thus work, or mechanical energy, is applied and converted into electrical energy through electromagnetic induction; no energy is created or destroyed! This forms the basis of electric generators, which will be discussed in Chapter 7. Remember, there is no free lunch!

6.6 The need for external circuits

[Figure 6.9: A conductor moving through a magnetic field: a wire with ends A and B moving with velocity v through a field B directed downwards]

Again, consider a wire with an initial velocity v that is moving through a magnetic field directed downwards, with ends A and B (see Figure 6.9). Since the wire is moving through the magnetic field, a changing flux will be experienced and an EMF will be induced. By applying the right-hand palm rule, one can deduce that the induced EMF will cause a current to flow into the page (towards A). Although conventional current may appear to be a stream of positive particles moving towards A, in fact the current is created by electrons flowing in the opposite direction; hence, we know that the electrons will move towards B. The migration of the electrons will make terminal B negative, and the lack of electrons will make terminal A positive. This build-up of charge will act to resist further flow of electrons. Momentarily later, the electrostatic force will be in equilibrium with the induced EMF and the migration of the charges will stop. Current stops flowing even though the EMF is still being induced by the motion of the wire.


Analogy: A battery sitting by itself has an EMF or voltage, but will not have any current flow.

However, when the terminals A and B are connected via a long wire outside the magnetic field (the external circuit), the circuit will be completed. Electrons will be able to move out from terminal B and back to fill the electron deficiency at A. Consequently, a continuous flow of current is established. Also note that through the external circuit, current flows from A to B, that is, from the positive terminal to the negative. Also, it is very important to note that at least part of the external circuit must be outside the magnetic field. Otherwise, it will be no different to the isolated wire without an external circuit. To conclude, even if there is an EMF induced, if there is no external circuit, there will only be a momentary flow of current that will then stop.
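As a concrete set of numbers (all values assumed for illustration), the sketch below uses the standard motional-EMF result ε = Blv, which this section relies on implicitly but does not state:

# A minimal sketch of a wire moving through a field with an external circuit.
# All values are assumed; eps = B*l*v is the standard motional-EMF result,
# not a formula quoted in this chapter.
B = 0.40    # field strength (T), assumed
l = 0.25    # length of wire inside the field (m), assumed
v = 2.0     # speed of the wire (m/s), assumed
R = 5.0     # total resistance of the completed external circuit (ohms), assumed

emf = B * l * v      # EMF induced while the wire keeps moving
I = emf / R          # steady current once the external circuit is closed

print(f"EMF = {emf:.2f} V, current = {I:.3f} A")
# Without an external circuit the current is zero after the brief initial
# charge separation, even though the EMF is still being induced.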

Example 1

Consider the situation shown in Figure 6.9. If there is no external circuit connecting terminals A and B, describe the subsequent motion of the wire. Solution

As the wire moves through the magnetic field, an EMF will be induced. This will cause a current to flow from B to A. By applying the right-hand palm rule, the force acting on the wire will be to the left. This will result in a deceleration of the wire and the wire will slow down. However, due to the lack of an external circuit, a moment later the current will stop flowing. The deceleration force will disappear, and the subsequent motion of the wire is that it will continue moving at a constant velocity to the right after a brief moment of deceleration.

Note: Conventional current flows from the positive terminal to the negative terminal in an external circuit, but from negative to positive inside the power source, such as within the wire in Figure 6.9.

Example 2

A rectangular loop of wire is moving through a magnetic field as shown below:

[Figure 6.10: A rectangular loop moving through a magnetic field directed into the page, shown at positions A, B and C as it moves across the field region]

Describe the flow of current in the loop when the loop is at positions A, B and C.


Solution

First, we have to note that at all three positions, the top and bottom portions of the rectangular loop never 'cut' the magnetic field. Hence they play no role in the electromagnetic induction and can be neglected. At position A: the right portion of the loop is subject to a changing magnetic field (cutting the field lines). We point the palm to the left, opposite to the direction of motion of the coil, with the fingers into the page, and the current (thumb) is up. Hence the current flows anticlockwise. At position B: a similar approach can be used. Both the left and right portions have current flowing upwards; therefore they cancel each other and no current will flow in the loop.

Note: This is what happens if the external circuit is inside the magnetic field.

At position C: The left portion is subject to a changing magnetic field. Similarly, we can determine that the current is up. Hence the current flows clockwise in the loop.

6.7 Another application of Lenz's law: back EMF in DC motors

■ Explain that, in electric motors, the back EMF opposes the supply EMF

When the coil of a DC motor is spinning inside the magnetic field, at the same time, the coil is subject to a changing magnetic field caused by the relative motion between the coil and the field. This changing magnetic flux will induce an EMF in the coil. As a consequence of Lenz’s law, the induced EMF will cause the current to flow in such a way that it will oppose the cause of induction—the rotation of the coil. Hence the induced current will flow in the opposite direction to the input current, thus limiting the size of the input or forward current. This will decrease the torque, slowing down the rotation of the motor. Since the induced EMF works against the input voltage, it is referred to as the back EMF. When a DC motor is switched on, its rotational motion does not reach the maximum instantaneously due to its inertia; instead it starts off from rest and builds up its speed to the maximum quite rapidly. When the rotational speed is low, the coil is subject to a small rate of change in magnetic flux, so the back EMF induced is also small. Small back EMF results in almost no opposition to the forward EMF or current, which is useful in creating a large torque to accelerate the rotation. The drawback is that at this time, the large current may burn out the coil. To partially limit the size of the current, a device called a starting resistance may be employed. This is a resistive load that is connected in series with the coil when the motor is starting off. By increasing the resistance, it is able to reduce the forward current, thus protecting the coil from burning out. As the rotational speed of the coil increases, the rate of change in magnetic flux also increases, so the size of the back EMF increases. This limits the forward EMF so


that the starting resistance can be removed. Soon after, the back EMF will achieve equilibrium with the forward EMF. At this stage, the motor is rotating at its working speed and will maintain a constant velocity thereafter, since the back EMF limits the forward EMF so that the net current produces torque just large enough to balance the friction and the load on the motor.
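A rough numerical model (all values assumed) shows how the net current falls as the back EMF grows with speed, using I = (supply EMF − back EMF)/R:

# A toy model of back EMF in a DC motor; all constants are assumed.
V_supply = 12.0    # supply voltage (V)
R = 0.50           # coil resistance (ohms)
k = 0.010          # assumed back-EMF constant (V per rpm)

for rpm in (0, 300, 600, 900, 1150):
    back_emf = k * rpm
    current = (V_supply - back_emf) / R
    print(f"{rpm:4d} rpm: back EMF = {back_emf:5.2f} V, forward current = {current:5.1f} A")
# At rest the current is largest (24 A here), which is why a starting
# resistance may be used; near working speed the back EMF nearly
# balances the supply EMF and the current is small.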

6.8 Eddy currents

■ Explain the production of eddy currents in terms of Lenz's law

As we have discussed, whenever there is a changing magnetic field or magnetic flux, an EMF will be induced. The EMF will cause a current to flow in a wire provided there is an external circuit. Quite differently, in the case of a solid conductor, the EMF will cause loops of current to flow, and these circular currents are referred to as eddy currents ('eddy' meaning circle or loop). Eddy currents flow in solid conductors.


The induced eddy currents also follow Lenz’s law, that is, they circulate in such a way as to oppose the cause of induction. This principle is illustrated in Figure 6.11.

[Figure 6.11 (a): A spinning aluminium disk, free to rotate on a support]

Note: Eddy currents are not ‘new’ things, they are simply circulating currents.

As shown in Figure 6.11, if a freely rotatable aluminium disk is spun, it will rotate for quite a while before coming to rest. However, when a horseshoe magnet is brought next to it, the disk spins for a much shorter time and comes to rest much more quickly. The explanation for this phenomenon is that as the horseshoe magnet is brought next to the rotating disk, the disk is made to spin inside the magnetic field, so the disk experiences a changing magnetic field. Consequently, eddy currents are induced in the disk. These eddy currents follow Lenz's law and circulate so as to generate a magnetic field that opposes the cause of induction, that is, the spinning motion of the disk. Hence, the disk is brought to rest very promptly. Consider what will happen to this disk if slots are cut in it, or if it is replaced by a plastic disk. The answer is simple: the disk will spin as fast, and for as long, as it would without the horseshoe magnet. This is because the slots, or the plastic itself (being an insulator), impede the flow of eddy currents, so the effect of the eddy currents is minimised.

[Figure 6.11 (b): The spinning aluminium disk with a horseshoe magnet brought next to it]

Note: The detailed flow pattern of eddy currents is not required by the syllabus.


Applications of induction and eddy currents

secondary source investigation
PFAs: H3
Physics skills: H13.1a, b, c, e; H14.1d, g, h; H14.3d

■ Gather, analyse and present information to explain how induction is used in cooktops in electric ranges

Induction cooktop

Induction cooktops use a changing magnetic field to generate eddy currents in order to produce the heat that cooks the food. The schematic diagram of an induction cooktop is shown in Figure 6.12. The functional principle of the induction cooktop is as follows. When an AC current flows through the coil, the changing current produces a changing magnetic field. This magnetic field passes through the ceramic cooktop and the saucepan or any other metal container. Within the saucepan's base, eddy currents are generated. The circulation of eddy currents in the presence of the resistance of the saucepan generates heat. This heat is used to cook the food content.

Note: A current flowing through any conductor that has resistance will generate heat.

The advantages of this type of cooktop over others are:
■ They are very efficient in converting electrical energy to heat energy compared with normal electric cooktops that rely on conduction, as the source of heat is in direct contact with the food being cooked.
■ There is no open fire, so the possibility of a fire hazard in the kitchen is reduced (unlike gas cooking).
■ The ceramic cooktop is very easy to clean, so it improves hygiene.
■ The cooktop itself does not generate heat, so burns to children are less likely.

[Figure 6.12: Induction cooktop. An AC power source drives a coil beneath the ceramic cooking top; the changing magnetic field induces eddy currents in the base of the saucepan, heating the food content]

[Photo: Induction cooktops only heat up in contact with metal. Notice how the ice cubes have barely melted.]

Eddy current braking

secondary source investigation
PFAs: H3
Physics skills: H13.1a, b, c, e; H14.1d, g, h; H14.3d

■ Gather secondary information to identify how eddy currents have been utilised in electromagnetic braking

As discussed, a rotating disk in the presence of a magnetic field will be brought to rest much more promptly than without the magnetic field. This principle is used in braking. Using a train as an example: when the train needs to stop, very powerful magnets are lowered next to its metal wheels. The rotating wheels in the presence of the magnetic field slow down very rapidly due to the production of eddy currents within the wheels that oppose the motion of the wheels, as described above. It is important to note that the braking effect reduces as the speed of the train decreases. This is because the size of the eddy currents, and therefore of the force that opposes the motion of the wheels, is proportional to the rate of change in magnetic flux (Faraday's law), which in turn depends on the relative motion between the wheels and the external magnets. Hence, when the train reaches very low speeds, eddy current braking is no longer useful, and at that point a mechanical brake is applied to stop the train completely.

The advantages of this type of braking are:
■ It is smooth: the braking force is greatest when the train is at its operating speed, but gradually becomes smaller, to an unnoticeable force, as the train slows down.
■ There is no wear and tear, as there is no physical contact between the braking system and the wheels. Consequently, maintenance effort and cost are low.

The main problem with this system is that it only works for metal wheels and at higher speeds.
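The 'strong at speed, weak near rest' behaviour can be made concrete with a toy model (assumed constants) in which the retarding force is proportional to speed, so the speed decays exponentially:

# A toy model of eddy-current braking: dv/dt = -k*v gives v(t) = v0*exp(-k*t).
# The braking constant and initial speed are assumed, illustrative values.
import math

k = 0.5      # assumed braking constant (1/s)
v0 = 30.0    # assumed initial speed (m/s)

for t in range(0, 13, 2):
    v = v0 * math.exp(-k * t)
    print(f"t = {t:2d} s: v = {v:5.2f} m/s, braking force ~ {k * v:5.2f} (arbitrary units)")
# Below a few m/s the eddy-current force is negligible, which is why a
# mechanical brake is needed to finish the stop.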

Electromagnetic induction

first-hand investigation
PFAs: H1, H3
Physics skills: H12.1a, d; H12.2b; H12.3d; H13.1e; H14.1a, d, g
TR: Risk assessment matrix

■ Perform an investigation to model the generation of an electric current by moving a magnet in a coil or a coil near a magnet

An interesting class activity can be used prior to commencing this investigation, as follows:
■ Connect the ends of a long (approximately 40 m) wire to the terminals of a galvanometer.
■ Run a section (about 18 m) of this wire in an east–west direction.
■ Have two students spin this east–west section of wire like a skipping rope, while other students observe the galvanometer.
■ When the wire moves up, the galvanometer moves in one direction, and then in the opposite direction as the wire moves down.
■ The amount of deflection of the galvanometer can be increased by increasing the speed of the 'skipping rope'.

Students should acknowledge that, as the resistance of the wire is constant, the current shown by the galvanometer is proportional to the EMF generated in the wire as it moves through Earth's magnetic field (Ohm's law).


The effects of magnets on electric currents

first-hand investigation
Physics skills: H11.1a, b, e; H11.2a, b, c, e; H11.3a, b; H12.1a, d; H12.2b; H13.1c; H14.1a, d, g; H14.3c
TR: Risk assessment matrix

■ Plan, choose equipment or resources for, and perform a first-hand investigation to predict and verify the effect on a generated electric current when:
  – the distance between the coil and magnet is varied
  – the strength of the magnet is varied
  – the relative motion between the coil and the magnet is varied

Note: This procedure requires you to plan the experiment, including selecting appropriate equipment. A discussion of what apparatus is available would be beneficial before commencing this investigation.

Topics

1. Demonstrate the production of electricity by using a moving magnet and coil.
2. Demonstrate the effect of the following on the generation of electricity using a moving magnet and coil:
   (a) the distance between the magnet and coil is changed
   (b) the strength of the magnet is changed
   (c) the relative motion between the coil and magnet is changed

Aim

A suitable aim for the investigation should be written, using the syllabus points as a guide. (The syllabus point actually provides the aim in this case.)

Hypothesis

An educated prediction of what will happen in each of the situations should be made using the theory of electromagnetic induction.

Apparatus

A straightforward list of required equipment should include a sensitive galvanometer or microammeter, as the generated current is quite small. The strongest magnets available should be used.

Method

[Photo: A lab shot of the apparatus]

Clear, step-wise instructions should be written that will enable another reader to reproduce the experiment as it was performed. Diagrams should be drawn to illustrate what was done. The set-up of both experiments may follow the one shown in Figure 6.3. It is important to emphasise that when doing the second experiment, only one variable can be changed at a time, and the others need to be kept as controls. This idea is demonstrated in the chapter revision questions.

Results and observations

This is where the outcome of the above method is recorded carefully. In this experiment, qualitative results are sufficient; that is, the actual current generated does not matter, but the effect on the current does.


Discussion

Discuss the reliability (repeatability) of the experiment and the validity (correctness) of the results. Ways to improve the experiment should be noted here too.

Conclusion

A brief summing up of the experiment's success or otherwise is made here.

chapter revision questions

  1. Outline the procedure of Faraday's mutual induction experiment, and describe the findings and theory of this experiment.

  2. Two identical solenoids are placed next to each other as shown in the figure below.

[Diagram: a primary coil connected to a power source, placed next to a secondary coil connected to a galvanometer]

(a) Explain why, when a DC current is switched on in the primary coil, there is a flickering movement of the needle of the galvanometer connected to the secondary coil.
(b) What manoeuvre can be used so that a current is always registered by the galvanometer in the secondary coil?
(c) Explain why the flickering becomes much larger when a soft iron core is inserted inside the secondary coil.

  3. A circular loop of coil is placed inside a magnetic field as shown in the figure below. If the coil is now stretched to form a square, will there be an EMF induced, and if so, what is its direction? Explain your answer.

  4. A bar magnet is moved from point A towards a solenoid, to point B, at a constant speed (V).
(a) Describe what will occur in the solenoid.
Describe the change to the size of the induced EMF if:

[Diagram: a bar magnet moving with speed V towards a solenoid, past points A, B and C]

(b) The magnet is moved from A to B at a doubled speed, that is, 2V.
(c) The magnet is moved from A to C at V, such that AC is twice as long as AB.
(d) The magnet is changed to one which is twice as weak.
(e) The south pole faces the solenoid.
(f) The number of turns of the solenoid is increased by 10 times.

  5. Use Faraday's law and Lenz's law to determine the direction of the induced current for the following conductors (the thicker lines):

[Diagrams (a) to (l): each shows a conductor (the thicker line) in a magnetic field B. Recoverable labels: in (a), (b), (d), (e), (f) and (h) the conductor moves with velocity V; in (c) the conductor moves into the page; in (g) the magnetic field strength increases; in (i), (j) and (k) the current in a nearby wire increases; in (l) the magnetic field strength increases. One case is marked 'No EMF induced'.]

  6. Lenz's law is an extension of the law of conservation of energy. Discuss this statement.

  7. A small magnet is projected horizontally towards a solenoid at a constant velocity. Assume there is no friction. Describe the motion of the magnet when:
(a) the ends of the solenoid are not connected.
(b) the ends of the solenoid are connected via a long wire.
(c) the solenoid has its ends connected and a soft iron core is placed inside the solenoid.

  8. (a) 'Back EMF limits the speed of electric motors.' Discuss the principle behind this statement.
(b) Using the idea of back EMF, explain why an electric drill, when jammed in a wall, is very likely to burn out if it is not switched off immediately.

  9. A small magnet is dropped vertically through an infinitely long hollow copper tube.
(a) Describe what will occur in the copper tube as the magnet falls.
(b) Sketch a graph to describe the change in the velocity of the magnet as a function of time.
(c) If the copper tube has numerous holes in its wall, will this change the answer to part (b)?

10. Eddy currents can be beneficial sometimes, whereas at other times they have adverse effects.
(a) Describe two situations where eddy currents are beneficial.
(b) Describe one situation where eddy currents are a nuisance to the system.

11. What consideration(s) should be made when choosing cooking ware to be used with an induction cooktop?

12. Describe the principle of eddy current braking and evaluate its advantages.

13. Design an experiment that demonstrates how the strength of the magnetic field will influence the size of the induced EMF.

Answers to chapter revision questions


CHAPTER 7

Generators

Generators are used to provide large-scale power production

7.1 Generators

■ Describe the main components of a generator
■ Compare the structure and function of a generator to an electric motor

So far we have discussed the principle of electromagnetic induction, by which mechanical energy can be converted to electricity via a changing magnetic flux. In this chapter, we will look closely at a specialised device that does this: the generator.

Definition

An electric generator is a device that converts mechanical energy to electrical energy using the principle of electromagnetic induction.

Generators are the key functional unit of power stations. In power stations, various energy sources, such as fossil fuels, are used as the source of energy for the generators to produce electricity. To describe a generator in very simple terms, we can say it has essentially the same design as an electric motor; however, it functions in the opposite way to a motor. Just as with motors, there are DC and AC electric generators. A schematic diagram of a simple DC generator is shown in Figure 7.1.

[Figure 7.1: A simple DC generator: a coil abcd, rotated by a turbine or handle inside a magnetic field, connected through a split ring commutator and brushes]


Note: In the context of this chapter, DC generators do not include galvanic cells (batteries).

The main functional components of an electric generator

The main components of a generator are:
■ the magnetic field
■ an armature
■ a commutator
■ the carbon brushes

[Photo: A generator at a power station]

Magnetic field

As noted before, a changing magnetic flux is essential for the generation of electricity, hence a magnetic field is essential for the function of a generator. The magnetic field can be provided by either permanent ferromagnets or electromagnets, as in motors. The changing magnetic flux is usually created by relative motion between the magnetic field and the coil. The magnets are usually the stator in simple generators such as those described in this chapter, but in industrial generators they are often the rotor. The relative motion, for instance rotating the coil inside a magnetic field, is often achieved by the turning motion of a turbine, which is in turn powered by steam generated by the burning of fossil fuels, or by hydropower or wind.

[Photo: Magnets]

Armature

An armature refers to a coil of wire wound around a soft iron core. It can be the stator or the rotor depending on the design of the generator. Multiple-coil armatures are common for large-scale electric generators.

[Photo: An armature]

Commutators and brushes


Commutator

The structure of commutators in generators is similar to that in motors. As in motors, DC generators have split ring commutators, whereas AC generators have slip ring commutators.

Carbon brushes

The carbon brushes in generators have the same structures, and are used for the same purpose as in electric motors. Refer to Chapter 5.

7.2 Magnetic flux, changing flux and induced EMF

A few rules must be clarified before we move on. Figures 7.2 and 7.3 illustrate different positions of the coil in a DC generator rotating clockwise inside the magnetic field. In Figure 7.2, the coil is at a position that is parallel to the magnetic field. At this position the magnetic flux linked with the coil is zero, as we have discussed before. However, it is very important to note that sides 'ad' and 'bc' of the coil are 'cutting' the magnetic field lines perpendicularly, that is, a maximum cut, so it follows that the changing flux at this position is the greatest. Consequently, the EMF induced when the coil is at this position is the greatest.

Note: Recall that a maximal 'cutting' of the magnetic field by convention means a maximum change in flux.

Note: Sides 'ab' and 'dc' will not cut the magnetic field lines at any position during the course of rotation; therefore, they do not participate in electromagnetic induction.

[Figure 7.2: When the coil is parallel to the magnetic field, sides ad and bc cut the flux at 90°]

[Figure 7.3: When the coil is perpendicular to the magnetic field, sides ad and bc are not cutting the flux]

As the coil rotates, the amount of magnetic flux through the coil gradually increases, while the change of magnetic flux (the 'cutting'), and hence the size of the induced EMF, gradually decreases. Eventually, when the coil reaches the vertical position, as shown in Figure 7.3, the magnetic flux linked with the coil has increased to a maximum. However, at this position, sides 'ad' and 'bc' are moving parallel to the magnetic field lines, so there is no cutting of the field lines. Under this circumstance, the change in magnetic flux is zero; therefore, the size of the induced EMF drops to zero.

7.3 The difference between a DC generator and an AC generator

■ Describe the differences between AC and DC generators

The major difference between a DC generator and an AC generator is the commutator used.

DC generator and split ring commutator

To understand the role and function of a split ring commutator in a DC generator, we need to look at it in conjunction with the nature of DC, as well as the way electricity is induced in the generator. Direct current (DC) is, by definition, current that flows in one direction only. Consider the generator in Figure 7.4. The coil is at a position that is parallel to the magnetic field and is rotating clockwise between the poles of the permanent magnets. As explained before, at this position the EMF generated is a maximum. The direction can be determined by using the 'right-hand palm rule'.

[Figure 7.4: A DC generator with a split ring commutator: coil abcd rotating in the field, with commutator halves A and B contacting brushes A and B, connected to an external circuit]

For example, for

side ‘ad’, which is moving up, the palm of the right hand pushes down (Lenz’s law), fingers to the right. The thumb will point from d to a, hence the current flows from d to a. Similarly, the current flows from b to c, consequently the current flows clockwise. Hence the commutator A and the brush A will be negative and commutator B and brush B will be positive. Note: Conventional current leaves at the positive terminal.

[Figure 7.5: DC generator, coil at the vertical position]

The size of the induced EMF decreases gradually and reaches zero when the coil is at the vertical position. This is shown in Figures 7.3 and 7.5. However, as soon as the coil goes past this point, the induced EMF starts to increase again, as the coil now starts to cut the field lines again. As shown in Figure 7.6, by applying the 'right-hand palm rule', the current is now flowing from a to d on side 'ad' and from b to c on side 'bc'. More importantly, commutator A is now positive and commutator B is negative. In other words, the polarity of the commutator has reversed. The importance of the split ring is that it allows the positive commutator A to contact brush B and the negative commutator B to contact brush A, so that brushes A and B retain the same polarity at all times; hence the same current flows in the external circuit even when the polarity of commutators A and B reverses. This makes the current a DC.

To summarise, the role of a split ring commutator in a DC generator is to allow each half of the commutator to contact a different brush every half rotation, at the vertical positions. This ensures that as soon as the polarity of each half-commutator reverses at the vertical positions, its contact with the brushes is also swapped, so the brushes always maintain the same polarity and the output current is maintained in one direction.


[Figure 7.6: DC generator, coil at an inclined position: the commutator polarities have reversed (A now positive, B negative), with the current direction shown through the external circuit]

The EMF generated by a DC generator

To summarise all of the above, we can plot a graph of the EMF output of a DC generator against time, or against the position of the coil, as it rotates through the magnetic field over one complete revolution (see Figure 7.7).

[Figure 7.7: DC output: the EMF varies between zero and a maximum twice per revolution but never changes sign]

Note: The DC generated is somewhat different from that generated by a galvanic cell (battery), which produces a steady EMF.

AC generator and slip ring commutator

The structure of a slip ring commutator is shown in Figure 7.8. Its purpose and functional principle are simple: it is only a device to conduct electricity into and from the external circuit without tangling up the wires. As the coil rotates inside the magnetic field, the polarity of the two parts of the slip ring commutator reverses every half cycle at the vertical positions, just as in a DC generator (due to the reversal of the current direction in the coil).

[Figure 7.8: The structure of a slip ring commutator: slip rings contacted by spring-loaded carbon brushes connected to the external circuit]


[Figure 7.9: AC output: the EMF alternates between positive and negative peaks as the coil positions a, b, c, d cycle through one complete revolution]

However, there is now no means by which the slip rings can change their contact with the brushes, so the polarity of each brush will also change every half cycle. This results in the generated current constantly reversing direction, hence the name alternating current, or AC. To summarise the production of AC, we can also plot a graph of the EMF output of an AC generator against the position of the coil as it rotates through the magnetic field over one complete revolution, as shown in Figure 7.9.

It is important to point out that for both the DC and AC outputs of their respective generators, the period of the wave (and so the frequency), as well as the amplitude of the EMF, depends on the speed of rotation. The period of the EMF varies inversely with the rotational speed, whereas the frequency and amplitude are directly proportional to the speed of rotation.
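As a numerical check of these proportionalities, the sketch below uses the standard result for a coil rotating at angular speed ω, ε(t) = nBAω sin(ωt) (not derived in this chapter); the coil values are assumed:

# Peak EMF and period of an AC generator output versus rotation speed.
# Coil values are assumed for illustration.
import math

n, B, A = 100, 0.50, 0.01    # turns, field strength (T), coil area (m^2)

for revs_per_s in (50, 100):
    omega = 2 * math.pi * revs_per_s      # angular speed (rad/s)
    peak = n * B * A * omega              # peak EMF (V)
    period = 1.0 / revs_per_s             # period of the output (s)
    print(f"{revs_per_s:3d} rev/s: peak EMF = {peak:6.1f} V, period = {period * 1000:.0f} ms")
# Doubling the rotation speed halves the period and doubles both the
# frequency and the amplitude of the EMF.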

Three-phase AC generator

Compared to simple AC generators, large-scale AC generators are often slightly modified in terms of their structure and functional principles. At power stations and in industry, three-phase AC generators are commonly used. The structure of a three-phase AC generator is shown by a schematic drawing in Figure 7.10. It consists of three stationary coils (the stator) situated at 120° to each other, and a magnet that is made to rotate at the centre as the rotor. Each stationary coil has its own power output. Consequently, the net output of this generator is three-phase AC, where the AC waves are out of phase by 120°, as shown in Figure 7.11. The electricity produced by a three-phase AC generator is carried to consumers by three separate active wires. The electricity is returned to the generator by a single wire to complete the circuit. At a domestic level, usually only one active wire is used; however, in many industries, all three active wires may be used.

[Figure 7.10: A schematic drawing of a three-phase AC generator: a magnet rotating about a pivot at the centre of three stator coils (outputs 1, 2 and 3), each at 120° to the others]


[Figure 7.11: Three-phase AC current. All current waves are identical but are out of phase by 120°]
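A short sketch of the three outputs (the frequency and amplitude below are assumed, normalised values):

# Three-phase output: three identical sine waves, 120 degrees out of phase.
import math

f = 50.0      # assumed frequency (Hz)
peak = 1.0    # normalised amplitude

def phase_emf(t, phase_deg):
    return peak * math.sin(2 * math.pi * f * t - math.radians(phase_deg))

t = 0.002     # a sample instant (s)
for k, phase in enumerate((0, 120, 240), start=1):
    print(f"Output {k}: EMF = {phase_emf(t, phase):+.3f}")
# At every instant the three EMFs sum to zero, which is why a single
# shared return wire can complete all three circuits.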


7.4 The transmission wires

Huge generators that provide electricity for a very large region are usually situated far away from the actual sites where the electricity is used. Hence, electricity has to be distributed via long transmission wires. These wires run overhead over very long distances and are supported at intervals by metal towers. Besides the cost of setting up such a distribution system, there are other issues affecting the electrical distribution system.

Insulating and protecting

secondary source investigation
Physics skills: H13.1a, b, c; H14.1b, g, h

■ Gather and analyse information to identify how transmission lines are:
  – insulated from supporting structures
  – protected from lightning strikes

Protection from lightning

When transmission wires are struck by lightning there is the risk of the system being damaged or overloaded and shutting down, as well as damage to infrastructure such as transformers, power poles and wires. To protect transmission wires from lightning, another wire usually runs above and parallel to them, connected to earth. This wire does not normally carry current, but when lightning strikes, it is hit first and the huge current of the lightning is diverted through it to earth, leaving the transmission wires untouched.

Insulation of transmission wires from supporting towers

Overhead transmission wires are usually bare. Therefore, if they make contact with the supporting metal towers, two things will happen:
1. The metal towers will become live, so anything that makes contact with the towers will experience an electric shock.
2. The wires will short circuit and disrupt the electricity distribution.
Neither of these is desirable, so the wires must be well insulated from the metal towers. To do so, the wires are suspended from the towers by insulators that consist of stacks of disks made from ceramic or porcelain. Porcelain is chosen because it is strong and retains its insulating properties even under a very high voltage. The disk shape of the insulators also increases the effectiveness of the insulation by minimising the chance of a spark jumping across the gap.

[Photos: Transmission wires; the porcelain disks]


Energy lost during transmission

■ Discuss the energy losses that occur as energy is fed through transmission lines from the generator to the consumer

Transmission of electricity is not 100% energy efficient; rather, energy is lost during transmission from the power station to consumers. The energy lost is mainly in the form of heat. This is because as a current flows through a conductor that has resistance, heat is dissipated. Since the resistance of a conductor is proportional to its length, wires that run over a very long distance have a significant amount of resistance, so the heat dissipated (energy lost) is also quite substantial. The power lost during transmission can be quantitatively described using the equation:

P = I²R

where:
P = power lost as heat during transmission, measured in W
I = the current flowing through the wire, measured in A
R = the total resistance of the wire, measured in Ω

This equation can be derived by combining the power equation P = IV and Ohm's law V = IR.

In addition to the heat lost in the transmission wires, the transformers used to change the voltage during transmission are less than 100% efficient. Approximately 6%, or 600 MW, of the power generated in New South Wales is lost as heat due to the resistance in the wires and transformers. With more than 500 km of wires between the Snowy Mountains hydro-electric power stations and the end users in Sydney, such losses are unavoidable. Much less energy is lost in AC transmission than in DC transmission; the reasons for this are discussed in Chapter 8.
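As a rough numerical illustration of P = I²R (a sketch only; the power, voltages and wire resistance below are assumed round figures, not data from the text), compare sending the same power through the same wire at a low and at a stepped-up voltage:

```python
def transmission_loss(power_w, voltage_v, wire_resistance_ohm):
    """Heat dissipated in the wire: P_loss = I^2 * R, with I = P / V."""
    current = power_w / voltage_v
    return current ** 2 * wire_resistance_ohm

P = 100e3   # 100 kW delivered (assumed)
R = 5.0     # total wire resistance in ohms (assumed)

for V in (11e3, 330e3):   # transmission voltage before/after stepping up
    loss = transmission_loss(P, V, R)
    print(f"V = {V/1e3:5.0f} kV: I = {P/V:7.2f} A, "
          f"loss = {loss:8.2f} W ({100 * loss / P:.4f}% of the power sent)")
```

Raising the voltage by a factor of 30 cuts the current by the same factor, and the I² dependence then cuts the heat loss by a factor of 900.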

Advantages and disadvantages of AC and DC generators
secondary source investigation physics skills H13.1a, b, c H14.1e


n Gather secondary information to discuss advantages/disadvantages of AC and DC generators and relate these to their use

So far, we have considered the way electricity is produced by generators and distributed to consumers. An obvious question is, 'Why do we have two different types of generators?' This is because AC and DC generators each have their own advantages and disadvantages. In the following section, the advantages and disadvantages of AC and DC generators are compared. Note that the advantages of one type of generator are generally the disadvantages of the other, and vice versa. Discuss whether the advantages and disadvantages of AC and DC generators influence their uses.


Disadvantages of DC generators
n A DC generator requires a split ring commutator to function. This inevitably complicates the design, resulting in more expensive construction and more cost and effort for maintenance.
n The gaps in the split ring commutator result in sparks being produced during the generation of electricity.
n As described in the previous section (see also Chapter 8), the output of DC generators (DC electricity) loses more energy than that of AC generators during transmission.

Advantages of DC generators
n Some devices, such as battery rechargers and cathode ray tubes, rely solely on DC for their function. Although AC can be converted to DC using electronic devices, it is more convenient and cost effective to produce DC directly using a DC generator.
n For a given voltage, DC is generally more powerful than AC, so DC is preferred for heavy-duty tools. In such circumstances, DC generators are superior.

A DC powered drill

Disadvantages of AC generators
n Refer to the advantages of DC generators.
n For electricity to be correctly integrated throughout the nation, the frequencies of AC generators in different regions must be synchronised; that is, the AC outputs must have the same frequency and be in phase. This requires extra coordination.
n The output of AC generators is about 10 times more dangerous than the equivalent DC generator output. This is because AC at its conventional 50 Hz frequency can most readily cause heart fibrillation.

Advantages of AC generators
n Refer to the disadvantages of DC generators.
n Three-phase AC currents are made possible, as described earlier. Three-phase AC has a variety of applications, such as powering induction motors (see Chapter 9).
n AC voltage is easily increased or decreased using transformers (see Chapter 8).

It is important to note that, although most power stations use AC generators, this does not mean that AC generators are superior to DC generators in every respect.

The competition between Westinghouse and Thomas Edison
n Analyse secondary information on the competition between Westinghouse and Edison to supply electricity to cities

What were the contributions made by Thomas Edison and his system of supplying electricity?
Thomas Edison was a famous inventor who collected many patents throughout his life. He was famous for the invention of the electric light bulb. Edison was the first person to set up a business to supply electricity (in 1878), and his system at the time was DC based. He initially installed light bulbs for homes, then street lights, all running on DC. His electricity and lighting system lit up New York City and proved very successful. He also developed DC motors and other appliances that ran on DC.

secondary source investigation physics skills H12.3a, b, d H12.4a, c, e, f


Thomas Edison

Nikola Tesla

George Westinghouse

Who was Nikola Tesla?
Nikola Tesla was the first person to demonstrate the production of AC and its transmission system, in 1883. He worked for Edison for a few years; during this time, Edison did not approve his proposal to develop AC systems. Tesla also invented AC motors, which made AC even more useful.

Who was George Westinghouse and what was his system of supplying electricity?
Westinghouse was a wealthy businessman who bought the patents for the AC system from Tesla and founded his own electric company (in 1885) in order to compete with Edison. His electricity supply system was based on AC generators and AC transmission.

Who won and why?
Westinghouse was the overall winner, as the AC system was more efficient, for two reasons: the split ring commutator in DC generators posed a problem at high rotation speeds, and, most importantly, AC transmission, through the action of transformers, was much more energy efficient. This allowed electricity to be transmitted over longer distances with only a small amount of energy loss. In 1886, there was a competition for inventors to propose plans to build a power plant using the power of Niagara Falls to supply electricity to distant cities. Both Edison and Westinghouse participated. Westinghouse proved the high energy efficiency of his AC system through many demonstrations, and eventually won the competition. He built his AC system at Niagara Falls a few years later, which confirmed the superiority of his system over Edison's DC system.

7.5

Assess the impacts of the development of AC generators on society and the environment
n Assess the effects of the development of AC generators on society and the environment

This is an 'assess' dot point, for which students need to give their own opinions and support them with examples and evidence. It is important to note that there is no single perfect answer; however, the arguments must be logical and succinct.


The following sections discuss some impacts of AC generators. You may come up with your own opinions or ideas and may also research your own examples and evidence.

Wood-fired stove

Positive effects and impacts
Improvements in the standard of living

AC generators enable people to have electricity at home. Electricity makes people’s lives more comfortable and luxurious. It allows people to have lights, heaters, air conditioners, electric tools, and so on. It provides people with entertainment, for instance, TV, movies, computers, all of which make life more exciting.

Electric oven

Efficient and clean energy

Electricity is a very efficient form of energy: using it is simply a matter of switching on an electrical device. Compare switching on a light bulb or an electric stove to lighting a diesel lamp or a wood-fired stove—electricity is instantaneous and far more efficient. Domestically, electricity appears to be the cleanest form of energy: compare an electric heater to a coal-fired heater that generates dirt and smoke. However, electricity is not an absolutely clean energy, because at the global level the pollution released by power plants still poses a major threat to the environment.
Concentration in production of electricity

If AC generators had not been developed, electricity production would be, at best, at very small regional levels. The invention of AC generators allows electricity to be generated at one centre. Such a concentration of energy production makes pollution much easier to manage. Also, because these big power plants are often situated far away from urban areas, the pollution affecting cities and people is reduced.
Regulation of energy production

Concentration of energy production also means energy production and consumption can be recorded and monitored. This is important for managing energy production as well as for research.
Minimisation of energy lost through transmission

A motor used in industry

Because of the high transmission efficiency of the AC system compared to the DC system, AC power plants can generally be situated far away from the site of energy consumption without significant amounts of energy being lost during transmission. This results in less waste and less demand on energy resources. It also makes electricity cheaper to use.
Development of industry

The development of AC generators, especially with the invention of AC induction motors, stimulated the development of industry. AC generators also had the overall effect of stimulating the progress of technology.


Negative effects
Environmental pollution

Currently, the main source of energy used for AC generators is fossil fuels. Burning fossil fuels releases a variety of pollutants that contaminate the atmosphere. The major one is CO2, which contributes to the enhanced greenhouse effect. Hot water discharged from power plants into local waterways leads to thermal pollution of rivers and streams. Nuclear energy also plays a part in electricity production, and wastes from nuclear energy can pose a major threat to the environment.

Environmental pollution

Disturbance of natural habitats

Constructing large AC power plants requires modification of the landscape. This will inevitably disturb or even destroy local habitats or natural heritage areas. For instance, building a hydro-electric power plant might raise surrounding water levels by many metres, severely disturbing natural habitats. Nuclear power stations may also disturb natural habitats. Mining of energy sources (such as coal) has a similar effect.
Accidents

Injuries and deaths from electric shocks have become more common with the widespread use of AC power. Fire hazards and other accidents associated with AC power are also common. Accidents at nuclear power stations can be disastrous, such as those at Chernobyl and Three Mile Island.
Replacement of labour

Many tasks that used to be done by humans have now been taken over by electric tools and machines. This causes unemployment, adding a burden to society.
Overwhelming industrialisation

The development of AC power enhances industrialisation. However, excessive industrial development brings many adverse effects to both society and the environment.
Note: In your answer, you can assess both the positive and negative impacts of the development of AC generators, but at the end your own opinion and evaluation should be expressed very clearly.

The production of an alternating current
first-hand investigation physics skills H11.1a H11.2e H11.3a, b, c H12.1a, d H12.2a, b


n Plan, choose equipment or resources for, and perform a first-hand investigation to demonstrate the production of an alternating current

Demonstrate the production of an alternating current
In this experiment, you need to produce AC using a simple AC generator. It is important to note that AC output cannot be measured using a galvanometer. In order to assess the frequency and size of the AC produced, a cathode ray oscilloscope (CRO) needs to be used. When conducting the experiment, note:
n What effect does the speed of rotation have on the amplitude or size of the AC output?


n What effect does the speed of rotation have on the frequency of the AC output? How can the frequency of the AC be determined from the information displayed on the CRO? (See the sketch below.)
n Optional: How would the variation in size and frequency of the AC output be demonstrated in a load, such as a light bulb?
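One way to answer the frequency question: read the period T of the trace from the CRO timebase setting, then use f = 1/T. The sketch below shows the arithmetic (the division count and timebase setting are assumed example readings, not prescribed values):

```python
# Reading frequency from a CRO trace: if one full cycle spans n horizontal
# divisions at a timebase of s seconds per division, then T = n * s and f = 1/T.
n_divisions = 4.0    # assumed: one cycle spans 4 divisions
timebase = 5e-3      # assumed: 5 ms per division
T = n_divisions * timebase
print(f"T = {T * 1000:.0f} ms, f = {1 / T:.0f} Hz")   # -> T = 20 ms, f = 50 Hz
```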

Display of the electricity output using a CRO
Laboratory shot of a small AC generator


chapter revision questions
1. (a) List the essential components of a DC generator.
(b) How does the armature of a DC generator differ from that of a DC motor?
2. Describe the role of the split ring commutator in a DC generator. How does it compare to that in a DC motor?
3. A simple AC generator is shown in the diagram below:
[Diagram: a coil labelled A and B mounted on a rotational axis between the poles of a magnet]
(a) Sketch a graph that describes the changes in magnetic flux as the coil completes one revolution within the magnetic field.
(b) Sketch a graph that describes the rate of change in magnetic flux as the coil completes one revolution within the magnetic field.
(c) What can you say about the similarities and differences between these graphs?


(d) Based on the graph you drew for (b), sketch another graph that describes the size of the induced EMF as the coil rotates once inside the magnetic field (with respect to A). On the same axis, sketch the graph obtained when the number of turns of the coil is increased 20 times.
(e) What changes would you make to this design in order to generate a DC current?
4. The following graph shows the voltage profile of an AC generator over time.
[Graph: Voltage (V) versus Time (s), with an amplitude of 5 V and times t1, t2 and t3 marked on the time axis]


(a) Draw a graph to describe the voltage profile when the magnets of the generator are changed to ones that are twice as strong.
(b) Draw a graph to describe the voltage profile when the armature of the generator is turned twice as fast.
5. A generator can also be made by spinning a magnet between sets of coils. See the drawing below.

[Diagram: a magnet mounted on a rotational axis, spinning between two stationary coils]

(a) What is the advantage of this design?
(b) What is the nature of the output voltage? Is it AC or DC?
(c) Use this idea to illustrate how a three-phase AC generator can be constructed.
6. (a) Describe the infrastructure used to deliver electricity from a power station to consumers.
(b) Describe how this infrastructure is protected from lightning strikes.

7. (a) Justify why AC generators are more commonly used than DC generators in today's society.
(b) Identify one drawback of using AC generators.
8. Assess the impact of the development of AC generators on society and the environment.
9. A student was turning the handle of an AC generator in a school laboratory to power a small light bulb. The student realised that the handle was much harder to turn when the switch for the light bulb was on. If the student came to you for advice, how would you explain this phenomenon?


CHAPTER 8

Transformers
Transformers allow generated voltage to be either increased or decreased before it is used
Introduction
Transformers are devices that are able to change the size of an input voltage via electromagnetic induction. They are found everywhere! They can take the form of large rectangular boxes buried underground or hanging on telegraph poles. Smaller versions are found at the end of power cords for various types of chargers and electronic devices at home. Transformers have important impacts on both society and the environment, and these will be discussed in this chapter.

Transformers: What are they?
n Describe the purpose of transformers in electrical circuits

8.1

Definition

Transformers are devices that increase or decrease the size of an AC voltage as it passes through them. The simplest transformer consists of:
■■ A primary coil, where the AC voltage is fed in.
■■ A secondary coil, the output, which will be connected to a load. The secondary coil has a different number of turns to the primary coil, depending on whether the transformer increases or decreases the voltage.
■■ Both the primary coil and the secondary coil are wound around a soft iron core.
Note: The 'soft iron' in this context refers to pure iron. Unlike steel, pure iron is generally quite soft.

Figure 8.1  A transformer [labels: soft iron core; primary coil, connected to an AC power source; secondary coil, connected to a load]

Figure 8.2  Transformer circuit symbol [labels: primary coil, secondary coil, soft iron core]


Principle of operation
How does a transformer change the voltage of the AC power input?

When AC power is fed into the primary coil of the transformer, the changing current of the AC produces its own changing magnetic flux. The magnetic flux is linked to the secondary coil via the soft iron core. In the secondary coil, this changing magnetic flux induces an EMF as the output. Since the size of the magnetic flux developed in the primary coil depends on the number of turns of the primary coil, and the EMF induced in the secondary coil depends on the number of turns of the secondary coil, it follows that by varying the numbers of turns, the voltage induced in the secondary coil can be made either larger or smaller than that in the primary coil.
Note: Because a changing magnetic flux is essential for electromagnetic induction, transformers do not work for DC, since a DC current and its associated magnetic flux are constant.

The role of the soft iron core in transformers

Besides acting as a support frame on which the coils can be wound, the soft iron core in transformers has two additional very important roles: 1. The soft iron acts as a medium through which magnetic flux can flow. Analogy: Just as sound travels best in dense solids, magnetic flux travels best in (soft) iron.

Therefore, the soft iron core is responsible for linking the magnetic flux from the primary coil to the secondary coil. 2. More importantly, soft iron is used as the core in order to amplify or intensify the magnetic flux, so that the mutual induction process becomes more efficient. The way soft iron can magnify magnetic flux is by aligning its randomly orientated domains with the external magnetic field, becoming a temporary ferromagnet that produces its own magnetic flux, which is added to the existing magnetic flux. This idea is illustrated in Figure 8.3 (a) and (b). Note: Domains are the smallest units in a material that possess a net magnetic field. Each domain contains billions of atoms. The magnetic field of the domains is produced by the moving electrons of the atoms. Note: Similarly, for the purpose of magnifying magnetic flux, coils in motors and

generators are wound on soft iron cores.

Figure 8.3  (a) A soft iron bar [labels: domains; the net magnetic field of each domain points in a random direction] (b) A soft iron bar within an external magnetic field [labels: the net magnetic fields of the domains are aligned and summed, giving the net magnetic field of the soft iron bar]


Energy lost in transformers
n Gather, analyse and use available evidence to discuss how difficulties of heating caused by eddy currents in transformers may be overcome

If a transformer is 100% energy efficient, then the power of the AC input into the primary coil should equal the power output of the secondary coil. However, in reality, transformers are not perfectly efficient. A portion of the input energy is lost during the process of mutual induction. The energy loss is mainly in the form of heat, which is dissipated by both the primary and secondary coils but, more extensively, by the soft iron core. The reason for the heat dissipation by the soft iron core is as follows: just like the secondary coil, the soft iron core is subject to the changing magnetic flux produced by the current in the primary coil. Being a solid conductor, it has eddy currents generated within it. The circulation of eddy currents in the core then generates heat—similar to the way induction cooktops use eddy currents to heat cookware—which is dissipated to the surroundings.

secondary source investigation PFAs H3 physics skills H13.1a, b, c, d H14.1g, h

How can the heat lost by the core be minimised?
To minimise the heat dissipated by the soft iron core, the core is laminated. Lamination means the core is constructed from stacks of thin iron sheets, each coated with insulating material so that it is electrically insulated from the neighbouring sheets (see Fig. 8.4). Lamination effectively increases the resistance of the core to the flow of eddy currents, therefore restricting the circulation of large eddy currents. This decreases the heat dissipated by the core and increases the overall energy efficiency of the transformer.

Figure 8.4  Lamination [labels: soft iron core showing the lamination; iron sheet]
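Going beyond the HSC course, a standard engineering estimate says the eddy-current power loss per unit volume of core scales with the square of the sheet thickness t: P ≈ π²B²f²t²/(6ρ), where B is the peak flux density, f the frequency and ρ the resistivity. The sketch below (all values assumed, purely illustrative) shows how strongly lamination helps:

```python
import math

def eddy_loss_per_m3(B_peak, f, t, rho):
    """Approximate eddy-current loss (W per cubic metre) in a core, using the
    classical thin-sheet estimate P ~ pi^2 * B^2 * f^2 * t^2 / (6 * rho)."""
    return (math.pi ** 2) * (B_peak ** 2) * (f ** 2) * (t ** 2) / (6 * rho)

rho_iron = 1.0e-7   # resistivity of iron in ohm metres (approximate)
for t in (0.020, 0.0005):          # a 20 mm solid block vs 0.5 mm laminations
    loss = eddy_loss_per_m3(1.0, 50, t, rho_iron)
    print(f"sheet thickness {t * 1000:4.1f} mm -> {loss:12.0f} W per m^3")
```

Because the loss grows with t², halving the sheet thickness quarters the loss, which is why cores are built from many thin, insulated sheets rather than one solid block.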

Types of transformers
n Compare step-up and step-down transformers

8.2

Generally, there are two types of transformers, classified based on whether they increase or decrease the input voltage.

Step-up transformers For step-up transformers, the voltage output from the secondary coil is larger than the voltage input into the primary coil. (Hence ‘step up’.) The secondary coil has more turns than the primary coil.


Step-down transformers For step-down transformers, the voltage output from the secondary coil is smaller than the voltage input into the primary coil. (Hence ‘step down’.) The secondary coil has fewer turns than the primary coil.

8.3

Calculations for transformers
n Identify the relationship between the ratio of the number of turns in the primary and secondary coils and the ratio of primary to secondary voltage
n Explain why voltage transformations are related to conservation of energy
■■ Solve problems and analyse information about transformers using Vp/Vs = np/ns

Calculating the voltage change
To quantitatively calculate the voltage change for a given transformer, we need the formula:

Vp/Vs = np/ns

Where: Vp = voltage input into the primary coil, measured in V
Vs = voltage output from the secondary coil, measured in V
np = number of turns of the primary coil
ns = number of turns of the secondary coil

Note: the ‘voltage’ in this section refers to AC voltages.
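For quick checks of transformer arithmetic, here is a minimal sketch (the function name is our own; the numbers preview Example 1 below):

```python
def secondary_voltage(v_primary, n_primary, n_secondary):
    """Ideal transformer: Vp / Vs = np / ns, so Vs = Vp * ns / np."""
    return v_primary * n_secondary / n_primary

# 240 V into a 2500-turn primary with a 500-turn secondary:
print(secondary_voltage(240, 2500, 500))   # -> 48.0 V (a step-down transformer)
```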

Example 1

A transformer has a primary coil of 2500 turns and a secondary coil of 500 turns.
(a) Classify this transformer.
(b) If the input voltage to the primary coil is 240 V, what will the voltage output of the secondary coil be?

Solution

(a) This is a step-down transformer, as there are fewer turns of coil in the secondary than the primary.


(b) Vp/Vs = np/ns
240/Vs = 2500/500
Vs = 48 V

Example 2

A discman will only function if it is fed a voltage of 4.5 V. The common household voltage is 240 V. What will be the required turns ratio between the primary and secondary coils if a transformer is to be used?

Solution

Vp/Vs = np/ns
np/ns = 240/4.5
∴ np/ns = 160/3
∴ the turns ratio between the primary coil and the secondary coil should be 160:3.

Calculating the power for transformers
Recall that power in electricity is the product of voltage and current, or mathematically:

P = IV

Where: P = power, measured in W
I = current, measured in A
V = voltage, measured in V

Note: For a perfectly energy efficient transformer, the power input should be equal to the

power output. This is known as the law of conservation of energy.

Example 1

For a transformer, the voltage and current input into the primary coil are 240 V and 2.00 A respectively.
(a) Calculate the power input of the transformer.
(b) Calculate the power output if the transformer is 100% energy efficient.


Solution

(a) Pp = VpIp
Vp = 240 V, Ip = 2.00 A
∴ Pp = 240 × 2.00 = 480 W
(b) 480 W. If the transformer is 100% energy efficient, then the power output equals the power input, as a consequence of the law of conservation of energy.

Example 2

The transformer of a battery recharger has a voltage and current input of 240 V and 3.00 A respectively. If the transformer dissipates 30% of the input energy, what will be the power output from the secondary coil?

Solution

Pp = VpIp = 240 × 3.00 = 720 W
Since the transformer dissipates 30% of the input energy, the power in the secondary (Ps) will be 70% of that in the primary (Pp).
∴ Ps = 70% × Pp = 0.7 × 720 = 504 W

Calculating the current change
In these calculations, for convenience, we will assume the transformers are 100% energy efficient. Therefore:
Pp = Ps
Since P = VI: VpIp = VsIs
so Vp/Vs = Is/Ip
Combined with the equation on page 152, we have:

np/ns = Vp/Vs = Is/Ip


Where: np = the number of turns of the primary coil
ns = the number of turns of the secondary coil
Vp = voltage input into the primary coil, measured in V
Vs = voltage output from the secondary coil, measured in V
Ip = current input into the primary coil, measured in A
Is = current output from the secondary coil, measured in A
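A sketch extending the earlier helper with conservation of power (assuming an ideal transformer; the round power-station figures match Example 1 below):

```python
def ideal_transformer(v_p, i_p, n_p, n_s):
    """Ideal transformer: Vp/Vs = np/ns = Is/Ip, so Vp*Ip = Vs*Is."""
    v_s = v_p * n_s / n_p    # voltage scales with the turns ratio
    i_s = i_p * n_p / n_s    # current scales inversely with the turns ratio
    return v_s, i_s

v_s, i_s = ideal_transformer(v_p=23e3, i_p=10e3, n_p=1000, n_s=14348)
print(f"Vs = {v_s / 1e3:.0f} kV, Is = {i_s:.0f} A")   # ~330 kV, ~697 A
print(f"power in  = {23e3 * 10e3:.3e} W")             # 2.3e8 W
print(f"power out = {v_s * i_s:.3e} W")               # equal, by conservation
```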


Example 1

A transformer at a power plant has a primary coil with 1000 turns. The voltage input into the primary is 23 kV and is stepped up to 330 kV. Assume this transformer is 100% efficient.
(a) Find the number of turns of the secondary coil.
(b) If the power input of the primary coil is 230 MW, calculate the current in the secondary coil and compare it to that of the primary.

Solution

(a) Vp/Vs = np/ns
(23 × 10³)/(330 × 10³) = 1000/ns
∴ ns ≈ 1.4 × 10⁴ turns
(b) Since the power input is 230 MW, the power in the secondary must also be 230 MW if the transformer is 100% efficient, therefore:
Ps = VsIs
Ps = 230 000 000 W, Vs = 330 000 V
Is ≈ 697 A
For comparison, the current in the primary is Ip = Pp/Vp = 230 000 000/23 000 = 10 000 A, so the secondary current is much smaller.
Note: Therefore, as the voltage is stepped up, the size of the current in the coil is reduced, and vice versa.

Example 2

An industrial transformer has a primary coil with 6000 turns. The voltage of the primary coil is 415 V, and the voltage of the secondary is 12 V. Assume the transformer is 100% energy efficient.
(a) How many turns are there in the secondary coil?
(b) If the current in the primary coil is 12 A, calculate the current in the secondary coil.
(c) Explain why the wires in the secondary coil are much thicker than those in the primary coil.

Solution

(a) Vp/Vs = np/ns
415/12 = 6000/ns
∴ ns ≈ 174 turns
(b) Is/Ip = np/ns
Is = 12 × 6000/ns
Is ≈ 415 A (using the unrounded value of ns)

(c) A thicker wire in the secondary coil is needed to accommodate the large current, as a thin wire would overheat.

8.4

Voltage changes during the transmission from power plants to consumers
n Explain the role of transformers in electricity sub-stations

At a power plant, various energy transformations are used to produce electricity. For example, burning fossil fuels converts water to steam, which is then used to turn the turbines of an (AC) electric generator. Electricity is generated based on the principle of electromagnetic induction, as discussed in Chapter 6. The voltage changes during the transmission from the power plant to consumers can be described using a flow chart:
1. Electricity is usually generated by a three-phase AC generator; generally the voltage generated is as large as 23 000 V, and the current output from each set of coils is almost 10 000 A.
2. For long-distance transmission, the electricity is fed into a step-up transformer that increases the voltage to 330 000 V and correspondingly decreases the size of the current (see the example on page 155). This is done to increase the efficiency of the transmission of the electricity, as explained below.
3. After the electricity has been transmitted over a long distance, the voltage is stepped down at regional sub-stations, mainly for safety reasons. Correspondingly, the current increases.
4. Eventually, the voltage is stepped down to 240 V at the local telegraph pole transformers for domestic use; industries may use slightly higher voltages.


A switching yard at a regional substation

A transformer at a regional substation

A transformer mounted on power poles

The role of transformers in long-distance transmission
n Gather and analyse secondary information to discuss the need for transformers in the transfer of electrical energy from a power station to its point of use

Why is there a need to step up the voltage for long-distance transmission? As a current passes through a conductor, energy, mainly in the form of heat, is lost to the surroundings. The amount of energy lost is related to the size of the current as well as the resistance of the conductor, and can be described by the formula P = I²R, where P is the power dissipated in W, I is the size of the current in amperes and R is the total resistance of the conductor in Ω (ohms). Since resistance is proportional to the length of the conductor, a long transmission wire will inevitably have a high resistance; consequently there will be a large amount of heat dissipated when electricity passes through it. Increasing the voltage of the electricity for transmission effectively decreases the size of the current running through the wire without changing the power transmitted. Since the energy lost is proportional to the square of the size of the current, having a smaller current decreases the energy lost during transmission dramatically, thus making the transmission much more efficient. Making wires from materials that have very low electrical resistance, such as aluminium or copper, will also reduce the energy loss during transmission.

secondary source investigation PFAs H3 physics skills H13.1a, b, c

Note: The fact that transformers do not work for DC means there is no easy way of

stepping up a DC voltage for transmission. Large currents would run through transmission wires, which would lead to enormous amounts of energy loss in the form of heat. This makes DC transmission extremely inefficient, which is the main reason why DC is not used on a large scale today. This is also why Westinghouse’s AC system became more widespread than Edison’s DC system.


8.5

An example of a transformer used at home

The need for transformers in household appliances
n Discuss why some electrical appliances in the home that are connected to the mains domestic power supply use a transformer

Large power plants, regional sub-stations and many household appliances have transformers. Many household appliances function at voltages other than the standard domestic voltage of 240 V:
■■ Some appliances require step-up transformers: for instance, a TV requires thousands of volts for the operation of its cathode ray tube.
■■ Some appliances require step-down transformers: for instance, scanners, toys and computers require lower voltages for their correct operation as well as for safety reasons. Also, some electric ovens and cooktops need to step down the input voltage in order to increase the size of the current, which effectively increases the heating effect of such devices.

8.6

Power transmission lines


The impact of the invention of transformers
n Discuss the impact of the development of transformers on society

Once again, you need to come up with your own opinions and support them with examples and evidence based on your own research. Arguments must be logical and succinct. Here are some ideas about the impact of the development of transformers.
■■ A shift from DC usage to AC usage: As described in Chapter 7, Edison's DC system dominated the electricity market during the early years. However, the market shifted decisively to Westinghouse's AC system, mainly due to the invention of transformers, which allowed AC to be easily stepped up or down for efficient transmission.
■■ Efficiency in the transmission of electricity: The power loss during transmission is dramatically reduced as a consequence of the development of transformers. This means electricity transmission becomes more economical, reducing the price of electricity. Less energy loss also means fewer fossil fuels need to be burnt to provide the same amount of electricity, which is more environmentally friendly.
■■ Distant location of power stations: The development of transformers led to efficient transmission, which means large power plants can be situated a long distance away from the site of electricity consumption. This allows the generation of electricity to be concentrated at one place, as well as allowing these power plants to move away from metropolitan


areas, which decreases the level of pollution and other hazards in metropolitan areas. Imagine if you had a huge power plant right next to your house!
■■ The development of appliances that run at different voltages: Only with the development of transformers are appliances such as TVs, scanners and computers, which run at voltages other than 240 V, made possible.

Producing secondary voltage
n Perform an investigation to model the structure of a transformer to demonstrate how secondary voltage is produced

In this investigation, we model the structure and demonstrate the operation of a transformer.
Procedure
1. Wind a primary coil around a soft iron bar, carefully counting the number of turns.
2. Wind a secondary coil—for better linkage of magnetic flux, the secondary coil can be wound on top of the primary coil—and also note the number of turns.
3. Connect the primary coil to an AC power source from a power pack with known voltages.
4. Measure the AC voltage output from the secondary coil using a cathode ray oscilloscope. Caution: Ensure the secondary coil has fewer turns than the primary coil.
5. Compare the measured values to the theoretical values calculated from the turns ratio.
6. Compare the structure of your model transformer with that of an induction coil.

first-hand investigation physics skills H12.1a, b, d H12.2a H12.3a H14.1a, b, c, d

Some questions for you to think about
n Why do the measured voltages differ from the theoretical voltages calculated from the turns ratio?
n What will be the effect of removing the soft iron bar?
n What will be the effect of winding the secondary coil on a different soft iron bar? What will happen to the secondary voltage as you move the primary and secondary coils further and further apart?

chapter revision questions
1. (a) Explain why transformers do not work for DC.
(b) Name one device that can be used to change the voltage of a DC source.
2. For the following transformers, find:
(a) the secondary coil voltage, given that the primary coil has 50 turns, the secondary coil has 75 turns and the voltage across the primary coil is 11 V
(b) the primary coil voltage, given that the primary coil has 2000 turns, the secondary coil has 6500 turns and the voltage across the secondary coil is 66 kV
(c) the secondary current, given that the primary coil has 825 turns, the secondary coil has 175 turns and the current in the primary coil is 12 mA


3. A household step-down transformer has 500 turns in its primary coil.

(a) Define the term 'step down'.
(b) Assume this transformer is perfectly energy efficient. If the voltage is reduced from 240 V to 12 V by the transformer, how many turns of coil are there in the secondary?
(c) Use the information in part (b), but now assume this transformer is only 80% energy efficient. If 2.0 A is fed into the primary coil, what is the size of the current flowing through the secondary coil?
(d) Account for why transformers are often not 100% energy efficient.
4. Describe in detail how electricity is transmitted from a power station to households. In your answer, you should include the changes in voltage involved.
5. Name two electrical appliances that require transformers for their function.
6. Evaluate the impact of the development of transformers on society and on the

environment.
7. In many school experiments conducted with transformers, a primary coil is wound on a steel rod, and the secondary coil is wound over the primary coil.
(a) Justify why the secondary coil is wound over the primary coil.
(b) What is the role of the steel rod?
Both the primary coil and the secondary coil now have a voltmeter and an ammeter connected. Readings are recorded in the table below:

Primary voltage (V)   Secondary voltage (V)   Primary current (A)   Secondary current (A)
2.0                   4.1                     1.0                   0.3
3.9                   5.9                     2.2                   0.8
5.8                   12.0                    3.4                   1.3
8.1                   16.0                    4.0                   1.7
10.0                  20.3                    5.1                   2.4

(c) If the primary coil has exactly 50 turns, how many turns would the secondary coil have? Show all working.
(d) Is the transformer perfectly efficient? Comment.
(e) Suggest one thing that can be done (at least in theory) to make this transformer more efficient.


CHAPTER 9

AC motors
Motors are used in industry and the home, usually to convert electrical energy into more useful forms of energy

9.1

AC electric motors
n Describe the main features of an AC electric motor

In Chapter 5, we discussed the principle of DC electric motors. AC electric motors function in a similar way; however, there are a few subtle differences. Structurally, the essential difference between AC and DC motors is that AC motors have slip ring commutators, in contrast to the split ring commutators used in DC motors. It is essential to note that, unlike split ring commutators, which are responsible for reversing the current direction, slip ring commutators simply conduct electricity from the power source without interfering with the rotation of the coil. The structure of a simple AC motor is shown in Figure 9.1.

Figure 9.1  The structure of an AC motor

[Diagram: a coil abcd rotates anticlockwise in a magnetic field; the force on side cd is up and the force on side ab is down; the coil is connected through carbon brushes and springs to a slip ring commutator and an AC power source]


In Figure 9.1, the clockwise direction of the current in the coil will result in an upward force on 'cd' and a downward force on 'ab'; consequently the coil rotates in an anticlockwise direction. As the coil rotates past the vertical position, one would expect it to be pushed back and consequently oscillate about the vertical axis, since there is no action of the 'split ring commutator' (see Chapter 5). However, this is not the case if we examine the nature of AC: the voltage, and hence the current, of AC power constantly changes polarity. Consequently, the current direction reverses by itself at the vertical position without a split ring commutator, which allows the force to reverse direction as well. This enables the coil to rotate in a constant direction (see Fig. 9.2). Not surprisingly, however, how fast an AC motor can spin depends not only on the factors that determine the torque on the coil, but also on how fast the polarity of the AC power changes. Hence the speed of an AC motor is often limited by the frequency of the AC power.

Figure 9.2  The functional principle of an AC motor

Note: The frequency of the AC power determines how many times the AC power changes its polarity in one second.

9.2

AC induction motors
Structure
The AC induction motor, just like any other motor, consists of a stator and a rotor. The input current is fed into the stator and is responsible for creating magnetic fields. The rotor is made up of parallel aluminium bars whose ends are embedded in a metal ring at each end, forming a cage structure called the squirrel cage (although it resembles a bird cage). The rotor is usually covered by laminated soft iron and is embedded inside the stator. A schematic drawing of an AC induction motor is shown in Figures 9.3 (a) and (b).

Figure 9.3  (a) A schematic drawing of an AC induction motor [labels: stator, coil in the interior, AC power input, shaft, rotor (squirrel cage) embedded in the stator] (b) A close-up of a squirrel cage covered by a sheet of laminated soft iron [labels: metal ring, aluminium bars]


Functional principle
Based on its structure, you might wonder: if no current is fed into the rotor, how does it actually create a torque and consequently rotate? The fundamental principle upon which the induction motor operates is shown through the example below. Study it carefully.

Example 1

Consider the set-up shown in Figure 9.4. An aluminium disk is mounted so that it is free to rotate. Suppose it is stationary to start with. Describe the subsequent motion of the disk when a magnet is brought nearby and is made to rotate about the disk as shown.

Figure 9.4  Simplified illustration of induction motor mechanisms [labels: a horseshoe magnet rotates about an aluminium disk, which is mounted on a support and free to rotate]

Solution
The disk will spin in the same direction as the magnet is moved; that is, it follows the magnet. This is because as the magnet is made to move next to the disk, there will be a changing magnetic flux created. This changing magnetic flux is linked with the disk, which results in an EMF being induced in the disk. The induced EMF results in eddy currents flowing inside the disk in such a way that they oppose the cause of the induction (Lenz's law)—in this case, the movement of the magnet. Since it is not possible to stop the moving magnet (as the magnet is forcibly moved), it follows that the disk will have to follow the motion of the magnet in order to minimise the relative movement between it and the magnet.
Note: For all motions, minimising the relative movement is equivalent to making something stationary.

In a sense, the interaction between the induced eddy currents and the external magnetic field allows the magnetic field to drag along the disk.

Returning to the AC induction motor: AC current is fed into the multiple coiled stator to create magnetic fields. The current is fed in such a way that there will be a rotating magnetic field created inside the stator. This rotating magnetic field will induce eddy currents within the squirrel cage rotor. The eddy current will flow in such a way that the rotor will rotate in the direction of the rotating magnetic field created by the stator, similar to how the disk spins in the direction of the rotating magnet. It is important to emphasise that for the AC induction motor, no current is fed into the rotor; current is induced inside the rotor, which then interacts with the external magnetic field of the stator to result in rotation, hence its name induction motor. Note: The detail of how the rotating magnetic field is created is not required by the HSC

course.


Energy transformation
secondary source investigation PFAs H3 physics skills H13.1a, b, c, e H14.1e, g, h

n Gather, process and analyse information to identify some of the energy transfers and transformations involving the conversion of electrical energy into more useful forms in the home and industry

You are required to identify some conversions and transformations of electrical energy into other useful forms of energy. It is important to re-emphasise the law of conservation of energy: energy cannot be created or destroyed, it can only be transformed. Hence, electrical energy can be converted into different forms of energy through various electrical devices. This is summarised in Figure 9.5. It should be noted that none of these energy conversions is perfectly efficient. During each conversion, it is inevitable that a portion of the original energy is converted into undesirable forms (e.g. heat) and dissipated into the environment; this energy is therefore 'wasted'. Unfortunately, this can only be minimised, not completely avoided.

Figure 9.5  The conversion of electrical energy by various electrical devices

Form of energy               Examples
Kinetic (mechanical) energy  Fans, drills, blenders
Light energy                 Light bulbs, neon lights
Sound energy                 Speakers
Chemical energy              Recharging batteries
Heat energy                  Induction cooktops, furnaces


Demonstrating the principle of an AC induction motor
n Perform an investigation to demonstrate the principle of an AC induction motor

first-hand investigation physics skills H12.1b, d H14.1e, g

This investigation does not necessarily need to be planned by students. It should demonstrate how a changing magnetic field can cause motion in another object not in physical contact with the source of the magnetic field. Such a demonstration can be made using the following apparatus:
n an old CD
n aluminium foil
n cotton
n blu-tack
n a strong bar magnet
n a dowel stick or pencil (about 10 cm long)
n an electric drill

Method
1. Wrap the CD in aluminium foil so that the foil is smooth (the aluminium base of a drink can could also be used here).
2. Pack the centre of the CD with blu-tack and pass a piece of cotton through it so that the CD can be suspended evenly and is able to spin smoothly.
3. Drill a hole in the exact centre of the bar magnet (the diameter of the hole should match the diameter of the dowel stick or pencil).
4. Using a suitable glue, insert the dowel stick or pencil into the hole in the magnet. If a hole cannot be drilled, use araldite or a similar glue to glue the dowel stick to the magnet.
5. Insert the other end of the pencil or dowel stick into the chuck of an electric drill.
6. With the CD suspended horizontally, hold the drill so that the magnet can rotate in a horizontal plane underneath the CD (but not touching it).
7. Observe the motion of the aluminium disk as the magnet rotates in a horizontal plane beneath it.
8. By applying Lenz's law, describe the eddy currents produced in the disk.

[Diagram: the foil-covered disk hangs from a thread; the magnet on its dowel stick is held in the drill chuck below the disk; the changing magnetic field causes motion]

chapter revision questions
1. What is a squirrel cage? Briefly describe its structure.
2. What type of power source must be used in AC induction motors? Account for this.
3. List two advantages of an AC induction motor. Relate these properties to its structure.
4. List two disadvantages of an AC induction motor.
5. Name one appliance at home that employs an AC induction motor.


from ideas to implementation

CHAPTER 10

From CRTs to CROs and TVs
Increased understandings of cathode rays led to the development of television
Introduction
The first observations of and experiments with cathode rays required a number of other technical advances to be made first. These included:
1. a way of producing DC electricity
2. Faraday's work on electromagnetic induction and the induction coil
3. an improved vacuum pump
4. glass-blowing skills, to make vacuum tubes of sufficient quality
The original investigations came about from a desire to see what would happen if a spark was made in a vacuum. Crookes' discharge tubes, a development of the original Geissler tubes, allowed the first observations of cathode rays. The study of cathode rays helped lead scientists to the realisation that the atom was indeed made up of smaller parts. This in turn opened the door to further refined models of the atom, which were used to explain newly found observations such as the Balmer series of emission lines of hydrogen, and enabled the progress of technology to develop cathode ray tubes into useful tools in televisions and oscilloscopes. These cathode ray tube devices have only recently been superseded by liquid crystal display and plasma screen technologies.

10.1 A cathode ray tube

A cathode ray tube: the idea
What is a cathode ray tube and what are cathode rays?
A cathode ray tube (CRT) consists of an evacuated glass tube (with almost all gas removed) and two metal electrodes, one embedded at each end of the glass tube. The two electrodes are connected to a power source, usually via an induction coil. The electrode connected to the negative terminal of the power source is named the cathode, while the electrode connected to the positive terminal is named the anode. When the power is on, the cathode rays, which we know today are in fact moving electrons (this was not understood historically; see later sections), flow from the negative cathode to the positive anode inside the tube, just as in an electric circuit. Figure 10.1 is a schematic drawing of a fundamental CRT. This structure is modified to form the basis of many useful electronic appliances, as discussed later.
Note: A CRT is sometimes also known as a discharge tube.


Essential requirements for a functional CRT
■■ Low pressure: A functional CRT must have its glass tube evacuated to a very low gas pressure, preferably close to a vacuum. This is because the cathode and anode are separated by quite a large distance inside the tube. The low pressure inside the tube ensures minimal collisions between the air molecules inside the tube and the electrons (cathode rays) as they make their way from the cathode to the anode. (The effect of various pressures on the nature of the discharge inside a cathode ray tube is briefly discussed in the first-hand investigation section.)
■■ High voltage: Low pressure alone is not enough to ensure the electricity can 'jump' across such a big gap. An extremely high voltage is required to pull the electrons off the cathode and give them enough kinetic energy to make their way from the cathode to the anode. It is important to note that CRTs only work on DC; hence, transformers cannot be used to step up the voltage to the required value (Why?). Instead, an induction coil is used to step up the voltage.

Figure 10.1  [Schematic of a CRT: an evacuated glass tube (the pressure inside the tube is very low, say 0.01 kPa) with a cathode at one end and an anode at the other; the path of the electrons (the cathode ray, visible in this case) runs from cathode to anode past a non-luminous space, producing a green glow on the glass wall at the anode; the electrodes are connected to a power source via an induction coil]

Before we can continue our study of CRTs, we must first learn about electric fields and how charged particles behave in electric and magnetic fields.

Electric fields
n Identify that charged plates produce an electric field
n Discuss qualitatively the electric field strength due to a point charge, positive and negative charges and oppositely charged parallel plates
n Describe quantitatively the electric field due to oppositely charged parallel plates

10.2

Definition

An electric field is a region in which charged particles experience a force. An electric field is a vector quantity, which means it must have both magnitude and direction: Definition

The strength of an electric field at any point is defined as the size of the force acting per unit of charge. The direction of the electric field at any point is defined as the direction of the force a positive charge would experience placed at this point.


Electric fields associated with various charged objects
Electric field around a point charge

Figure 10.2  Electric field around a positive charge

Figure 10.3  Electric field around a negative charge

Note: The density of the field lines represents the strength of the electric field, and the arrows point in the direction of the electric field.

Electric field between charges

Figure 10.4  Electric field between a positive and negative charge

Electric field between a pair of parallel electric plates

Figure 10.5  Electric field between a pair of parallel electric plates [labels: DC power source, metal plates]

Note: The parallel field lines represent a uniform electric field.


Parallel plates

The strength of the electric field between a pair of parallel electric plates is proportional to the size of the applied voltage and inversely proportional to the distance separating the plates. It is governed by the equation:

E = V/d

Where: E = the strength of the electric field, measured in V m⁻¹ (E can also be measured in N C⁻¹)
V = the supplied voltage, measured in V
d = the distance of separation between the plates, measured in m

■■ Solve problems and analyse information using: E = V/d

Example

A capacitor is built using a pair of parallel electric plates. If the voltage applied to the plates is 1400 V, and the distance separating the two plates is 0.70 cm, determine the strength of the electric field produced.


Solution

E = V/d
V = 1400 V
d = 0.70 cm = 0.0070 m
E = 1400/0.0070 = 2.0 × 10⁵ V m⁻¹

See Section 10.3 for examples using E = V/d together with F = qvBsin θ and F = qE.


10.3

Forces acting on charged particles in electric and magnetic fields
n Identify that moving charged particles in a magnetic field experience a force
n Describe quantitatively the force acting on a charge moving through a magnetic field: F = qvBsin θ
■■ Solve problems and analyse information using: F = qvBsin θ and F = qE

Recall that a charge experiences a force inside an electric field, and a charge that moves at a constant velocity also experiences a force inside a magnetic field.

Force on a charge in an electric field
Magnitude
The magnitude of the force acting on a charge in an electric field is equal to the product of the strength of the field and the size of the charge; it is governed by the equation:

F = qE

Where: F = the size of the force acting on the charge, measured in N
E = the strength of the electric field, measured in V m⁻¹ (or N C⁻¹)
q = the size of the charge, measured in C

Direction

For positive charges, the forces act in the direction of the electric field. For negative charges, the forces act in the opposite direction to the electric field. Note: One can also deduce the direction of the forces by using the fact that charges

are attracted to the opposite polarity and repelled by the same polarity; for instance, in the case of a pair of electric plates, a positive charge will be repelled by the positive plate and attracted to the negative plate.

Example 1

A small object carrying a positive charge of 5.0 mC is placed inside an electric field of strength 2.4 × 10³ V m⁻¹ directed towards the right. Determine the strength and direction of the force acting on this object.

Solution
F = qE
q = 5.0 × 10⁻³ C
E = 2.4 × 10³ V m⁻¹
F = (5.0 × 10⁻³) × (2.4 × 10³) = 12 N
Direction: to the right, because for positive charges the force acts in the direction of the electric field.


Example 2

Suppose an oil drop carrying a negative charge of 6.40 × 10⁻¹⁹ C is placed carefully inside a uniform electric field created by a horizontal pair of parallel electric plates. The parallel plates have a voltage supply of 1550 V with the top plate positive, and the separation distance is 1.20 cm.
(a) Determine the strength and direction of the electric field produced by the electric plates.
(b) If the oil drop is to be suspended mid-way between the two plates, find the mass of the oil drop.
(c) If the polarity of the parallel plates is suddenly reversed, calculate the acceleration of the oil drop.

Solution
(a) E = V/d
V = 1550 V, d = 1.20 cm = 0.012 m
E = 1550/0.012 ≈ 1.29 × 10⁵ V m⁻¹
Direction: down; that is, the electric field runs from the top (positive) plate to the bottom (negative) plate.
(b) The oil drop can only be suspended if its weight force is exactly balanced by the upward force it experiences due to the electric field. That is:
weight force = force due to the electric field, i.e. Fg = FE
FE = qE = (6.4 × 10⁻¹⁹) × (1.29 × 10⁵) ≈ 8.27 × 10⁻¹⁴ N
Also Fg = mg, where m is the mass of the oil drop and g = 9.8 m s⁻²
m = Fg/g = (8.27 × 10⁻¹⁴)/9.8 ≈ 8.44 × 10⁻¹⁵ kg
(c) When the polarity is reversed, the net force acting on the oil drop will no longer be zero. The oil drop will experience a downward force equal to the sum of its weight and the electric force, and thus an acceleration results.
F = 2 × 8.27 × 10⁻¹⁴ N, m = 8.44 × 10⁻¹⁵ kg
a = F/m = (2 × 8.27 × 10⁻¹⁴)/(8.44 × 10⁻¹⁵) ≈ 19.6 m s⁻², down (since the force acts downwards)
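A quick numeric check of this worked example (a sketch only; the variable names are ours):

```python
q = 6.40e-19   # charge on the oil drop, in coulombs
V = 1550.0     # plate voltage, in volts
d = 0.0120     # plate separation, in metres
g = 9.8        # gravitational acceleration, in m s^-2

E = V / d                  # (a) field strength between the plates
F_E = q * E                # electric force on the drop
m = F_E / g                # (b) suspended, so qE balances mg
a = (F_E + m * g) / m      # (c) polarity reversed: both forces act downwards
print(f"E = {E:.3e} V/m, F = {F_E:.3e} N, m = {m:.3e} kg, a = {a:.1f} m/s^2")
```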


Example 3

The top plate of a horizontal pair of parallel electric plates is positive. The plates produce a uniform electric field of 9500 N C⁻¹ between them. An electron is projected into the electric field horizontally to the left.
(a) Determine the force experienced by the electron due to the electric field as it passes through the field.
(b) Describe the motion of the electron as it moves through the electric field.
(c) Does gravity affect the motion of the electron? Discuss.

Solution

(a) F = qE = 1.6 × 10–19 × 9500 = 1.52 × 10–15 N, up Note: Here, the unit for the electric field is N C–1, and is equivalent to V m–1.

(b) The horizontal component of the electron's motion—to the left—is subject to no net force (assuming no friction), and thus remains constant throughout. The vertical component is subject to an upward force, hence an acceleration. Therefore, the electron will move towards the left while at the same time curving upwards, like an upside-down projectile.
(c) The weight force of the electron acts downwards, which in theory should counteract the upward force due to the electric field. However, the weight force on the electron is as little as 9.11 × 10⁻³¹ × 9.8 ≈ 8.9 × 10⁻³⁰ N, which is almost 15 orders of magnitude smaller than the force due to the electric field; thus in reality the weight force has virtually no effect on the motion of the electron.

Force on a charge inside a magnetic field

Magnitude

The magnitude of the force acting on a charged particle as it moves through a magnetic field is governed by the equation:

F = qvB sin θ

Where:
F = the size of the force acting on the charge, measured in N
q = the size of the charge, measured in C
v = the velocity of the charge relative to the magnetic field, measured in m s–1
B = the strength of the magnetic field, measured in T
θ = the angle between the velocity of the charge and the magnetic field, measured in degrees (°)

Note: Students may want to remember ‘qvB’ as ‘Queen Victoria Building.’
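The rule wraps naturally into a small helper function; the following is our own sketch (not from the textbook), and the example call reproduces Example 1(a) below:

```python
import math

def magnetic_force(q, v, B, theta_deg):
    """Magnitude of F = qvB sin(theta), in newtons."""
    return abs(q) * v * B * math.sin(math.radians(theta_deg))

# Example 1(a) below: a proton at 6.00 m/s, 30 degrees to a 2.40 T field.
print(magnetic_force(1.602e-19, 6.00, 2.40, 30.0))  # ~1.15e-18 N
```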


Direction

The direction of the force acting on a charge as it moves through a magnetic field can be determined by applying the right-hand palm rule.


Definition

The right-hand palm rule states: when the thumb of the right hand points to the direction in which the positive charge is moving (or to the opposite direction to which the negative charge is moving), and the fingers point to the direction of the external magnetic field, then the palm pushes in the direction of the force (see Fig. 10.6).

Figure 10.6  Applying the right-hand palm rule to moving charges

Note: Recall that a negative charge moving in one direction is equivalent to a positive charge moving in the opposite direction.

Example 1

Determine the size and direction of the forces acting on the following charges:

(a) [Diagram: a proton moving at 6.00 m s–1 enters a magnetic field B = 2.40 T (directed to the right) at 30° to the field]

Solution

F = qvB sin θ
q = charge of a proton = 1.602 × 10–19 C
v = 6.00 m s–1
B = 2.40 T
θ = 30°
∴ F = (1.602 × 10–19) × 6.00 × 2.40 × sin 30° ≈ 1.15 × 10–18 N
Direction: into the page; that is, the thumb points in the direction of v, the fingers point to the right, and the palm pushes into the page.


(b) [Diagram: an electron moving at 1.00 × 106 m s–1 perpendicular to a magnetic field B = 10.0 T]

Solution

F = qvB sin θ
q = charge of an electron = –1.602 × 10–19 C (negative)
v = 1.0 × 106 m s–1
B = 10.0 T
θ = 90°, since the pathway of the e– is ⊥ to the field lines
∴ F = (1.602 × 10–19) × (1.0 × 106) × 10.0 × sin 90° ≈ 1.6 × 10–12 N
Direction: down the page.

(c) [Diagram: a positive charge of 4.0 C moving at 150 m s–1 parallel to a magnetic field B = 1.8 T]

Solution

F = qvB sin θ
θ = 0°, since the pathway of the charge is parallel to the magnetic field lines
∴ F = 4.0 × 150 × 1.8 × sin 0° = 0 N

(d) [Diagram: a doubly charged positive particle moving at 10 m s–1 into the page, between magnetic poles with B = 1.0 T directed from N to S (left to right)]



Solution

F = qvB sin θ
q = 2 × charge of a proton = 2 × 1.602 × 10–19 C
θ = 90°
∴ F = (2 × 1.602 × 10–19) × 10 × 1.0 × sin 90° ≈ 3.2 × 10–18 N
Direction: down, because the magnetic field runs from N to S, so the fingers point to the right and the thumb points into the page; the palm pushes down.

Simulation: magnetic forces

Example 2

A helium nucleus is moving to the right at a constant velocity. As it passes through a magnetic field with a strength of 2.0 T perpendicularly, it experiences an upward force of 1.28 × 10–17 N.
(a) Determine the direction of this magnetic field.
(b) Calculate the velocity at which the helium nucleus is moving.

Solution

(a) Into the page; that is, with the thumb to the right and the palm pushing up, the fingers point into the page.
(b) F = qvB sin θ
F = 1.28 × 10–17 N
q = 2 × 1.6 × 10–19 C (since the helium nucleus has a double positive charge)
v = ?
B = 2.0 T
θ = 90°
v = F/(qB sin θ) = (1.28 × 10–17)/[(2 × 1.6 × 10–19) × 2.0 × sin 90°] = 20 m s–1, to the right.

Figure 10.7  Force on an electron inside a magnetic field

Example 3

Suppose a very strong magnetic field is directed into the page. An electron that is moving at a constant velocity enters this magnetic field from the right, as shown in Figure 10.7. Describe the subsequent motion of this electron.

Solution

The electron will describe a circle inside this magnetic field, as shown in Figure 10.7. The magnetic force (F = qvBsin θ) acting on the electron provides the centripetal force for this circular motion. Explanation: As the electron enters the magnetic field, the right-hand palm rule shows that it will experience an upward force that causes it



to curve up. If we then follow the charge and continue applying the right-hand palm rule on the charge as it curves inside the magnetic field, we will find that the magnetic force is always perpendicular to the direction in which the charge is travelling. By definition, a motion in which the forces always act perpendicularly to the linear velocity is a circular motion. Hence, it follows that the magnetic force provides the centripetal force for the circular motion described by this electron. Note: If the magnetic field is weak, then the electron will not describe a full circle within it; instead, it will curve up as an arc of a circle.
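Equating the magnetic force qvB to the centripetal force mv²/r (this is done quantitatively in section 10.5) gives the radius of the circle as r = mv/(qB). A short sketch with illustrative values of our own:

```python
# Radius of an electron's circular path in a uniform magnetic field.
m_e = 9.11e-31   # electron mass (kg)
q_e = 1.602e-19  # electron charge magnitude (C)
v = 1.0e6        # electron speed (m/s)
B = 1.0e-3       # magnetic field strength (T)
print(m_e * v / (q_e * B))  # ~5.7e-3 m; a weaker field gives a larger circle
```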

10.4

Debates over the nature of cathode rays: waves or particles?

n Explain why the apparent inconsistent behaviour of cathode rays caused debate as to whether they were charged particles or electromagnetic waves

Today, we know that the cathode rays produced by a CRT are beams of electrons. However, a little over 100 years ago, scientists were vigorously debating the nature of these cathode rays, particularly in deciding whether cathode rays were indeed waves (rays) or particles.

German scientists, including Heinrich Hertz, strongly believed cathode rays were waves. Hertz mistakenly 'proved' that cathode rays could not be deflected by electric plates because there was a small amount of gas in his CRT. In addition, the fact that cathode rays could cast shadows and be diffracted, as well as cause fluorescence, provided evidence for the wave nature of cathode rays.

On the other hand, English scientists strongly supported the particle nature of cathode rays. Experiments showed that cathode rays were able to charge objects negatively through interaction. Other experiments with paddle wheels showed cathode rays carried and were able to transfer momentum. These suggested cathode rays were particles.

Each group was desperately trying to prove the other party wrong, but could not provide strong and convincing evidence to support their own theory. The debate was eventually settled in 1897, when British scientist J. J. Thomson conducted his famous experiment to measure the charge to mass ratio of cathode rays, consequently proving the particle nature of cathode rays.

10.5

J. J. Thomson’s charge to mass ratio experiment n n

Explain that cathode ray tubes allowed the manipulation of a stream of charged particles Outline Thomson’s experiment to measure the charge to mass ratio of an electron NOTE: You should be able to confidently reproduce the content of this experiment and its implications, including sketching the diagrams.


Aim

The aim of the experiment is to measure the charge to mass (q/m) ratio of cathode rays.

Procedure

Before conducting the experiment, Thomson assumed that cathode rays were negatively charged particles and were emitted from the cathode. He set up a CRT similar to the one shown in Figure 10.8 (a) and (b). The experiment involves two parts.

Thomson’s apparatus

Part I: Finding an expression for the velocity of the cathode ray

■■ A beam of cathode rays is emitted at the cathode and is made to accelerate towards the multi-anode collimators to enter the main part of the tube, as shown in Figure 10.8 (a). This ensures the cathode ray that enters the main tube is fine and well defined.
■■ The beam keeps travelling in a straight line to reach the end of the tube and strikes the mid-point of the fluorescent screen.
■■ The electric field is turned on by switching on the voltage supply to the electric plates. The beam is deflected in the direction opposite to that of the electric field, say for this case up (see Fig. 10.8a). This idea was illustrated in Example 3 earlier in this chapter: electric charges experience a force inside an external electric field, which causes a deflection.
■■ The magnetic field is turned on by supplying a current to the coil. The current is directed so that the magnetic field produced by the coil deflects the cathode ray in the opposite direction to that imposed by the electric field, say for this case down (see Fig. 10.8a).
■■ The strengths of the electric and magnetic fields are adjusted so that the deflections created by each field exactly balance out. Consequently the beam will travel to the end of the tube undeflected, and will again hit the middle of the fluorescent screen. Thus the force acting on the cathode ray from the electric field can be equated to the force acting from the magnetic field; hence:
FE = FB
FE = qE and FB = qvB sin θ = qvB (since θ is 90°: the cathode ray is always perpendicular to the magnetic field)
Therefore: qE = qvB, so
v = E/B

Figure 10.8  (a) A CRT used by Thomson to measure the q/m of cathode rays. The cathode ray is emitted at the cathode, accelerated towards the anode collimators by a very high voltage supply, and enters the main tube (where the deflections are done) as a fine, well-defined beam; a coil produces the magnetic field. (1) When undeflected, the cathode ray travels straight to the mid-point of the fluorescent screen; this happens again when the deflection due to the electric field is balanced out by the deflection due to the magnetic field. (2) Deflection of the cathode ray due to the electric field. (3) Deflection due to the magnetic field.


Part II: Finding the charge to mass ratio of the cathode ray

■■ The electric field is then turned off and the magnetic field is left on. The cathode ray is deflected by the magnetic field only, and thus curves down in an arc of a circle as shown in Figure 10.8 (b). Note: This principle was outlined in Example 3 earlier in this chapter.
■■ Since the magnetic force (FB = qvB) provides the electron with the centripetal force:
Fc = FB
Fc = mv²/r (where r is the radius of the arc described by the cathode ray), and FB = qvB
Therefore:
mv²/r = qvB
mv = qrB
q/m = v/(rB)
Since v = E/B, therefore:
q/m = E/(rB²)
■■ The strength of the electric field (E) and the magnetic field (B) can be determined (by measuring the size of the applied voltage and current), and the radius r of the arc described by the cathode ray can be measured. Thus the charge to mass ratio (q/m) can be calculated.

Figure 10.8  (b) A CRT used by Thomson to measure the q/m of cathode rays: the magnetic field produced by the coil is left on, so the deflection in the main tube is due to the magnetic field alone; the cathode ray describes an arc of a circle, the radius of which can be easily measured.
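Numerically, the two-part procedure reduces to a few lines. The field and radius values below are illustrative numbers of our own (not Thomson's actual measurements), chosen so that q/m lands near the accepted electron value:

```python
# Thomson-style q/m calculation from crossed-field measurements.
E = 4.4e3   # electric field between the plates (V/m)
B = 5.0e-4  # magnetic field of the coil (T)
r = 0.10    # radius of the arc with the magnetic field alone (m)

v = E / B              # Part I: balanced fields give the beam speed, ~8.8e6 m/s
qm = E / (r * B ** 2)  # Part II: q/m = E/(r B^2), ~1.76e11 C/kg
print(v, qm)
```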

Conclusions from the experiment and their implications

■■ The experiment proved cathode rays were indeed (negatively charged) particles: The fact that the charge to mass ratio of cathode rays was successfully measured indicated that cathode rays had measurable mass, which in turn provided definitive evidence for the particle nature of cathode rays. (Waves do not have mass.) This effectively ended the debate over the nature of cathode rays.
■■ It showed that the particles had a large (negative) charge with very little mass (especially compared to alpha particles).
■■ It contributed to the discovery of electrons and the development of models of the atom: The results of the experiment laid the foundation for Thomson's discovery that cathode rays were in fact a new class of particles, later to be called electrons. The fact that the same charge to mass ratio was measured even when different materials were used as the cathode indicated that cathode rays (electrons) are common to all types of atoms. This was one piece of evidence that led Thomson to believe electrons were subatomic particles, and later to propose the 'plum pudding' model of the atom. (See Chapter 14.)
■■ It allowed the mass of electrons to be calculated: Millikan's famous oil drop experiment (not required by the syllabus) accurately determined the charge of electrons. Knowing the charge to mass ratio of electrons, the mass of electrons could then be easily calculated. Later, a similar idea was used to measure the charge to mass ratio of protons, from which other useful information was deduced.
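For instance, once the electron's charge was known from Millikan's experiment, combining it with the measured charge to mass ratio gives the electron's mass directly (a quick sketch using the accepted values):

```python
# Mass of the electron from its charge and charge-to-mass ratio.
q = 1.602e-19  # electron charge, from the oil drop experiment (C)
qm = 1.76e11   # charge-to-mass ratio, from a Thomson-style experiment (C/kg)
print(q / qm)  # ~9.1e-31 kg
```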


Useful websites
Further information on J. J. Thomson's experiments: http://dbhs.wvusd.k12.ca.us/webdocs/AtomicStructure/Disc-of-Electron-Intro.html
Hear J. J. Thomson speak of 'the electron': http://www.aip.org/history/electron/jjsound.wav

Properties of cathode rays

n Perform an investigation to demonstrate and identify properties of cathode rays using discharge tubes:
– containing a Maltese cross
– containing electric plates
– with a fluorescent display screen
– containing a glass wheel
– analyse the information gathered to determine the sign of the charge on cathode rays

CRT containing a Maltese cross, electric plates, fluorescent display screen, glass wheel

Even before J. J. Thomson had performed his experiment to measure the charge to mass ratio for cathode rays, scientists like William Crookes had built many high-quality CRTs to investigate the physical properties of cathode rays. Some of the physical properties of cathode rays are described below. Most of these properties can be easily demonstrated in school laboratories as shown in the photos above. These properties were stated or described long before the nature of cathode rays was identified. Now that we know that cathode rays are fast moving electrons, all these properties make a lot more sense.

first-hand investigation physics skills H14.1a, b, e, g, h


Risk assessment matrix

Note: You should remember how cathode rays would behave physically in each situation and be confident in reproducing the diagrams.


Figure 10.9  CRT containing a Maltese cross: a clear shadow of the cross forms at the far end, while the remainder of the tube shows green due to the cathode rays striking the glass and causing fluorescence

n Cathode rays are emitted at the cathode and travel in straight lines: This is shown by using a CRT containing a Maltese cross (see Fig. 10.9). The cathode rays illuminate the Maltese cross and cast a clearly defined shadow of it at the other end of the tube. Analogy: This property is similar to light, which is able to cast a clearly defined shadow because it travels in a straight line.
n Cathode rays can cause fluorescence: This is shown by using a CRT containing a background fluorescent material (see Fig. 10.10); as the cathode ray passes from the cathode to the anode, it causes this material to fluoresce and leave a clear trace of itself. Cathode rays are also able to cause the wall of the glass tube to glow, as shown in many of the other scenarios.

Figure 10.10  CRT with fluorescent background material: a faint green glow appears at the end of the tube as a result of cathode rays striking the glass

n Cathode rays can be deflected by magnetic fields: When a pair of bar magnets is placed next to the CRT from the previous example (see Fig. 10.11), the cathode rays are deflected as predicted by the right-hand palm rule.
n Cathode rays can be deflected by electric fields: Similarly, when a pair of electric plates is used (see Fig. 10.12), the cathode rays are deflected in the direction opposite to that of the electric field. Note: Hertz was not able to show cathode rays being deflected by an electric field, and this led him to believe cathode rays were waves.
n Cathode rays carry and are able to transfer momentum: This is shown by using a CRT containing a paddle wheel (see Fig. 10.13). As the cathode rays strike the paddle wheel, some of their momentum is transferred to the paddle, which makes the paddle wheel roll in the same direction as the cathode rays are travelling.
n Cathode rays are identical regardless of the type of material used as the cathode. Cathode rays can also facilitate some chemical reactions and expose photographic films.

Figure 10.11  Cathode ray deflected by an external magnetic field, as predicted by the right-hand palm rule
Figure 10.12  Cathode ray deflected by an external electric field
Figure 10.13  A CRT containing a paddle wheel: the paddle rolls along double rails as it receives momentum from the cathode rays




10.6

Applications of cathode ray tubes (CRTs): implementation

n Outline the role of:
– electrodes in the electron gun
– the deflection plates or coils
– the fluorescent screen
in the cathode ray tube of conventional TV displays and oscilloscopes

Having learnt a lot about CRTs and cathode rays, we can now turn to the implementations of CRTs, that is, how CRTs are used in common electrical appliances, such as cathode ray oscilloscopes and televisions.

Cathode ray oscilloscope

Standard cathode ray oscilloscopes (CROs) are commonly used in science to display the pattern and strength of electric signals in waveforms, from which useful measurements can be obtained. In school laboratories, CROs can be used to measure AC voltages, as mentioned in Chapter 7, or to study sound waves when a microphone is connected to them, as you may have done in the preliminary course. In more advanced settings, modified CROs were once used in radar systems. CROs are also used in electrocardiogram (ECG) machines, medical devices that display a person's heartbeats as electric signals. So how are CRTs modified in order to carry out these functions?

Figure 10.14  (a) A CRO: the electron gun (a cathode that emits a beam of cathode rays via thermionic emission, driven by a separate voltage supply, plus the grid and the anodes), the deflection system (Y plates and X plates) and the display screen


Figure 10.14  (b) CRO display screen: a pixel lights up when struck by the beam of cathode rays

A standard CRO consists of (see Fig. 10.14a):
■■ An electron gun, which emits a beam of cathode rays.
■■ A deflection system, which consists of two sets of parallel electric plates.
■■ A display screen that has on its inner surface materials that will fluoresce when struck by the cathode rays. The screen usually contains a grid that makes displays easier to read and measure.

Electron gun

As shown in Figure 10.14 (a), the electron gun resembles the one used in Thomson's experiment. A beam of electrons is emitted at the cathode and is accelerated towards the multiple anodes, and then travels into the deflection part of the tube as a fine, well-defined beam. Two additional features need to be mentioned here.

First, in addition to the high voltage supply across the cathode and anodes, a separate small voltage is supplied to the cathode to generate a current in it to heat it. The heated cathode then releases many free electrons. Once freed, these electrons can be accelerated towards the anodes with little effort. This technique is known as thermionic emission, and is utilised here to ensure a high electron density in the cathode ray. For this reason, devices like some CRTs are also named thermionic devices.

Second, there exists another electrode between the cathode and anodes, named the grid. Making the grid more positive or negative with respect to the cathode controls the number of electrons reaching the anodes, and hence striking the display screen, per unit time; this consequently controls the intensity of the cathode ray and thus the brightness of the display.

The deflection system in a CRO helps to manipulate the cathode ray so that useful information can be displayed on the screen. The deflection system of a CRO consists of two sets of parallel electric plates; one pair controls vertical deflections while the other pair controls horizontal deflections.
■■ Y plates: These plates are horizontal and thus are responsible for vertical deflections of the cathode ray. The voltage supply to these plates is an amplified copy of the external signal input; that is, the pattern and the range of deflection are directly related to the type and strength of the input signal.
■■ X plates: These plates are vertical and thus are responsible for horizontal deflections of the cathode ray. Unlike the Y plates, the voltage supply to these plates is normally independent of the external signal input; rather, the plates are controlled by inbuilt circuitry that supplies a time-based voltage. This signal deflects the cathode ray across the screen from left to right, and then the polarity of the plates quickly reverses so that the cathode ray is deflected swiftly from right to left (quickly enough that this is not seen). By adjusting the pattern of the time-based voltage, one can make the left-to-right deflection run at different speeds: really slowly across the screen as a dot, or so fast that the dot has the appearance of a line.


By superimposing vertical and horizontal deflections, the cathode ray will move up and down as well as from left to right across the screen; consequently a waveform display can be obtained.

Display screen

The screen contains many pixels made from fluorescent materials, or phosphors. When the fine beam of the cathode ray strikes a pixel, the pixel fluoresces, which allows the information to be displayed (see Fig. 10.14b). Once the beam moves on from a pixel, the fluorescence persists for a short time, then fades.

A CRO displays a signal as a voltage-time graph, and accurate measurements can be obtained with the help of the grid on the screen, a set of 1 cm by 1 cm squares. The vertical scale can be adjusted, for instance to 1 V/cm, 10 V/cm and so on. Horizontally, the grid represents time, which is also freely adjustable, for instance 1 ms/cm, 10 ms/cm and so on. By knowing these settings, as well as examining the waveform being displayed (i.e. counting the number of grid squares vertically and horizontally), useful information can be calculated: for instance, the amplitude of the wave from the number of vertical grid squares and the period of the wave from the number of horizontal grid squares.

If the input signal repeats itself with a suitable period, adjusting the time-based voltage allows the image formed by each new sweep of the beam of the cathode ray to be superimposed onto the previous image, so that a stable image can be viewed. This makes the interpretation of the display easier.

Note: Do not confuse 'pixels' and 'grids'.
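As a sketch of how such a reading works in practice (the settings and grid counts below are invented for illustration):

```python
# Turning CRO grid counts into measurements.
volts_per_cm = 2.0  # vertical setting (V/cm)
ms_per_cm = 5.0     # time-base setting (ms/cm)
peak_cm = 1.5       # trace rises 1.5 squares above the centre line
cycle_cm = 4.0      # one full cycle spans 4 squares horizontally

amplitude = peak_cm * volts_per_cm      # 3.0 V
period = cycle_cm * ms_per_cm / 1000.0  # 0.02 s
frequency = 1.0 / period                # 50 Hz
print(amplitude, period, frequency)
```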

Television

A television uses a CRT to form images. Its basic principle is similar to that of a CRO, but with a number of marked differences.

Electron gun

Generally, the electron gun used in a TV is similar to that used in a CRO. A black and white TV has only one electron gun, whereas a colour TV has three electron guns, one each for red, green and blue, the three primary colours for image formation. There is also a grid in each electron gun to control the brightness of the display. The grid, and the timing of turning a particular electron gun on or off, are controlled by the amplified electric signals captured by an antenna that receives signals from a TV station.

Deflection system

Unlike a cathode ray oscilloscope, the deflection system of a TV utilises magnetic fields that are created by current coils. Magnetic fields are used in this case to ensure more efficient and larger deflections. The deflection system is also controlled by the amplified electric signals derived from the antenna.

Display screen

Again, the screen consists of pixels (also known as phosphor dots), which fluoresce when struck by a beam of cathode rays. For a black and white TV, the single phosphor glows white and the intensity ranges from white—when a pixel fully lights up when struck by a beam of cathode rays with a maximal intensity—to dark (black)


when the beam intensity is reduced to zero; the intensity follows a grey scale. In a colour TV, each pixel has three sub-pixels of phosphor: one glows red when struck by a cathode ray, one glows green and the other glows blue. A shadow mask is employed in a colour TV to ensure the beam from each colour gun only hits the corresponding spot in each pixel; thus the shadow mask is essential for the correct image display in a colour TV. Essentially, the pixels in a colour TV screen only display the three primary colours; by changing the intensities of the sub-pixels, we can obtain the full range of colours we see every day.

Also, unlike a CRO, where a beam of cathode rays scans across the screen relatively slowly to trace out a single dot or line, the electron guns of a TV scan a series of horizontal lines across the entire screen 50 times a second. The odd lines of pixels are scanned first, followed by the even lines. The scanning pattern is called a raster. After it is struck by a beam of cathode rays, each pixel continues to shine for a short period of time, so the rapid scanning of the entire screen allows an image to be displayed on the entire TV screen without any discernible flickering.

Observing different striation patterns

first-hand investigation
PFAs: H1
Physics skills: H13.1 a, b, e

The different striation patterns produced by cathode ray (discharge) tubes with different interior air pressure

n Perform an investigation and gather first-hand information to observe the occurrence of different striation patterns for different pressures in discharge tubes

Procedure

Often a standard kit is provided in the school laboratory. The kit usually consists of five to six discharge tubes that are held vertically and parallel to each other by a stand. Each tube contains a different preset air pressure and is sealed permanently. The anodes of these discharge tubes are connected together to form one common outlet whereas the cathodes are separated. This means that during the operation of this kit, the positive terminal of the power source can remain attached to the anode, while attaching the negative terminal to each individual cathode selectively operates that particular discharge tube. As mentioned before, an induction coil is required for the operation of these discharge tubes.

Observations

The following observations can be made from each discharge tube:

'High' pressure tube (say 5 kPa)

Note: The usual air pressure at sea level is 1 atmosphere, or 101.3 kPa.

Purple streamers appear between the cathode and anode. With a slightly lower pressure tube, the streamers change to a gentle pink glow that usually fills the entire tube.


‘Medium’ pressure tube (say 0.1 kPa) As we move down to an even lower pressure tube, the pink glow starts to break down into alternating bright regions and dark regions. A typical example of this type of discharge is shown in Figure 10.15. From the cathode to the anode, the glows and dark spaces are named: Aston dark space, cathode glow, Crooke’s dark space, negative glow, Faraday’s dark space, positive column, anode glow and anode dark space. The cause of the glows and dark spaces: As a general rule, a glowing region is a result of the electrons (cathode rays) from the cathode carrying different energies. When they collide with the gas molecules in the tube, they cause excitation of these gas molecules, which then release EMR. A dark space not surprisingly is usually a result of electrons having insufficient energy (perhaps due to previous collisions) to excite the gas molecules. At even lower pressure, the dark spaces elongate, and the glow is usually faint. ‘Low’ pressure tube (say 0.02 to 0.04 kPa)

Figure 10.15  A discharge tube with a medium interior air pressure

When the tube with an extremely low air pressure is used, the glows in the tube completely disappear, that is, the dark spaces take over and fill the entire tube. Here the entire dark space is defined as the Crooke’s dark space. The only visible glow in this case is the green fluorescence on the glass wall at the anode region. The CRTs described in this chapter are typically at this pressure. Note: Although the tube is not producing striations at this pressure, this does not mean the cathode ray does not exist. Its pathway can be easily demonstrated by placing a fluorescent background inside the tube.

chapter revision questions

For all the questions in Chapter 10, take the electron to have a charge of −1.60 × 10–19 C and a mass of 9.11 × 10–31 kg. Take the proton to have a charge of +1.60 × 10–19 C and a mass of 1.67 × 10–27 kg.

1. Determine the size and the direction of the force acting on the following moving charges:
(a) [Diagram: a proton moving at 5.0 m s–1 at 35° to a magnetic field of 0.10 T]

(b) [Diagram: an electron moving at 7.5 m s–1 in a magnetic field of 3.0 × 10–2 T]


(c) [Diagram: a charge moving at 2.5 cm s–1 in a magnetic field B = 2.0 T]

(d) [Diagram: a Pb ion moving at 0.20 km s–1 in a magnetic field of 2.3 T]

2. (a) Why does a cathode ray tube (CRT) require a high voltage to operate?
(b) How would this high voltage be achieved?
(c) What are the safety issues involved?
3. Today we know that cathode rays are electrons. Why was it that, many decades ago, German scientists believed they were waves?
4. Draw the electric field pattern for the following cases:
(a) [diagram]
(b) [Diagram: a positive and a negative charge]


5. A charged particle is moving through a pair of charged parallel electric plates as shown in the diagram:
[Diagram: a particle of charge 3.2 × 10–4 C and mass 2.02 × 10–3 kg moving at 5.0 cm s–1 between parallel plates that are 2.5 cm apart and have 100 V across them]

(a) Determine the strength of the electric field produced by the electric plates.
(b) What is the net force acting on this charged particle, including gravity?
(c) Determine the acceleration of this charged particle.
6. The diagram below shows a CRT; a pair of parallel electric plates is placed inside this CRT.
[Diagram: inside the cathode ray tube, an electron moving at 1.0 × 106 m s–1 passes between parallel plates 50.0 mm apart with 150 V across them]

(a) What will be the required voltage to produce the velocity of the electron?
(b) Ignoring gravity, what is the force acting on this electron due to the electric field?
(c) What magnetic field (size and direction) would you need to balance the force due to the electric field?
7. An alpha particle is describing a circle inside a magnetic field that is directed into the page.

(a) Suppose this alpha particle is moving at 780 m s–1 and the strength of the magnetic field is 10.0 T. Determine the radius of the circle described by this alpha particle.
(b) If we want the alpha particle to leave the magnetic field at point O in the direction shown in the diagram, how can this be achieved by using a pair of parallel electric plates?
[Diagram: the alpha particle describes a circle inside the field; points A and O are marked on its path]


8. A mass spectrometer is used to measure the mass of a charged particle by using the deflection caused by a magnetic field. A schematic drawing is shown.
[Diagram: a mass spectrometer; the particle enters through collimators and follows a semicircular path to a detector]
A particle has a charge of 1.60 × 10–19 C. It is accelerated by the electric collimators to gain a velocity of 200 m s–1. It then enters the semicircular compartment, where a magnetic field deflects it so that it follows an arc of a circle to reach the detector.
(a) State the direction of the magnetic field that is required to give rise to this deflection.
(b) If the magnetic field strength is 2.5 T and the radius of the circle is 60 cm, what is the mass of the particle?

9. Describe the motion of the hydrogen ion as it enters the magnetic field as shown in the diagram.
[Diagram: a hydrogen ion entering a region of magnetic field]

10. (a) Outline J. J. Thomson's experiment to measure the charge to mass ratio of the cathode ray.
(b) Evaluate the impact of this experiment.

Answers to chapter revision questions


11. William Crookes designed many CRTs that enabled him to demonstrate the physical properties of cathode rays. With the aid of a diagram, describe:
(a) a tube that shows cathode rays travelling in a straight line
(b) a tube that shows cathode rays carrying momentum.
12. Describe how a cathode ray oscilloscope (CRO) is able to display an electrical signal in the form of a wave.
13. Describe two major differences between a CRO and a colour TV.


CHAPTER 11

From the photoelectric effect to photo cells

The reconceptualisation of the model of light led to an understanding of the photoelectric effect and black body radiation

Introduction

Whether light is a wave or a stream of particles had scientists puzzled for centuries. While at times only a wave model can explain the behaviour of light, the photoelectric effect can only be explained with a particle model. This seemingly contradictory nature of light led to the development of a whole new way of thinking—quantum physics. The study of black body radiation and the photoelectric effect makes a fascinating story; it challenged the scientists of the time to leave behind old ways of explaining the world and adopt ones that would lead them a long way into the future.

11.1

Electromagnetic radiation (EMR)

n Identify the relationships between photon energy, frequency, speed of light and wavelength: c = fλ

Definition

Electromagnetic radiation (EMR) consists of changing magnetic and electric fields that propagate perpendicularly to each other.

Note: EMR is also known as electromagnetic waves.

EMR exists in a number of different forms, each resulting from its unique frequency range, called wavebands. Some examples of EMR from the highest frequency to the lowest are: gamma rays, X-rays, ultraviolet (UV), visible light, infrared, microwaves, high-frequency radio waves and low-frequency radio waves. These comprise the EMR spectrum, which is summarised in Figure 11.1.

Figure 11.1  The EMR spectrum, from lowest frequency (longest wavelength) to highest frequency (shortest wavelength): low-frequency radio waves, high-frequency radio waves, microwaves, infrared, visible light (red, orange, yellow, green, blue, violet), ultraviolet (UV), X-rays and gamma rays


At the same time, all EMR shares the common property that it can propagate through a vacuum and travels at a constant speed of approximately 3.0 × 108 m s–1, that is, c.

Relationship between frequency and wavelength

Recall from the preliminary course that the product of the frequency and wavelength of a wave equals its velocity. Mathematically:

v = fλ

Where:
v = the velocity of the wave, measured in m s–1
f = the frequency of the wave, measured in Hz
λ = the wavelength of the wave, measured in m

In this context, since all EMR travels at the speed of light (c), v in the above equation is substituted with c; hence:

c = fλ

Where:
c = the speed at which light travels; it is a constant with a value of approximately 3.0 × 108 m s–1
f = the frequency of the wave, measured in Hz
λ = the wavelength of the wave, measured in m


■■ Solve problems and analyse information using: c = fλ

Example 1

A particular coloured light has a wavelength of 700 nm. What is its frequency?

Solution

c = fλ
λ = 700 × 10–9 m = 7.00 × 10–7 m
f = c/λ = (3.0 × 108)/(7.00 × 10–7) ≈ 4.3 × 1014 Hz

Example 2

Humans can see light frequencies ranging from 3.75 × 1014 Hz to 7.5 × 1014 Hz. What is the range of wavelengths humans can see?


Solution

c = fλ
When f = 3.75 × 1014 Hz:
λ = c/f = (3.0 × 108)/(3.75 × 1014) = 8.0 × 10–7 m
When f = 7.5 × 1014 Hz:
λ = c/f = (3.0 × 108)/(7.5 × 1014) = 4.0 × 10–7 m
∴ the range of wavelengths humans can see is from 4.0 × 10–7 m to 8.0 × 10–7 m.
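Both rearrangements of c = fλ are one-line calculations; a quick numerical check of Examples 1 and 2 (our own sketch):

```python
# c = f * wavelength, rearranged both ways.
c = 3.0e8           # speed of light (m/s)
print(c / 700e-9)   # frequency of 700 nm light, ~4.3e14 Hz
print(c / 3.75e14)  # wavelength at 3.75e14 Hz, ~8.0e-7 m
print(c / 7.5e14)   # wavelength at 7.5e14 Hz, ~4.0e-7 m
```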

Example 3

In a red shift, it is observed that the frequency of EMR is decreased by 10%. By what percentage is its wavelength increased? Solution

Let the original frequency be f and wavelength be λ, and the new frequency be f′ and the new wavelength be λ′.
Since fλ = c and f′λ′ = c:
fλ = f′λ′
Also f′ = f – 0.1f (10% = 0.1) = 0.9f
∴ fλ = (0.9f)λ′
λ′ = λ/0.9 ≈ 1.11λ
∴ the new wavelength is 111% of the original wavelength; that is, it has increased by 11%.

EMR and charged particles

As we have discussed, a stationary charge produces its own electric field, and a charge that is moving at a constant velocity produces a magnetic field. So what about accelerating charges? It is essential to remember: An accelerating or oscillating charge produces EMR. The reverse is also true: EMR can cause charges to accelerate or oscillate.


PFA H1 History of Physics: 'Evaluates how major advances in scientific understanding and technology have changed the direction or nature of scientific thinking'


Hertz’s discovery of radio waves and his measurement of their speed n

Outlines qualitatively Hertz’s experiments in measuring the speed of radio waves and how they relate to light waves

Why was this discovery a major advance in scientific understanding?

At the time of Hertz's discovery, the scientific community had available to it James Clerk Maxwell's equations for EMR. These equations mathematically predicted the existence of other, unknown forms of EMR which should behave in similar ways to light, but differ in wavelength (and therefore frequency). Hertz's discovery was the first of its kind to identify the nature of another form of EMR.

How did it change the direction or nature of scientific thinking?

PFA scaffold H1: Mapping the PFAs

Once Hertz had identified radio waves, which behaved as Maxwell’s equations predicted, the search was on for yet other unidentified forms of EMR—UV, X-rays, microwaves (often referred to as a form of radio waves) and gamma rays, all of which were subsequently discovered. Within a few years, Hertz’s radio waves were being put to use by Marconi for communication purposes.

Evaluation of Hertz’s discovery of radio waves and measurement of their speed Hertz’s discovery was the first of many to verify the existence of what is now known as the electromagnetic spectrum. Maxwell’s equations were shown to be correct. The many uses of the other forms of electromagnetism could then be developed. Thus Hertz’s discovery was a profound step in this area of scientific research and endeavour. In honour of Hertz, the unit for frequency was named after the discoverer of radio waves.


Useful websites
A short article containing photos of Hertz's apparatus: http://www.sparkmuseum.com/BOOK_HERTZ.HTM
Articles that take one beyond Hertz's initial discovery: http://www.britanica.net/nobelprize/article-25129

11.2

Simulation: electromagnetic waves

Hertz's experiment: production and reception of EMR

n Describe Hertz's observation of the effect of a radio wave on a receiver and the photoelectric effect he produced but failed to investigate

German physicist Heinrich Hertz was the first person (1888) to conduct experiments to produce and investigate EMR after its existence was proposed by James Maxwell. Hertz was aiming to produce EMR other than visible light and determine its properties to see whether they agreed with Maxwell’s early theoretical predictions. The experimental apparatus is schematically represented in Figure 11.2.


Figure 11.2  Hertz's experiment: an induction coil driven by a power source feeds the primary loop, where a spark is generated across a gap; parabolic plates focus the EMR produced, and matching parabolic plates at the receiving coil re-focus the received EMR, where a fainter spark is generated

Note: An induction coil (above) is used to step up the DC voltage. As discussed before, high voltage is required in order to allow the electrons to jump across the air gap. As electricity is conducted through the air, a spark is produced.

The procedure of Hertz’s experiment 1. Production of the EMR: The current that was fed into the primary loop from the induction coil oscillated back and forth. This oscillation of charges (accelerating electrons) in the primary loop generated EMR (in this case a radio wave), which was emitted at the gap. There was also a spark generated across the gap as charges were conducted through the air. 2. Transmission of the EMR: The EMR (radio wave) was focused by the parabolic plates and travelled to the receiving coil. 3. Reception of the EMR: The EMR (radio wave) was again focused at the receiving coil. The EMR caused the electrons in the receiving coil to oscillate, thus regenerating the electric signal that was used in the primary loop, although much weaker. The oscillation of charges in the receiving coil also generated a spark across the air gap, although it was much fainter than the one in the primary loop due to the energy lost during transmission.

Measuring the speed of the EMR produced

n Outline qualitatively Hertz's experiments in measuring the speed of radio waves and how they relate to light waves

Hertz was able to determine the speed of the EMR he produced by measuring its frequency and wavelength (since v = f λ). The frequency of the EMR must be identical to the frequency of the oscillation of the electric current, which could be predetermined. The wavelength was determined by taking measurements from the interference pattern generated by allowing the newly produced wave to take two slightly different pathways and recombining them at the receiving coil. Hertz calculated that the speed of the newly generated EMR was the same as that of light.

Other conclusions from the experiment

Not only did the newly produced EMR have the same speed as light, Hertz also showed that this radiation had all the other properties of light, such as reflection, refraction, interference and polarisation. Hence he concluded (and verified Maxwell's


earlier prediction) that there exists a whole spectrum of EMR which all travel at the speed of light; the EMR he produced and light are just two out of many members of this spectrum.

One other important observation made by Hertz

During his experiment, Hertz also observed that the intensity of the spark in the receiving coil faded considerably when the receiving coil was placed inside a dark box. To verify this, he also showed that by illuminating the receiving coil with a light source, a more intense spark was generated, with UV producing the most intense spark. Although Hertz recorded these experimental observations, he failed to further investigate this 'mysterious' phenomenon. It was found later that what Hertz had observed was the phenomenon known as the photoelectric effect. So what is the photoelectric effect?

11.3

The photoelectric effect

Definition

The photoelectric effect is the phenomenon in which a metal surface emits electrons when struck by EMR with a frequency above a certain value.

[Diagram: photons striking a sodium metal surface eject electrons from the surface]

Consider the following example:

Example 1

Two electroscopes are set up. One is charged negatively and the other positively, so that their leaves are widely separated. Each electroscope has a piece of pure zinc placed on its top. When a UV lamp is used to illuminate the zinc metal, the leaves of the negatively charged electroscope collapse very quickly, whereas the leaves of the positively charged electroscope remain almost unaltered. Account for what happened.

Figure 11.3  (a) A negatively charged electroscope illuminated by UV: before illumination the leaves are widely open due to the repulsion of the negative charges; after illumination the leaves collapse, owing to the loss of the excess negative charges as electrons are emitted as a result of the photoelectric effect
Figure 11.3  (b) A positively charged electroscope illuminated by UV: the leaves are widely open due to the repulsion of the positive charges, and remain open

Solution

When the UV lamp illuminates the negative electroscope, the UV light has enough energy to cause the zinc metal, hence the electroscope, to emit free electrons—the photoelectric effect (see Fig. 11.3a). As the excessive negative charges on the leaves are dissipated quickly by the release of the electrons, the leaves of this electroscope collapse very rapidly. On the other hand, for the positive electroscope, UV light does not have sufficient energy to free electrons from an already positively charged surface (due to electrostatic attraction). Hence the position of the leaves remains almost unaltered (see Fig. 11.3b).

Note: Even if the photoelectric effect did take place in this case, losing electrons from a positively charged electroscope would not help to collapse the leaves but would open them even wider.

Explanation for Hertz’s observation So how can we use the photoelectric effect to account for Hertz’s observation? When the receiving coil is illuminated by UV free electrons are emitted at the terminal of the receiving coil as a result of the photoelectric effect. Once freed, these electrons can be much more readily accelerated back and forth in the air gap by the voltage generated in the receiving coil, resulting in a stronger spark at the receiving coil. Placing the receiving coil inside a dark box means the UV light is blocked; hence, the photoelectric effect cannot occur. Consequently, significantly fewer free electrons are released and accelerated across the gap in the receiving coil, thus the spark is fainter. Illuminating the gap of the receiving coil with light facilitates the photoelectric effect, increasing the intensity of the spark. It is now known that while UV light can cause the photoelectric effect, visible light or other EMR with frequencies less than that of UV light do not have enough energy to cause the photoelectric effect (See later sections). Thus not surprisingly, UV light produces the most intense spark.

More about the photoelectric effect

After the photoelectric effect was discovered, many experiments were carried out to demonstrate its properties. Some results are:
1. The photoelectric effect only happens if the EMR used has a frequency above a certain value.
2. The maximum kinetic energy of the photoelectrons emitted depends on the frequency of the EMR used, not its intensity.
Note: Photoelectrons refers to the electrons released as a result of the photoelectric effect.



3. Once the right frequency is achieved, emission of photoelectrons is instantaneous. If this frequency is not achieved, no matter how long a metal surface is illuminated, no photoelectrons will be emitted.
4. An increase in the intensity of the EMR used will result in a larger photocurrent.

Animation: the photoelectric effect

Note: Photocurrent refers to current resulting from moving photoelectrons.

Based on classical physics, scientists at the time could not offer any satisfactory theoretical explanation for these experimental observations.

Note: Classical physics means traditional physics; this term is further explained in the next section.

Thus before we can offer a satisfactory explanation for the photoelectric effect, we must first examine a new area of physics, known as quantum physics.

11.4

Quantum physics

Classical physics is traditional physics, which relies heavily on the contributions made by Isaac Newton, so it is sometimes called Newtonian physics. In such physics, all quantities are considered to be continuous and can take any value within a certain range. Classical physics is useful in describing macroscopic physical phenomena, such as the motion of a satellite or the torque of a motor.

Quantum physics was introduced at the end of the 19th century. In quantum physics, quantities are considered to have discrete or non-continuous values, and have a limited number of values that they can take. Quantum physics is essential in describing microscopic physical phenomena, such as the photoelectric effect, energy levels of electrons, properties of a nucleus, and so on. (For more on quantum physics, please refer to From Quanta to Quarks.)

Which physics theory is correct? The answer is that they are both correct, as they are theories proposed to explain certain phenomena in certain situations. It is more correct to identify one as the supplement to the other. Black body radiation is a good example of quantum physics.

11.5

Black body radiation and the black body radiation curve

n Identify Planck's hypothesis that radiation emitted and absorbed by the walls of a black body cavity is quantised
n Identify the relationships between photon energy, frequency, speed of light and wavelength: E = hf


Definition

A black body need not be 'black'; rather, by definition, a black body is an object that can absorb and/or emit energy perfectly.

When a black body is heated to some temperature in a vacuum, for example by electric heating, it starts to emit radiation perfectly, known as black body radiation. This radiation can cover the entire range of the EMR spectrum, with the intensity varying with the wavelength. If the individual wavelengths of this radiation are detected and the corresponding intensities measured experimentally, the data can then be plotted as 'intensity' versus 'wavelength' to produce a black body radiation curve. Three black body radiation curves are shown in Figure 11.4, each obtained by heating the black body to a specific temperature.

Figure 11.4  Three black body radiation curves: intensity of the radiation plotted against wavelength

A few important trends need to be emphasised for the black body radiation curves:
■■ The black body radiation curve for a given temperature will have a peak, which represents the wavelength with the highest intensity.
■■ When the temperature is increased, the height of the entire curve is increased. It is also important to note that the position of the peak is shifted towards smaller wavelengths (higher frequencies). This explains why a cool star emits mostly infrared and appears reddish, whereas a very hot star emits mostly UV and appears blue.

The ‘UV catastrophe’, or ‘left-hand catastrophe’ The black body radiation curves shown in Figure 11.4 are obtained empirically by plotting experimental measurements. However, when scientists at the time tried to apply mathematics to the black body radiation in an attempt to derive the black body radiation curve theoretically, they found inconsistencies. The righthand side of the curve agreed with the one derived experimentally. However, the theoretical curve had no peak, rather it approached infinity as the graph approached small wavelengths (high frequencies) (see Fig. 11.5). This is known as the ‘UV catastrophe’.

Figure 11.5  The UV catastrophe: the theoretically derived curve does not peak but approaches infinity at short wavelengths, unlike the experimental black body radiation curve

The revolutionary quantum hypothesis by Max Planck

To help to theoretically derive the black body radiation curve, German scientist Max Planck


proposed a radical hypothesis, known as Planck's hypothesis of black body radiation, which states: The radiation emitted from a black body is not continuous as waves—it is emitted as packets of energy called quanta (photons). The energy of these quanta or photons is related to their frequencies by the equation:

E = hf

Where:
E = the energy of each quantum or photon, measured in J
h = Planck's constant, with a value of 6.626 × 10–34 J s
f = the frequency of the radiation, measured in Hz

Planck’s quantum hypothesis for black body radiation was revolutionary; he simply proposed this hypothesis in order to mathematically derive the black body radiation curve. The proposal violated many physical laws of the time, and thus was not well supported by the scientific community. Even Planck himself was not convinced his hypothesis was correct. Note: Planck’s hypothesis led to the idea that energy is quantised, which is widely accepted today and also forms the basis of modern quantum physics.

■■ Solve problems and analyse information using E = hf

Example 1

A particular light wave has a wavelength of 420 nm. What is the energy of each of its photons? Solution

E = hf
h = 6.626 × 10–34 J s
c = fλ, so f = c/λ = (3.0 × 108)/(4.2 × 10–7) ≈ 7.14 × 1014 Hz

∴ E = (6.626 × 10–34) × (7.14 × 1014) ≈ 4.73 × 10–19 J


Example 2

If the frequency of an AM radio wave is 1000 kHz, what will be the energy of each photon? Solution

E = hf
h = 6.626 × 10–34 J s
f = 1000 × 1000 = 106 Hz
∴ E = (6.626 × 10–34) × (106) ≈ 6.626 × 10–28 J
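A quick numerical check of both photon-energy examples (our own sketch):

```python
# Photon energy E = hf, or E = hc/wavelength when the wavelength is given.
h = 6.626e-34          # Planck's constant (J s)
c = 3.0e8              # speed of light (m/s)
print(h * c / 420e-9)  # 420 nm photon (Example 1), ~4.73e-19 J
print(h * 1.0e6)       # 1000 kHz radio photon (Example 2), ~6.6e-28 J
```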

11.6

Particle nature of light

n Explain the particle model of light in terms of photons with particular energy and frequency

From our previous study, we have learnt that light can be reflected, refracted, diffracted, made to interfere and polarised, which undoubtedly proves light is a transverse wave. However, based on the quantum hypothesis proposed by Planck, the energy of light is quantised and comes as packets; this suggests light is composed of particles. This phenomenon, whereby light can behave as both waves and particles at the same time, is known as the wave-particle duality of light. (This idea can also be generalised to matter; see From Quanta to Quarks.) Each light particle, or photon, possesses an amount of energy related to the frequency of the light wave, as described by the equation E = hf.


Simulation: models of light: electromagnetic spectrum

Note: Many students wonder how light can be particles and waves at the same time—which one is more correct? The answer is that both are correct. Generally, it is important to set aside everyday intuition when dealing with abstract concepts like this. Whether light is waves or particles, both are models we created that apply to certain situations but not others. When we deal with reflection and refraction, light is a wave; when we deal with the photoelectric effect (see below) or the fact that light can be influenced by gravity, light is particles. These two models certainly do not conflict with each other; rather, they work conjointly to allow us to describe all the behaviours of light.

Einstein’s explanation for the photoelectric effect: a quantum physics approach n

11.7

Identify Einstein’s contribution to quantum theory and its relation to black body radiation

As mentioned before, the phenomenon of the photoelectric effect cannot be explained by classical physics. In 1905, Albert Einstein combined Planck’s hypothesis that the energy of radiation was quantised and the particle model of light to explain the photoelectric effect, as follows:


1. Light behaves like particles called photons, each carrying a discrete package of energy. The energy of the photon is related to its frequency by E = hf. The collisions between photons and electrons lead to the photoelectric effect.
2. Only photons with energy above the work function (W) of a metal can cause the photoelectric effect. The work function is defined to be the minimum energy required to free electrons from the metal surface, and is different for different metals. The minimum frequency (which determines the energy) the light must have to cause the photoelectric effect in a metal is called the threshold frequency for that metal. The kinetic energy of the photoelectrons released is determined by the difference between the energy of the photon (hf) and the work function (W) of the metal: Ek = hf – W.
3. A photon can transfer either all of its energy to an electron, or none. If the frequency of the photon is less than the threshold frequency, its energy, even if transferred to an electron, will not be sufficient for the electron to leave the surface.

A graphic analysis

Plotting the maximum kinetic energies of photoelectrons emitted against the frequencies of the illuminating EMR for one particular metal surface (a fixed work function) will result in a graph like that shown in Figure 11.6. Comparing Ek = hf – W with the standard line equation y = mx + b shows that the slope corresponds to h and the y-intercept to –W.

Figure 11.6  Maximum kinetic energy of the photoelectrons (J) versus frequency of the EMR (Hz): a straight line with slope h, x-intercept at the threshold frequency (W/h) and y-intercept at –W (whose absolute value equals the work function)

A few important features of the graph need to be emphasised:
■■ The slope of the graph is equal to Planck's constant h. If graphs are drawn for other metals with different work functions, all those graphs will be parallel to each other, as they all share the same slope h.
■■ The x-intercept of the graph is equal to the threshold frequency. If a metal with a larger work function is used, this point shifts to the right, indicating that a higher threshold frequency is required.
■■ The absolute value of the y-intercept of the graph is equal to the work function of the metal. If a metal with a larger work function is used, then the y-intercept shifts downwards.
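These features can be verified numerically; the sketch below borrows the work function used in the worked example that follows:

```python
# The Ek-versus-f line: slope h, x-intercept W/h, y-intercept -W.
h = 6.626e-34        # Planck's constant (J s)
W = 4.53e-19         # work function (J)
f_threshold = W / h  # x-intercept, ~6.8e14 Hz

def Ek(f):
    """Maximum kinetic energy of photoelectrons for f above threshold."""
    return h * f - W

print(f_threshold, Ek(8.57e14))  # ~1.15e-19 J at 8.57e14 Hz
```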

Figure 11.7  Measuring the kinetic energy of photoelectrons (labels: EMR strikes a cathode with a large surface area inside an evacuated tube; photoelectrons travel to the anode (collector); an ammeter registers the current; an opposing voltage makes the collector negative)

Example

When light with a frequency greater than the threshold frequency shines on a photoelectric cathode embedded in an evacuated tube, electrons will be emitted as expected (see Fig. 11.7). ('Photoelectric' here refers to a material that readily undergoes the photoelectric effect to release electrons.) These electrons will have a specific maximum kinetic energy and in this case will move away from the cathode across the vacuum tube to reach the collector. This causes a photoelectric current to flow.

One way of actually measuring the kinetic energy of the photoelectrons experimentally is by making the collector negative enough to repel the photoelectrons, preventing them from reaching the collector. This can be done by inserting a power source into the circuit


with the correct orientation, as shown by the dashed lines in Figure 11.7. The consequence of this is that the photocurrent in the circuit drops to zero. The minimum voltage the power source needs to supply in order to make this happen is known as the stopping voltage. The stopping voltage correlates directly with the kinetic energy of the photoelectrons: a larger stopping voltage means more work needs to be done to stop the photoelectrons reaching the collector, which in turn reflects a higher kinetic energy.

(a) Suppose the incident EMR has a wavelength of 350 nm and the photoelectric cathode has a work function of 4.53 × 10–19 J; calculate the maximum kinetic energy of the photoelectrons released.
(b) Find the maximum velocity of these photoelectrons.
(c) How big should the stopping voltage be in this case?

Solution

(a) Ek = hf – W
    h = 6.626 × 10–34 J s
    f = c/λ = (3.0 × 108)/(3.5 × 10–7) ≈ 8.57 × 1014 Hz
    W = 4.53 × 10–19 J
    ∴ Ek = (6.626 × 10–34) × (8.57 × 1014) – 4.53 × 10–19 ≈ 1.15 × 10–19 J

(b) The kinetic energy of these photoelectrons is related to their velocity by the equation Ek = ½ m v2:
    Ek = 1.15 × 10–19 J
    m = mass of the electron = 9.109 × 10–31 kg
    ∴ v = √(2Ek/m) = √(2 × (1.15 × 10–19)/(9.109 × 10–31)) ≈ 5.02 × 105 m s–1

(c) In order to stop the photoelectrons reaching the collector, the power source must do an amount of opposing work that is at least equal to the kinetic energy of these photoelectrons. The work done by any electric system can be defined as the product of the voltage and the charge, that is, W = qV. Hence:
    Ek = Work(opposing) = qVstop
    Ek = 1.15 × 10–19 J
    q = charge of the electron = 1.602 × 10–19 C (negative)
    ∴ Vstop = Ek/q = (1.15 × 10–19)/(1.602 × 10–19) ≈ 0.72 V

NOTE: If Vstop is known (measured) then the kinetic energy of the photoelectrons can be determined experimentally.
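The arithmetic in this example is easy to check with a few lines of code. The Python sketch below (our own, not part of the text) reproduces parts (a) to (c) of the example.

import math

h = 6.626e-34    # Planck's constant (J s)
c = 3.0e8        # speed of light (m s^-1)
e = 1.602e-19    # elementary charge (C)
m_e = 9.109e-31  # electron mass (kg)

wavelength = 350e-9  # wavelength of the incident EMR (m)
W = 4.53e-19         # work function of the cathode (J)

f = c / wavelength           # frequency of the incident EMR
Ek = h * f - W               # (a) maximum kinetic energy, Ek = hf - W
v = math.sqrt(2 * Ek / m_e)  # (b) maximum velocity, from Ek = 1/2 m v^2
V_stop = Ek / e              # (c) stopping voltage, from Ek = qV

print(f"Ek = {Ek:.3e} J, v = {v:.2e} m/s, V_stop = {V_stop:.2f} V")
# Matches the worked answers: about 1.15e-19 J, 5.0e5 m/s and 0.72 V.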



Simulation: the photoelectric effect

Using Einstein’s explanation to investigate the photoelectric effect In section 11.3 (page 196) we listed some properties of the photoelectric effect and, as mentioned in the same section, these properties cannot be explained by classical physics. However, it is easy to explain them using Einstein’s quantum mechanical approach. Most of the answers lie in Einstein’s equation: Ek = hf – W. In order for the photoelectric effect to take place, hf needs to be larger than W (in order to make Ek positive); since h is a constant, it follows that f needs to be above a certain value. Once photoelectrons are emitted, what determines their kinetic energy is the frequency of the incident EMR and the value of the work function. Since intensity is not part of the equation, it plays no role in determining the kinetic energy of the photoelectrons. The all or none principle also has its own important consequences. If the photons have the right energy (thus frequency) to cause a photoelectric effect, they may transfer all their energy instantaneously, so emission of the photoelectrons is instantaneous. However, if the photons have insufficient energy, then no energy is transferred. This effectively means there is no accumulation of the energy level of the electrons. Consequently no matter how long they are illuminated by the photons, their energy will never exceed the threshold value to cause a photoelectric effect. Although intensity has no effect on the kinetic energy, it does determine the number of photoelectrons released per unit time, if the frequency is above the threshold value. This is because intensity is a measure of how many photons are received per unit time. Higher intensity means more photons are bombarding the electrons, which means more photoelectrons are emitted. Since current is defined as q the number of charges passing through a point in a second (recall: I = ), the more t photoelectrons, the higher the current. Note: However, it is important to note that if the frequency is below the threshold frequency, then the intensity has no influence on the photoelectric effect.


Einstein’s contributions to quantum physics and black body radiation n

Identify data sources, gather, process and analyse information and use available evidence to assess Einstein’s contribution to quantum theory and its relation to black body radiation

Planck is believed to have been the initiator of quantum physics. When Planck first proposed the idea of the quantisation of energy, it was thought to be radical and even Planck himself could not be convinced that this was true. However, when Einstein ‘borrowed’ this idea and used it to successfully explain the photoelectric effect, it provided convincing evidence to back up this radical hypothesis. Einstein’s idea of the quantisation of the energy of light led many scientists at the time to realise there was a whole new area of physics opening up.


Later, Millikan performed an experiment to analyse the relationship between the frequencies of the incident EMR and the kinetic energies of the photoelectrons released by different metal surfaces, plotting the results as shown in Figure 11.6. Not only was he able to verify Einstein's equation for the photoelectric effect, he was also able to determine a more precise value for h by examining the gradient of the line. This was the first time Planck's constant h had been derived directly from experiment. Before that, the value of h could only be determined empirically, by fitting the mathematically derived black body radiation curve to the one obtained experimentally (in other words, by trial and error). This further strengthened the connection between the photoelectric effect, the black body radiation curve and Planck's hypothesis, which together form the heart of quantum physics.

Applications of the photoelectric effect: the implementation

■ Identify data sources, gather, process and present information to summarise the use of the photoelectric effect in photocells
■ Identify data sources, gather, process and present information to summarise the effect of light on semiconductors in solar cells

We turn now to the implementation side of the photoelectric effect. Two devices will be examined: photocells and photovoltaic (solar) cells.

secondary source investigation

Photocells

Definition

Photocells are electronic devices with resistances that alter in the presence of light.

A fundamental photocell, called a phototube, consists of a low-pressure glass bulb in which are embedded an anode and a large cathode coated with a photoelectric material. Figure 11.8 shows a schematic representation of a phototube.

Figure 11.8  The structure of a phototube (labels: light; cathode coated with a light-sensitive material, with a large surface area to maximise the photoelectric effect; anode; low-pressure glass bulb; connection to a circuit; electrons emitted as light strikes the photosensitive cathode (the photoelectric effect); free electrons are easily accelerated towards the anode to increase the conductivity of the phototube)

Functional principle

When a photocell is connected to a circuit, the gap between the cathode and anode means that the resistance it


develops is infinite; consequently, no current can flow in the circuit despite there being a supplied voltage. When light shines on the light-sensitive cathode, electrons are emitted as a result of the photoelectric effect. These free electrons can conduct electricity quite easily from the cathode to the anode, which consequently lowers the resistance of the photocell. As a result, a current starts to flow in the circuit. This current then triggers another functional system, such as an alarm, usually through an amplifying electric circuit.

Common uses of photocells

(Photographs: a phototube, showing its photo-sensitive area; automatic doors)

Generally, photocells are used when electronic circuits need to be switched on or off by light. Some examples include:
■■ Alarm systems in houses: for instance, thieves break into a house and turn on the lights, which triggers the photocell and causes the alarm to sound.
■■ Automatic doors: infrared light emitted from the sensor is reflected from approaching objects and triggers the photocell, which then controls the opening of the doors.
■■ Door alarms for shops, which beep when customers come into the shop.

Photovoltaic (solar) cells

These are discussed in Chapter 12, along with semiconductors and solid state devices.

Can science be set free from social and political influences? Einstein and Planck's views

secondary source investigation

■ Process information to discuss Einstein and Planck's differing views about whether science research is removed from social and political forces

Note: Neither Einstein nor Planck made any personal statement that directly answers the above question. It is therefore necessary to examine their life stories and reach an answer based on each person's contributions, both scientific and social.

Einstein and Planck were contemporaries and they were also friends. However, their very different socio-economic backgrounds and personalities led to their distinct views on the relationship between science and politics.

Einstein

Einstein is considered one of the greatest physicists who ever lived. His special theory of relativity (discussed in Chapter 4), his general theory of relativity (not required by the syllabus) and his investigation of the photoelectric effect all had profound impacts on the way humans perceive the Universe.


Not only did he devote his life to physics, he was also a strong believer in pacifism: he opposed wars and violence. Indeed, he spent just as much time studying and preaching pacifism as he did on physics research. Einstein was a politically active man. He openly criticised German militarism during World War I, and after the war he constantly moved around the world to give lectures on physics and, more importantly, to preach his pacifist ideals and promote world peace. However, an irony of his later life, after he emigrated to the US in the 1930s, was the famous letter he wrote to the US president, Franklin Roosevelt, to convince him to set up a project to build nuclear bombs (later known as the Manhattan Project), which ultimately led to the deaths of tens of thousands of people in Japan. His rationale was his fear that the Germans were developing nuclear technology and might build nuclear bombs first. When Einstein realised after the war that the Germans had been nowhere near making nuclear bombs, he painfully regretted his decision. Indeed, Einstein's famous equation E = mc2 (see Chapters 4 and 16) made him inseparable from society and politics, as this piece of physics knowledge resulted in the creation of the most powerful and deadly weapon ever known to humankind. It also served as the basis for the development and implementation of nuclear power stations.

Planck

While Einstein came from a Jewish family, Planck came from an upper-class German family. His famous quantum theory made him the authority in German physics. Unlike Einstein, Planck continued his physics research at the University of Berlin under the Nazi regime during World War II. He was not as politically active as Einstein and focused on his physics research even during the war. However, Planck was not amoral: he did go to Adolf Hitler in an attempt to stop his racial policies. It could be argued that this was an act guided by his moral values; it could also be argued that he did this simply to preserve the development of German physics, with no intention of influencing political decisions.

The production of radio waves

first-hand investigation

■ Perform an investigation to demonstrate the production and reception of radio waves

This is a relatively simple experiment. The apparatus may be set up in a similar way to Hertz's experiment, described earlier in this chapter. A spark in the receiving loop may be observed if the laboratory is dark enough. However, it is rather difficult to measure the speed or determine the properties of these radio waves experimentally in school laboratories. If a spark cannot be seen in the receiving loop, the radio wave produced can be heard as a buzzing sound by


from ideas to implementation

using a piezoelectric ear piece (such as one made from quartz) from an old-fashioned radio. This is because the radio waves produced cause the piezoelectric material in the ear piece to vibrate, thus generating a buzzing sound.

Alternative method

Radio waves are also produced by sparks; lightning, for example, produces radio waves that can interfere with radio reception. Sparks from an induction coil produce radio waves that can be received by any AM radio placed within a few metres of the coil. The static noise can be heard in time with the sparking. It can be observed that the radio static is not as pronounced on FM radio, which is one of the many benefits of FM radio broadcasts. Figure 11.9 shows a schematic diagram of the apparatus that can be used for this investigation. Extreme care needs to be taken with the induction coil due to the high voltage and X-rays produced. A second AM radio, tuned off a station by about 500 Hz from the first and with the volume turned down, can help you hear the first one. (It will act as an oscillator and, with a bit of fiddling, produce a 'tone'.)

Figure 11.9  An alternative method of demonstrating the production and reception of radio waves (labels: induction coil; spark gap; long coil of wire; long wire; earth; CRO; AM radio tuned off a station and with the volume turned up)

chapter revision questions

1. Describe the purpose of Hertz's experiment involving radio waves. What was the other significant observation made by Hertz during this experiment?
2. Describe the fundamental differences between classical Newtonian physics and quantum physics.
3. The following sketch is a black body radiation curve: a plot of intensity against wavelength (m), at a temperature of X °C.
   (a) How can a black body radiation curve be obtained experimentally?
   (b) Sketch another curve using the same axes, at a temperature Y °C, where Y is greater than X.
4. Define Planck's hypothesis. Under what circumstances was this hypothesis made?
5. Calculate the energy of the photons of:
   (a) a radio wave that has a wavelength of 50 cm
   (b) an X-ray that has a wavelength of 1.1 × 10–10 m.
6. A very dim light beam has 270 photons passing through a given point in one second. If this beam of light is rated at 2.23 × 10–16 W, calculate the frequency of this light beam.
7. Can the photoelectric effect take place with an insulator? Justify your answer.
8. Einstein won the Nobel Prize for his contributions to the photoelectric effect. Analyse Einstein's explanation for the photoelectric effect; make specific reference to the quantum theory.
9. Describe the effect of the intensity of the incident EMR on the maximum kinetic energy of the photoelectrons released.
10. An evacuated glass tube with a photoelectric cathode is set up as shown in the diagram (EMR strikes a cathode with a large surface area inside an evacuated tube; photoelectrons travel to the anode (collector); an ammeter registers a photocurrent flowing in the circuit). When light shines on this cathode, electrons are emitted. Light with different frequencies is used.
    (a) The table below records the frequency of the incident EMR and the kinetic energy of the photoelectrons. Plot them on a graph using appropriate axes.

        Frequency of the incident EMR (Hz)    Kinetic energy of the photoelectrons (J)
        1.0 × 1014                            0
        3.0 × 1014                            8.18 × 10–20
        5.0 × 1014                            2.14 × 10–19
        7.0 × 1014                            3.47 × 10–19
        9.0 × 1014                            4.79 × 10–19

    (b) Describe the meaning of the x- and y-intercepts and the gradient of the graph. Use the graph to determine the work function of the metal used as the photoelectric cathode.
    (c) Friends of yours argue that they can just use one set (row) of data in the table to calculate the work function of this metal. They insist this will save a lot of time. Critically explain to them why the method you employed in part (b) is superior.
    (d) In order to measure the kinetic energy of these photoelectrons, scientists would have to apply an opposing voltage (called the stopping voltage) to stop these photoelectrons, so that the net current flowing in the circuit would be zero. How should such a voltage source be placed in this circuit, and what is the stopping voltage for each of the above frequencies?
    (e) On the same axes used in part (b), sketch the graph for another metal with a lower work function.
11. The operation of old-fashioned breathalysers involves applications of the photoelectric effect.
    (a) Initially, the analyser contains dichromate solution, which is orange in colour. This filters the white light in the system to produce an orange light beam. This orange light then shines onto the detector, which is effectively a photocell. If the orange light has a wavelength of approximately 620 nm, calculate the energy of the photons as received by the detector.
    (b) These orange photons are not energetic enough to cause the photoelectric effect in the photocell. However, when alcohol is breathed into the solution (from the mouth of a driver who has been drinking), it changes the dichromate to chromium oxide, which turns the solution green. This changes the initial orange light beam into a green light beam. Calculate the energy of the green photons, if green light has a wavelength of approximately 520 nm.
    (c) Green light is energetic enough to cause a photoelectric effect in the photocell (detector). Estimate the size of the work function of the cathodic plate of the photocell.
    (d) Based on the information above and your knowledge of how photocells work, suggest how a breath analyser can help to identify whether a particular driver has been drinking or not.
12. As part of your HSC course, you should have tried to demonstrate the production and reception of radio waves.
    (a) Describe the set-up you used to produce radio waves.
    (b) Identify a simple procedure you may use to confirm that radio waves have been produced.
13. What was Einstein's view of the relationship between science and politics? What evidence is there for this view?

Answers to chapter revision questions


CHAPTER 12

From semiconductors to solid state devices

Limitations of past technologies and increased research into the structure of the atom resulted in the invention of transistors

Introduction

The development of semiconductors, from the first germanium 'crystal set' radio to modern microprocessors containing millions of transistor connections, was largely driven by the need for reliable, portable radio transceivers. Parallel to this need was the development of more sophisticated and complex electronic circuits, which were based on an increasing number of valves: unreliable, power-hungry and bulky vacuum tubes. The photograph shows such a device from an old radio navigation unit, the modern equivalent of which could easily fit in the palm of your hand. The increased knowledge of the behaviour of the semiconducting elements, germanium and later silicon, along with the process of 'doping', allowed huge improvements in electronics to occur: improvements that today we largely take for granted, but without which our modern society could not function.

An old valve-based radio navigation unit

12.1 Valence shell and valence electrons

An atom consists of a small and dense positive nucleus that contains protons and neutrons; around the nucleus are electrons, which are organised into distinct orbits called electron shells. Different atoms have different numbers of electron shells, but each electron shell can only hold a certain number of electrons.

The outermost electron shell of an atom is given the name valence shell, and the electrons contained in the valence shell are called valence electrons. There can be a maximum of eight valence electrons in a valence shell, and they have higher energy compared to the electrons in the other shells. (The innermost electron shell has the lowest energy level.)

Consider the magnesium atom, which has 12 electrons in total: the organisation of these electrons into electron shells is shown in Figure 12.1. There are three electron shells for the magnesium atom, and the valence shell contains two valence electrons. These two electrons have higher energy than any of the other 10 electrons.

Figure 12.1  Electron shells (labels: nucleus; electron shells; electrons; valence shell; valence electrons)


12.2 Metals: metallic bonds and the sea electron model

■ Identify that some electrons in solids are shared between atoms and move freely

Metallic bonds are the interactions through which metal atoms are joined together to form a lattice structure; they are best described by the 'sea of electrons' model.

Definition

According to the 'sea of electrons' model, the bonding between metal atoms is described as a lattice of positive metal ions surrounded by a sea of delocalised valence electrons (see Fig. 12.2).

'Delocalised valence electrons' refers to the fact that the valence electrons of all the metal atoms are freed and shared among the other metal atoms. These electrons therefore do not belong to any specific atom; they have high energy and move freely about. Delocalised electrons are responsible for stabilising and holding the metal atoms together in a lattice. It is also important to remember that it is these delocalised electrons that give metals their unique physical properties, for instance, high thermal and electrical conductance. In Figure 12.2, we have a lattice of sodium metal. Each sodium atom has one valence electron that is delocalised and shared among all the other atoms.

Figure 12.2  Lattice of sodium metal (labels: positive metal ions; delocalised electrons that are shared among all atoms)

12.3 The structure of semiconductors

You need to have some idea of what conductors and insulators are. The easiest way to define semiconductors is that their properties lie in between those of conductors and insulators. Common semiconductors are germanium and silicon, with silicon being the more commonly used today (discussed later).

Figure 12.3  (a) A two-dimensional representation of the silicon lattice: each 'Si' atom forms four covalent bonds with neighbouring atoms; each line represents a covalent bond, which is equivalent to a pair of shared electrons

Figure 12.3  (b) One tetrahedral unit of the silicon lattice

Note: Figure 12.3 (b) is a three-dimensional representation of the structure of silicon. Each silicon atom is bonded to 4 other silicon atoms in a tetrahedral fashion. This structure is repeated indefinitely and constitutes the entire crystal lattice. Bonds for silicon atoms on the edges are not drawn, for clarity.


As both germanium and silicon are group IV elements, they have 4 valence electrons. This allows each silicon or germanium atom to make 4 covalent bonds with the neighbouring silicon or germanium atoms to form a macro-covalent lattice structure as represented two-dimensionally in Figure 12.3 (a). The four covalent bonds around each atom form a tetrahedral shape three-dimensionally, as shown in Figure 12.3 (b).

12.4 Band structure and conductivity for metals, semiconductors and insulators

■ Describe the difference between conductors, insulators and semiconductors in terms of band structures and relative electrical resistance
■ Compare qualitatively the relative number of free electrons that can drift from atom to atom in conductors, semiconductors and insulators

When an atom is by itself, the energy levels of its electrons are sharply defined; that is, the energy is the same for all the electrons in the same (corresponding) electron shell, and different from those in the other electron shells. However, when atoms are packed closely together to form a lattice structure, the electrons of a single atom constantly interact with the neighbouring electrons, or even the positive nuclei, of other atoms. This results in the energy levels of the individual electrons changing slightly, so that they are no longer sharply defined. Broadly speaking, the energy of the electrons in the lattice can take a range of values. This blurring of the energy levels of electrons in a lattice structure results in the formation of electron energy bands.

Definition

The energy band is the range of energies that electrons possess in a lattice.

There are two types of energy band you need to know about:
■■ Valence band: the valence band is made up of the energy levels of the valence electrons of the individual atoms. It has higher energy than the energy bands formed by the electrons found in the inner shells.
■■ Conduction band: when valence electrons gain energy, they may move up to even higher energy shells that were previously empty. These electrons (energy levels) make up the conduction band. Once in the conduction band, these electrons are free to move and therefore are able to conduct electricity (hence the name conduction band). Except in conductors, electrons in the conduction band have higher energy than those in the valence band.

The energy gap that electrons have to overcome to move from the valence band to the conduction band is referred to as the forbidden energy gap.

We are now going to examine the band structures of metals, semiconductors and insulators and analyse how they are related to the conductivity of each. It is easier to start with semiconductors, to illustrate the basic principles mentioned above; the differences will then be noted for metals and insulators.


Semiconductors

■ Identify absences of electrons in a nearly full band as holes, and recognise that both electrons and holes help to carry current

Band structure

You have learnt that the 4 valence electrons of each semiconductor atom form 4 covalent bonds with the neighbouring atoms, and therefore all are locked in position. Also, the formation of the covalent bonds means the valence shells now have 8 electrons; consequently, the valence band is full.

Note: One covalent bond equals a pair of (shared) electrons.

By the nature of a semiconductor, the valence electrons only need to gain a small amount of energy to move into the conduction band; hence, the conduction band is separated from the valence band by a very small forbidden energy gap. At room temperature, a minority of the electrons in the valence band possess enough thermal energy to overcome the small forbidden energy gap and 'jump' into the conduction band. Hence at room temperature the conduction band of a semiconductor is partially filled. See Figure 12.4.

Figure 12.4  The band structure of a semiconductor (energy level increasing upwards: valence band, completely filled; very small forbidden energy gap; conduction band, partially filled)

Conductivity

The small number of electrons in the conduction band means that at room temperature the conductivity of a semiconductor is moderate. It is also important to remember: as the temperature of a semiconductor increases, its conductivity increases and its resistance decreases.

Note: Conductivity (conductance) is inversely related to resistance; that is, conductivity = 1/resistance.

This is because as the temperature increases, the electrons in the valence band gain more thermal energy. This allows more electrons in the valence band to possess enough energy to overcome the forbidden energy gap and move into the conduction band. More electrons in the conduction band effectively translates to a higher conductivity for the semiconductor. Although the increase in temperature also increases the total number of undesirable collisions between the conducting electrons and the lattice (which tends to decrease the conductivity), this effect is far outweighed by the substantial increase in the number of electrons in the conduction band.
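The steepness of this temperature dependence can be sketched numerically. In the standard model of intrinsic conduction (beyond the syllabus, and our own illustration here), the fraction of electrons able to cross the gap grows roughly as the Boltzmann factor exp(–Eg/2kT); the Python sketch below uses silicon's gap of about 1.1 eV.

import math

k = 1.381e-23         # Boltzmann constant (J K^-1)
Eg = 1.1 * 1.602e-19  # forbidden energy gap of silicon, about 1.1 eV, in joules

for T in (250, 300, 350):  # temperature in kelvin
    factor = math.exp(-Eg / (2 * k * T))
    print(f"T = {T} K: relative population of the conduction band ~ {factor:.1e}")
# The factor grows by orders of magnitude between 250 K and 350 K, which is why
# warming a semiconductor lowers its resistance (the opposite of a metal).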


A characteristic property: electron-hole pair conduction

As one electron 'jumps' across the forbidden energy gap to occupy the conduction band, there will be an electron deficiency in the valence shell, which should have 8 electrons to be full. The absence of one electron from the almost full valence shell constitutes a positive hole (see Figure 12.5a). Positive holes are imaginary and do not exist in reality: they are simply regions in the valence band where there is a deficiency of valence electrons. They are quantum mechanical models that allow electrons to jump in and out without spending a lot of energy.

Although positive holes do not themselves migrate, when an electron jumps into a positive hole, another positive hole is left at the position where the electron was before. It follows that the movement of electrons creates a movement of positive holes in the opposite direction. Hence, when a voltage is applied across a semiconductor, conduction can occur in the conduction band by the movement of free electrons, as well as in the valence band by the movement of positive holes in the opposite direction, due to the movement of electrons into and out of these holes (see Figure 12.5b). This type of conduction is known as electron-hole pair conduction.

Figure 12.5  (a) Electron-hole pairs in a silicon lattice (labels: an electron jumps from the valence band into the conduction band, leaving behind a positive hole; a positive hole is a region where there is a deficiency of one electron)

Figure 12.5  (b) Electron-hole pair conduction in the presence of an applied voltage (labels: movement of free electrons; movement of positive holes; electrons jump into and out of the positive holes; the positive holes 'move' towards the negative terminal of the voltage supply)

Note: A positive hole can only form in a region where there is a deficiency of one electron within electron-filled surroundings, that is, in the case of a semiconductor. For this reason, no positive holes can form in metals, as all the valence electrons are delocalised.
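The opposite drift of electrons and holes can be seen in a toy model. The short Python sketch below (our own illustration) tracks a single vacancy in a row of valence electrons: each step, the electron on one side hops into the hole, so the hole migrates the other way.

sites = list("eeee.eee")  # 'e' = valence electron, '.' = positive hole
print("".join(sites))
for _ in range(3):
    i = sites.index(".")                             # find the hole
    sites[i], sites[i + 1] = sites[i + 1], sites[i]  # electron on the right hops left
    print("".join(sites))
# The '.' steps to the right while electrons step to the left:
# opposite directions, exactly as in electron-hole pair conduction.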

Conductors (metals)

Band structure

As discussed earlier in this chapter, the valence electrons of metal atoms are all delocalised and shared. These electrons have gained high energy and are free to move, and hence are able to conduct; these electrons are all in the conduction band. Since all the valence electrons of a conductor (metal) are in the conduction band, the valence band of a conductor is said to merge with the conduction band. The forbidden energy gap, of course, is non-existent. This is represented in Figure 12.6.

Figure 12.6  The band structure of a conductor (the valence band and conduction band are merged together; there is no forbidden energy gap)


Conductivity

Since the valence band of a conductor is merged with the conduction band, there are as many electrons in the conduction band as there are in the valence band. Since a single metal lattice contains billions of metal atoms, and thus billions of valence electrons, the population of electrons in the conduction band is very high. Consequently, a conductor has a very good electrical conductivity, or a low electrical resistance.

It is important to remember: as the temperature of a conductor increases, its conductivity decreases and its resistance increases. This is because as the temperature of a conductor increases, the lattice possesses more thermal energy, which causes it to vibrate more vigorously. These vibrations lead to more collisions between the conducting electrons and the lattice and therefore impede the motion of these electrons; this results in a decrease in conductance, or an increase in resistance, of the conductor.

Note: No extra electrons can be recruited into the conduction band of a metal at a higher temperature, as all the valence electrons are already in the (merged) conduction band.

A word about drift velocity

Even when there is no potential difference applied across a conductor, its electrons are still moving at very high speed in all directions. This speed can reach 105 to 106 m s–1 (hundreds of kilometres per second); however, because this motion is random, no net current flows (see the solid lines in Figure 12.7). When a voltage is applied across the conductor, superimposed on top of the random motion of the electrons is a slow 'drift' of all the electrons in one uniform direction, towards the positive terminal of the voltage supply. This results in a net current flowing (see the dashed lines in Figure 12.7). How fast the electrons drift in response to the applied voltage is known as the drift velocity. Such a velocity is generally very slow, usually only a few centimetres per second.

The drift velocity can be quantitatively described by the equation v = I/(neA), where v is the drift velocity (m s–1), I is the current flowing through the conductor (A), n is the electron density of the conductor (how many free electrons there are in a given volume), e is the charge of the electron, which has a value of 1.602 × 10–19 C, and A is the cross-sectional area of the conductor (m2).

Figure 12.7  Drift velocity of electrons (when there is no external voltage supply, the random motion of the electrons is very fast, but there is no net movement and hence no net current; when a voltage is applied, the electrons slowly drift towards the positive terminal, resulting in a net current)

Note: You are not required to perform any calculation using the equation v = I/(neA). However, you do need to know what factors determine the size of the drift velocity.
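Even so, a single worked number makes the point memorable. The Python sketch below is our own; the free-electron density of copper is a standard textbook value, not given in this chapter. It applies v = I/(neA) to a 1 mm diameter copper wire carrying 10 A.

import math

I = 10.0                   # current (A)
n = 8.5e28                 # free-electron density of copper (m^-3), a standard value
e = 1.602e-19              # electron charge (C)
A = math.pi * (0.5e-3)**2  # cross-sectional area of a 1 mm diameter wire (m^2)

v = I / (n * e * A)        # v = I/(neA)
print(f"drift velocity ~ {v * 1000:.1f} mm per second")
# About 0.9 mm/s: despite their enormous random speeds, the electrons
# drift through the wire at far less than walking pace.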

Insulators

Band structure

For an insulator, all the valence electrons are used to form covalent bonds to hold its atoms together within the lattice (similar to a semiconductor). Thus the valence


band is full, and the valence electrons are locked in position and cannot move. The valence band is separated from the conduction band by a very large forbidden energy gap. At room temperature, virtually no electrons in the valence band can gain enough energy to jump across the large forbidden energy gap to occupy the conduction band. Therefore the conduction band is virtually empty. See Figure 12.8.

Figure 12.8  The band structure of an insulator (energy level increasing upwards: valence band, completely filled; very large forbidden energy gap; conduction band, empty)

Conductivity

Since the conduction band of an insulator is empty, its conductivity at room temperature is almost zero, and its resistance is effectively infinite. It is important to note that if enough energy is applied to an insulator, that is, by applying a very high voltage or heating it to a very high temperature, the electrons in the valence band will eventually gain enough energy to overcome the forbidden energy gap and occupy the conduction band. In these cases the insulation property breaks down and the insulator starts to conduct; however, during such a process its structure may already have been damaged.

12.5 A closer look: intrinsic and extrinsic semiconductors

■ Describe how 'doping' a semiconductor can change its electrical properties
■ Identify differences in p- and n-type semiconductors in terms of the relative number of negative charge carriers and positive holes

Semiconductors can be broadly classified into two categories:
1. Intrinsic semiconductors: pure semiconductors. These conduct electricity by electron-hole pair conduction and have a moderate conductivity at room temperature, as discussed in the previous section.
2. Extrinsic semiconductors: semiconductors that have had impurities added. The added impurities can dramatically increase the conductivity of a semiconductor by modifying its electrical properties. This is discussed in detail in the next section.

Extrinsic semiconductors

Extrinsic semiconductors can be further classified into two types, depending on the type of substance (impurity) added (see Figs 12.9a and b for a summary):
1. p-type semiconductors
2. n-type semiconductors

The process of adding substances (impurities) to semiconductors in order to change their electrical properties is known as doping. It is important to remember that the amount of impurity added is very small, about 0.001%.

p-type semiconductors

A p-type semiconductor is created when a pure semiconductor such as silicon is doped with any group III elements, for instance, boron. All group III atoms have



3 valence electrons, and therefore can only form 3 covalent bonds with neighbouring atoms. This means that in the regions of the silicon lattice where silicon atoms are replaced by boron atoms, there will be one electron short for the formation of the fourth covalent bond with the neighbouring silicon atoms. These electron deficiencies within an almost filled valence band, not surprisingly, constitute positive holes (see Figure 12.9a). Therefore: p-type semiconductors contain positive holes.

Note: Remember 'p' for positive holes.

Figure 12.9  (a) A p-type semiconductor: a silicon lattice doped with boron (labels: a covalent bond; one valence electron short, which results in a positive hole)

Figure 12.9  (b) Conduction by a p-type semiconductor (labels: voltage supply; movement of the positive holes as a result of the migration of the electrons)

Note: p-type semiconductors are not positive, as the number of electrons is still equal to the number of protons within the lattice.

When a voltage is applied across a p-type semiconductor, conduction occurs as electrons can easily migrate from the negative towards the positive terminal by jumping into and out of the positive holes, which effectively results in the migration of these holes in the opposite direction (see Fig. 12.9b). Therefore we say that conduction in a p-type semiconductor is carried out by the positive holes. The presence of positive holes allows p-type semiconductors to have a much higher conductivity than intrinsic semiconductors.

n-type semiconductors

An n-type semiconductor is created when a pure semiconductor such as silicon is doped with any group V element, for instance, phosphorus. All group V atoms have 5 valence electrons. Therefore, after a phosphorus atom has formed 4 covalent bonds with the neighbouring silicon atoms in the lattice, it will have one spare electron that is not required for bonding (see Fig. 12.10a). These spare electrons are free to move and have sufficiently high energy to occupy the conduction band. Therefore: n-type semiconductors contain free electrons.

Figure 12.10  (a) An n-type semiconductor: a silicon lattice doped with phosphorus (labels: one spare electron that is not used for bonding; it is free to move and has a high energy)

Figure 12.10  (b) Conduction by an n-type semiconductor (labels: voltage supply; movement of the free electrons)

Note: Remember ‘n’ for negative electrons.

Note: By the same logic, n-type semiconductors are not negative.

When a voltage is applied across an n-type semiconductor, the free electrons can quite readily conduct electricity, as shown in Figure 12.10 (b). Therefore we say that conduction in an n-type semiconductor is carried out by the free electrons. Again, the presence of free electrons increases the conductivity of n-type semiconductors quite dramatically.

When a p-type semiconductor is joined with an n-type semiconductor

As p-type semiconductors have positive holes and therefore lack electrons, and n-type semiconductors have excess free electrons, when a p-type semiconductor and an n-type semiconductor are joined together, electrons from the n-type semiconductor will migrate into the p-type semiconductor at the junction to fill up the positive holes in this area. This is shown in Figure 12.11. As a result, the p-type semiconductor will now possess more electrons than protons, consequently displaying a negative charge, whereas the n-type will display a positive charge due to the loss of electrons. This creates a potential difference (electric field) across the junction where the electron diffusion has occurred, a region also known as the depletion zone. At equilibrium, this potential difference also opposes further diffusion of electrons from the n-type to the p-type semiconductor. So remember: the p-type semiconductor becomes negative and the n-type semiconductor becomes positive when they are joined together.

Figure 12.11  Joining a p-type and an n-type semiconductor (labels: migration of electrons down the electron gradient from the N-type into the P-type; depletion zone; negative potential on the p-type side, positive potential on the n-type side)


Implementations of semiconductors: solid state devices

secondary source investigation

■ Gather, process and present secondary information to discuss how shortcomings in available communication technology led to an increased knowledge of the properties of materials, with particular reference to the invention of the transistor

The need for the transistor

During World War II, the use of thermionic devices (vacuum tubes; see section 12.6, page 222) became important in communications and in other developing technologies such as radar. Reliable communication between pilots of aircraft, between pilots and control towers, and between field commands and troops in the field was needed. Rugged, light and reliable transceivers that could be powered by batteries rather than mains power, for portability, were needed. Radar became an increasingly important tool for defence, as it was able to detect approaching bombers long before they could be seen or heard by lookouts.

With electronic circuits becoming more complex, more vacuum tubes, or valves, were required, increasing the unreliability, size and power requirements of the devices being built. The need to replace vacuum tubes, with all their inherent problems, saw the attention of researchers turn to solid state semiconductors. Germanium had already been identified as the first such material able to be obtained with sufficient purity to be useful. However, it took over a decade of research and experimentation until the transistor was invented by John Bardeen, Walter Brattain and William Shockley in 1947, two years after the war had ended.

Useful website
For a comprehensive timeline leading to the invention of the transistor: http://www.pbs.org/transistor/index.html

Definition

Solid state devices are electronic devices made from semiconductors.

Diodes

In the previous section, we saw what happens when a p-type semiconductor is joined with an n-type semiconductor. This type of arrangement also has practical uses: such a device is known as a solid state diode. A diode is an electronic device that only allows electric current to flow in one direction. When the p-type part of a diode is connected to the positive terminal of a power source and the n-type part to the negative terminal, a current will flow in the circuit unimpeded (see Fig. 12.12a); in this case, we say the diode is forward biased. However, if the connection is reversed, that is, connecting the p-type to the negative terminal and the n-type to the positive terminal, then no current can flow through the diode, in which case we say the diode is reverse biased.

Note: How diodes are able to carry out this function is basically due to the movement of electrons and positive holes within the semiconductors. The principle is not very complicated; however, because this knowledge is not explicitly required by the syllabus, it is not discussed in this book.

Diodes are very important in electronics; for instance they form the basis of current rectifiers, which are devices that are able to convert AC to DC.
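Because a diode conducts in one direction only, a few lines of code can sketch the rectifying action. The ideal-diode model below is our own simplification (a real silicon diode also needs about 0.7 V of forward bias before it conducts).

import math

def ideal_diode(v_in):
    """An ideal diode: passes the voltage only when forward biased."""
    return v_in if v_in > 0 else 0.0

# Sample one cycle of a 240 V peak AC supply at eight points.
for i in range(8):
    v_in = 240 * math.sin(2 * math.pi * i / 8)
    print(f"v_in = {v_in:7.1f} V -> v_out = {ideal_diode(v_in):6.1f} V")
# The output never goes negative: half-wave rectification,
# the first step in converting AC to DC.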


Figure 12.12  (a) A solid state diode in forward bias (labels: circuit symbol for a diode, forward biased; P-type and N-type regions; current I; power supply of the main circuit)

Figure 12.12  (b) A diode valve in forward bias (labels: small evacuated glass tube; anode plate; filament (cathode); a voltage supplies the cathodic filament to facilitate thermionic emission; power supply of the main circuit)

Diodes are not the only examples of solid state devices. By combining p-type and n-type semiconductors in different ways, many other useful solid state devices are created. Transistors will be discussed here; others will be discussed in a later section.

Transistors

Structure of transistors: Transistors are another type of solid state electronic device. A transistor can be created by sandwiching either a thin layer of p-type semiconductor between two pieces of n-type semiconductor, or a thin layer of n-type between two pieces of p-type semiconductor. The first type of arrangement is more common. A transistor has three connecting leads: collector, base and emitter. These arrangements are illustrated in Figures 12.13 (a) and (b).

Figure 12.13  (a) A PNP transistor: conduction is by positive holes (labels: collector, base, emitter)
Figure 12.13  (b) An NPN transistor: conduction is by free electrons (labels: collector, base, emitter)

Functions of transistors: Transistors have extensive applications in electronics. The three leads of a transistor allow it to be connected across two circuits. One circuit goes through the emitter and the base, while the other goes through the emitter and the collector (see Fig. 12.14). The flow of charge carriers (electrons) through the emitter and the base alters the electrical properties of the middle piece of semiconductor; this affects the conductivity of the transistor between the emitter and collector, thereby affecting the flow of charges in that circuit. (Students do not need to know the details of how that happens.) It follows that a small current flowing through the base can modulate the flow of a (usually) larger current in the main circuit that goes through the collector. This allows the small current to make a larger copy of itself in the main circuit; hence, transistors are often used in electronics as amplifiers. A small current flowing through the base may also enable or completely stop the current flow in the main circuit (through the collector); hence, transistors can also act as electronic switches.

Figure 12.14  A single NPN transistor in a circuit (labels: collector, base and emitter on the N, P and N layers; a small voltage drives a small current through the base, while the larger voltage of the main circuit drives a larger current through the collector)

Note: If the transistor is of the P-N-P type, then the polarity of the power sources needs to be reversed for the transistor to function properly; in such a case, positive holes migrate from the emitter to the collector.
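The amplifying action can be caricatured in code. In the usual simple model (our own addition; the current gain 'beta' is not part of the syllabus), the collector current is a fixed multiple of the base current.

beta = 100.0  # assumed current gain of the transistor

def collector_current(i_base):
    """Idealised amplifier: the collector current is beta times the base current."""
    return beta * i_base

for i_base in (0.0, 10e-6, 50e-6):  # base currents (A)
    ic = collector_current(i_base)
    print(f"Ib = {i_base * 1e6:5.1f} uA -> Ic = {ic * 1e3:.1f} mA")
# A base current of zero switches the collector circuit off entirely, while
# tens of microamps at the base control milliamps at the collector.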

12.6 Solid state devices versus thermionic devices

■ Describe differences between solid state and thermionic devices and discuss why solid state devices replaced thermionic devices

A word about thermionic devices


The easiest way to define a thermionic device is that it consists of a vacuum tube in which are embedded two or more electrodes, very much resembling the structure of a cathode ray tube, although not as big. Electrons are emitted from the cathode with the aid of the thermionic effect (described in Chapter 10), that is, by heating. A diode valve is an example of a thermionic device; it is the thermionic counterpart of a solid state diode made from semiconductors. It consists of a small vacuum tube with two embedded electrodes, an anode and a thermionic cathode. Its structure, and the way it is connected in a circuit to give a forward bias, are shown in Figure 12.12 (b). (Try to compare this with Figure 12.12a.) Diode valves were used in electronics before the invention of solid state diodes to restrict the flow of current to one direction. A triode valve is another example of a thermionic device. It has a similar structure to a diode valve; however, as its name suggests, it has a third electrode inserted between the anode and the cathode. It is the thermionic counterpart of a transistor. Not surprisingly, the function of a triode valve is such that a small current passing


through the third electrode (which makes it more or less negative) can be used to control and modify a large current in the main circuit that passes through the anode and cathode. Today triode valves have all been replaced by transistors, but before the invention of transistors many electronic devices relied on them; for instance, the world's first computer was built using triode valves.

It is undeniable that the invention of thermionic devices revolutionised the whole electronics industry and made many electronic devices possible decades ago. However, due to the problems associated with their use, such as their large size, fragility and inefficiency, scientists were constantly seeking suitable replacements. Eventually thermionic devices were all replaced by the newer solid state devices, which could carry out similar functions but were far superior in many other respects.

Some advantages of solid state devices over thermionic devices
■■ Miniature size: the relatively large size of thermionic devices limits their uses. Solid state devices such as diodes and transistors are considerably smaller (millimetres in size). Further reduction in size can be achieved in a microchip (discussed later in this chapter), which may contain millions of transistors within the size of a fingernail. The trend towards miniaturisation of electronic devices, such as mobile phones and laptop computers, means the tiny solid state devices are much preferred.
■■ Durable and long lasting: solid state devices are quite tough and can withstand a reasonable amount of physical impact (dropping a transistor might not necessarily break it). On the other hand, thermionic devices are made from glass bulbs, which makes them extremely fragile, so they need to be handled with care. Also, solid state devices generally have a longer life span than thermionic devices, which must be replaced after a certain number of uses.
■■ More rapid operational speed: solid state devices operate at a much faster rate than thermionic devices; this makes them particularly valuable in the production of fast-operating microchips and microprocessors. In addition, solid state devices function as soon as they are switched on, whereas thermionic devices require warming up.
■■ More energy efficient: thermionic devices require very high voltages for their operation, whereas solid state devices can function at voltages less than 1 V. In addition, a large amount of heat is dissipated during the operation of thermionic devices, so a considerable amount of energy is wasted. Solid state devices, on the other hand, dissipate only a small amount of energy during their operation.
■■ Cheap to produce: solid state devices are much cheaper to make than thermionic devices, so they are more economical when large quantities are needed.

12.7 Why silicon, not germanium?

■ Identify that the use of germanium in early transistors is related to lack of ability to produce other materials of suitable purity

Early solid state devices were mostly made from the semiconductor germanium. This was because the semiconductors used to make solid state devices must be extremely


pure. At that early time, there was technology only for extracting and purifying germanium; no technology was available to prepare silicon with sufficiently high purity for the production of solid state devices.

Today, technology allows the manufacture of extremely pure silicon. Therefore, almost all solid state devices are made from silicon, as silicon has many properties superior to those of germanium. These include:
■■ More economical: silicon can be extracted from sand (SiO2), which is very abundant. The abundance of silicon allows it to be obtained at a much lower price and consequently reduces the price of solid state devices.
■■ Functions well at high temperature: heat is produced while electronic devices are operating, which elevates the temperature of such devices. At these high temperatures, silicon still maintains its semiconductivity, while germanium tends to become a better conductor, losing its semiconductor properties.
■■ The ability to form an oxide layer: silicon is able to form an impervious silicon dioxide layer when it is treated with heat in the presence of a high oxygen content. This is an essential property for the production of microchips.

More solid state devices: solar (photovoltaic) cells

secondary source investigation

PFA H3: 'Assesses the impact of particular advances in physics on the development of technologies'

■ Identify data sources, gather, process and present information to summarise the effect of light on semiconductors in solar cells
■ Summarise the effect of light on semiconductors in solar cells

What is the advance in physics in this case?

Photovoltaic cells, or solar cells, use the photoelectric effect within semiconductor materials that have been arranged in such a way as to generate an electromotive force, or EMF, thereby transforming light energy into electrical energy. This EMF can in turn be used to do useful work in an external circuit.

Which technologies arose from this advance in physics?

Solar cell technology, or photovoltaics, is one of the most exciting developments in the area of alternative energy sources. Its application is already widespread in remote communities and in smaller, mobile applications, such as boats, where connection to the mains power grid is not possible or practical. Satellites and space stations such as the International Space Station use solar panels as a source of electricity for their long-term missions. Millions of dollars are being spent worldwide on further developing this technology. The University of New South Wales Centre for Photovoltaics is one of several Australian research facilities attempting to improve the efficiency of the conversion of sunlight to electricity and to lower the cost of solar cells. The application of solar cells as a viable alternative to base-load coal-fired power stations is hampered by the inability to store the huge amounts of electricity required for the periods when the sun is not shining.

Assessment of the impact of this advance in physics on the development of new technology

Useful website
How solar cells work: http://www.howstuffworks.com/solar-cell.htm


With the use of the photoelectric effect (see Chapter 11), photovoltaic cells or solar cells are yet another example of solid state devices; they are capable of converting the energy of sunlight into electricity. A solar cell consists of joined p-type and n-type semiconductors sandwiched between two metal contacts that are responsible for conducting electricity into and out of the device. A schematic drawing of a panel of solar cells is shown in Figure 12.15 (a); a small section of this panel is enlarged in Figure 12.15 (b) to illustrate the principle of a solar cell.

When sunlight reaches the p-n junction, electrons are freed from the semiconductor at the junction as a result of the photoelectric effect. These electrons were once in the valence band but have now gained enough energy to occupy the conduction band. Since these electrons are free to move, they are easily accelerated by the electric field existing naturally at the p-n junction towards the n-type semiconductor, that is, against the direction of the electric field. Recall that this electric field is created as a result of the migration of free electrons down the electron gradient from the n-type semiconductor into the p-type semiconductor when they are joined together; since the n-type carries a positive potential and the p-type carries a negative potential, the direction of the electric field at the junction is from the n-type to the p-type.

After being accelerated through the n-type semiconductor, the electrons are collected by the front metal grids to enter the external circuit to do work: this is the electricity.

Figure 12.15  (a) and (b) A photovoltaic (solar) cell (labels: light; front metal grids; anti-reflective material; N-type and P-type semiconductors; depletion zone, where the electric field is created; back metal plate; load; an electron released by the photoelectric effect is accelerated by the electric field created at the junction and later returned via the back metal plate)

These electrons are then returned by the back metal

Figure 12.15  (a) and (b) A photovoltaic (solar) cell: a section of a solar panel, showing light striking the front metal grids and anti-reflective material, the n-type and p-type semiconductors, the depletion zone where the electric field is created, an electron released by the photoelectric effect and accelerated by the field at the junction, the load in the external circuit, and the back metal plate through which electrons are returned.


These electrons are then returned by the back metal plate to fill the electron deficiency created by the initial photoelectric effect, and the whole process starts again. It is also important to note that, since the electrons flow from the n-type through the external circuit to the p-type, the conventional current flows in the opposite direction.
Note: Recall that conventional current flows in the opposite direction to the electron current.
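Whether an incident photon can free an electron at the junction depends on its energy, E = hf = hc/λ, compared with the gap between the valence and conduction bands. A minimal sketch, assuming a band gap of about 1.1 eV (a typical value quoted for silicon, not taken from this text):

```python
# Sketch: which photon wavelengths can free an electron in a solar cell?
# Assumes a band gap of ~1.1 eV (typical textbook value for silicon).
h = 6.626e-34   # Planck's constant (J s)
c = 3.0e8       # speed of light (m/s)
eV = 1.602e-19  # joules per electronvolt

band_gap = 1.1 * eV  # energy needed to lift an electron into the conduction band

for wavelength_nm in (400, 700, 1200):  # violet, red, infrared
    E = h * c / (wavelength_nm * 1e-9)  # photon energy, E = hc/lambda
    freed = E >= band_gap
    print(f"{wavelength_nm} nm photon: {E / eV:.2f} eV -> frees electron: {freed}")
```

Running the sketch shows visible photons (400-700 nm) carry more than enough energy, while longer infrared wavelengths fall below the assumed gap and pass through without freeing electrons.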

Integrated circuits and microchips: extensions of transistors
secondary source investigation
PFAs: H4
physics skills: H12.3a, b, d; H12.4f; H13.1a, b, c, e; H14.1e, f, g, h; H14.3d

■ Identify data sources, gather, process, analyse information and use available evidence to assess the impact of the invention of transistors on society with particular reference to their use in microchips and microprocessors

Integrated circuits
The next application involves a more complicated use of transistors: putting transistors together with other electronic devices to form integrated circuits. Imagine you have some semiconductor devices such as transistors and diodes, and other devices such as capacitors or resistors; normally they would be connected into a circuit via copper wires, and this circuit would carry out a certain operation. In an integrated circuit, by contrast, these devices are built, along with their interconnections, on a single chip of semiconducting material. Integrated circuits are also called microchips.
Note: In an integrated circuit, the transistors and diodes do not have the same appearance as those described in the previous section. They have no particular morphology in the integrated circuit; what they have in common is that they carry out the same function.
Definition
An integrated circuit is an assembly of electronic devices and their connections, fabricated in a single unit (a single chip), which is designed to carry out specific tasks as the devices would if they were made individually and connected by wires.

Analogy: Integrated circuits are almost like normal circuits shrunk into a single unit.
The production of integrated circuits is very complicated and is not required by the HSC course. The basic concept is that integrated circuits are made by processes of lithographic definition, deposition and etching on common substrates, in most cases silicon. The processes are such that the required electronic devices, for example transistors and diodes, are made and their interconnections are also established. A typical integrated circuit or microchip is very small, usually about 1.5 cm², but contains numerous transistors and diodes and their interconnections to carry out a very complex operation.


To highlight the relationship between transistors and integrated circuits: there are basically two types of transistors used in integrated circuits, bipolar transistors and metal-oxide-semiconductor field-effect transistors (MOSFETs). Bipolar transistors are essentially those shown in the previous section, Figure 12.13 (although in an integrated circuit they will not have any particular morphology), and they are current-controlled devices. They are usually used where high-gain amplifiers (to increase current) are needed, in applications such as radios and other analog devices. MOSFETs, on the other hand, are voltage-controlled devices. They dissipate far less heat than bipolar transistors, due to the small currents involved. As a consequence, they are more suitable for integration and can therefore perform complicated functions. They are more often used in digital circuits to switch electric signals on and off, that is, to operate as switches. The ons and offs generate the 1s and 0s that form the basis of computer languages. Hence MOSFETs are commonly found in digital devices, the main one of course being computers.
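As an illustration of how ons and offs become logic, the sketch below treats each transistor as an ideal voltage-controlled switch. This is a conceptual model only, not a circuit simulation; the gate built here (NAND) is one standard building block of digital circuits.

```python
# Toy model: a MOSFET in a digital circuit acts as a voltage-controlled switch.
# 1 = gate voltage high (switch on), 0 = gate voltage low (switch off).
# A NAND gate pulls its output low only when both input switches conduct.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

# NAND is universal: other gates can be composed purely from switching.
def inv(a):     return nand(a, a)
def and_(a, b): return inv(nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "-> NAND:", nand(a, b), " AND:", and_(a, b))
```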

Microprocessors
Definition
A microprocessor is a type of microchip that contains enough complicated electronic devices and their connections to perform arithmetic, logic and control operations.
In a sense, a microprocessor is a more complicated form of a microchip: it can perform and execute complicated actions. The rise of microprocessors is due to the ability to integrate a large number of electronic devices into a single chip.
Note: The more devices integrated onto a single chip, the more powerful the chip.

In the early days, only a few thousand devices could be integrated onto one chip. Gradually the number has increased to hundreds of thousands, then millions, then billions, and it is growing all the time. A typical example of a microprocessor is the central processing unit (CPU) of a computer.
Features and advantages
The number of devices that can be integrated onto a single tiny chip has increased from a few to billions. As mentioned before, the more devices that can be integrated onto a single chip, the more powerful its function. Hence billions of integrated devices give an integrated circuit the ability to execute extremely complicated actions, including logic and calculations, as in the case of a CPU or an electronic robot. As the number of semiconductor devices on a single chip increases, the devices are placed closer to each other, and the transmission of signals between them becomes more efficient because of the smaller operating distance. The power dissipation during operation is also reduced, due to both the small size of the devices and the small separation between them; the small distance results in less resistance to electric signals, and therefore less heat loss. Low heat dissipation means the devices are more power efficient, and the requirement for heat removal is also reduced.

Microprocessor chips


One should note that the manufacturing cost of a microchip is proportional to the area of the chip. Hence integrating more devices onto a single chip, which increases its complexity, does not actually raise the total cost of production; effectively, it reduces the cost per device. Mass production of integrated circuits, driven by high demand, further lowers their manufacturing cost.
Evaluation of the impact of microchips and microprocessors
The invention and development of integrated circuits form the foundation of modern microelectronics and have promoted the development of the so-called information society. The applications of integrated circuits (microchips) are extensive: they are found in many forms of electronic devices, in medical diagnosis, biotechnology, telecommunications and so forth. The further development of microprocessors has made computers possible. As the number of devices integrated on one chip increases, computers become not just more powerful but also smaller. Computers are essential in our day-to-day lives and are used in business and industry and by scientific researchers; their fundamental necessity is self-evident. The development of microprocessors has also led to intelligent terminals and robots, which are employed in many areas to carry out work in place of human labour. This is particularly beneficial where heavy labour is involved or dangerous situations are anticipated. In summary, one can see that from basic semiconductors, diodes and transistors are made; these devices are integrated to make microchips, and from there more and more integration has led to the development of microprocessors. You can easily see the significance of the development of solid state devices such as transistors for society, especially in relation to microchips and microprocessors.

Modelling the behaviour of semiconductors
first-hand investigation
PFAs: H2
physics skills: H12.3a; H13.1a, b, c, e; H14.1f, g; H14.3a


Risk assessment matrix


■ Perform an investigation to model the behaviour of semiconductors, including the creation of a hole or positive charge on the atom that has lost the electron and the movement of electrons and holes in opposite directions when an electric field is applied across the semiconductor

Model the creation of positive holes and the movement of positive holes and electrons in the presence of an external electric field
This concept has already been illustrated in this chapter. To model it, one can use bottle caps and marbles. Another way is to use a Chinese checkers board or a draughts board. Leave a space in one of the holes or squares. As the playing pieces are moved into the space, and the pieces behind are moved in turn, it is observed that the pieces (representing electrons) move in one direction while the space (representing the positive hole) moves in the opposite direction.


Alternative method
An alternative way to show the movement of electrons and holes in opposite directions is to develop a series of PowerPoint slides, with the electron moving one place at a time, and then to play the slideshow with each slide showing for one or two seconds.
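The same model can be written as a short program: a row of sites with one vacancy, in which electrons hop one way under the applied field while the vacancy (the positive hole) is seen to drift the other way. The hop rule and field direction are illustrative assumptions.

```python
# Model of electrons and a hole in a semiconductor under an electric field.
# 'e' marks an electron, '.' marks the hole (vacant site).
# Electrons hop one site to the left each step; watch the hole drift right.
row = list("eee.eeee")

for step in range(4):
    print("".join(row), "  hole at index", row.index("."))
    i = row.index(".")
    if i + 1 < len(row):                          # the electron to the hole's right...
        row[i], row[i + 1] = row[i + 1], row[i]   # ...hops left into the hole

print("".join(row), "  hole at index", row.index("."))
```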

chapter revision questions
1. Using the concept of energy bands, explain why a metal is a better conductor of electricity than a semiconductor, which is in turn a better conductor than an insulator.
2. (a) Identify the factors that influence the size of the drift velocity of electrons.
(b) Two conducting wires are made from the same metal. If one wire has a diameter three times that of the other, compare the drift velocities of the electrons in the two conductors when equal currents flow through them.
(c) Justify why the drift velocity is much smaller than the actual velocity of the electrons.
3. Compare the change in resistance when a metal conductor and a semiconductor are cooled.
4. Define 'intrinsic semiconductor'.
5. Besides heating a semiconductor, describe two other ways of decreasing its resistance.
6. The electrical properties of a semiconductor can be modified by a process called doping.
(a) Define the term 'doping'.
(b) How are n-type semiconductors produced, and what electrical property do they have?
(c) How are p-type semiconductors produced, and what electrical property do they have?
(d) Describe what happens when an n-type semiconductor is joined to a p-type semiconductor. In your answer, include the charge carried by each type of semiconductor.
(e) Name the electronic device produced by the arrangement described in (d). What is its function?
7. Draw a table to compare and contrast five aspects of solid state devices versus thermionic devices.
8. Give two reasons why most integrated circuits are made from silicon rather than germanium.
9. A solar cell involves the creation of an electric field, the photoelectric effect and the movement of electrons.
(a) Describe how the electric field is created in a solar cell.
(b) Describe the occurrence of the photoelectric effect.
(c) The resulting acceleration of the electrons creates electricity. Briefly explain how this takes place.
10. Evaluate the uses of semiconductors in the development of electronics and computers.


Answers to chapter revision questions


CHAPTER 13
From superconductors to maglev trains
Investigations into the electrical properties of particular metals at different temperatures led to the identification of superconductivity and the exploration of possible applications
Introduction
Material science, the study of the behaviour of materials, has become an increasingly relevant and important field. From the original discovery of superconductivity in metals cooled to within a few kelvin (K) of absolute zero, to the present-day use of this property in magnetic resonance imaging and maglev trains, it is clear that future implementation of high-temperature superconductors may revolutionise our technological society.
A magnet levitating above a superconducting disc

13.1 Braggs' X-ray diffraction experiment
■ Outline the methods used by the Braggs to determine crystal structure

The X-ray diffraction experiment was designed so that X-rays could be used to study the internal structure of a particular crystal lattice; the method is still commonly used today. It was first developed by the physicists Sir William Henry Bragg and his son, Sir William Lawrence Bragg. They drew on the knowledge of how electromagnetic radiation such as X-rays behaves when scattered and subsequently made to interfere. They stated that when an X-ray beam is shone towards a lattice, the X-rays are penetrative enough to reach different planes of the lattice and be scattered and reflected by these planes. The scattered or reflected X-rays produce an interference pattern that can be detected and analysed to give information about the internal structure of the lattice. The set-up of the experiment is shown in Figure 13.1 (a), and the region where the X-rays strike the lattice is enlarged in Figure 13.1 (b). As shown in Figure 13.1 (a), the X-ray source produced a uniform beam of X-rays directed towards the lattice, which was placed at an angle θ to the beam.


Figure 13.1  (a) The set-up of the Braggs' experiment: an X-ray source, the crystal lattice (with atoms in planes separated by distance d) and a detector. (b) An enlarged section of (a): two parallel rays, A and B, are scattered from successive planes of the lattice; the lower ray travels the extra distance CD plus DE before reaching the detector.

The X-rays were scattered by the different planes of the lattice, and the detector was used to measure and record some of the scattered X-rays.


Note: X-rays scatter in all directions, but only those that are useful are measured. These are the ones shown in the diagram.

To further understand the concept behind the experiment, we shall focus on Figure 13.1 (b). The incident X-ray is represented by two parallel beams, A and B, which are initially in phase. Beam A strikes the first plane of the crystal and is scattered (reflected) to A′; meanwhile, beam B strikes the second plane of the crystal and is scattered to B′. We take A′ and B′ to be at the position of the detector. A similar process also happens at the third and fourth planes and so on, which is not shown here for simplicity. Once the beams have reached A′ and B′ respectively, it is clear that the beam reaching B′ has travelled a greater distance than the one reaching A′. This extra distance is CD plus DE, which equals 2CD (since CD equals DE), as shown in the diagram; by simple trigonometry, CD = d sin θ, therefore 2CD = 2d sin θ, where d is the distance between the planes of atoms in the lattice. Because of the extra distance travelled by one beam but not the other, the two beams may well be out of phase when they reach the detector. They will be in phase again if the difference in distance is an integral multiple of the wavelength of the X-rays; in other words, if the difference is one, two, three or more whole wavelengths (nλ), the beams are again in phase despite the difference in the distance travelled. Since d has a fixed value, the only way to adjust the extra distance is to change the angle θ, which can be done by rotating the source of the X-rays and the detector. Hence the apparatus is rotated until constructive interference is recorded at the detector, indicating that the beams are again in phase. When this occurs, nλ = 2d sin θ, where n is an integer known as the order of diffraction, taking values of 1, 2, 3 and so on; λ is the wavelength of the X-rays, measured in metres; and θ is the angle between the X-ray beam and the crystal surface that gives constructive interference.



Simulation: Bragg’s law


In the experiment, the wavelength of the source X-rays is known, so when the angle θ is measured we can calculate the only unknown in the equation, d. Furthermore, the same method can be used to find d for other orientations of the crystal, and so map out a precise picture of the arrangement of the lattice. As mentioned before, this method is still commonly used today to study the structure of unknown substances; for the importance of their contribution, the Braggs (father and son) were awarded the 1915 Nobel Prize in Physics for their work on X-ray diffraction. The structure of metals, represented by the sea of electrons model, can be confirmed by this experiment.
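A hedged worked example of the Braggs' method, using illustrative values rather than data from the text (λ = 0.154 nm, typical of a copper X-ray source, and first-order constructive interference assumed at θ = 22°):

```python
# Worked example of Bragg's law: n * lambda = 2 * d * sin(theta).
# Illustrative values: lambda = 0.154 nm (typical Cu X-ray source),
# first-order (n = 1) constructive interference found at theta = 22 degrees.
import math

n = 1                       # order of diffraction
wavelength = 0.154e-9       # m
theta = math.radians(22.0)  # angle between beam and crystal planes

d = n * wavelength / (2 * math.sin(theta))  # spacing between lattice planes
print(f"d = {d:.3e} m  (about {d * 1e9:.3f} nm)")
# Repeating the measurement at other crystal orientations maps out the lattice.
```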

13.2 Metal structure
■ Identify that metals possess a crystal lattice structure
■ Describe conduction in metals as a free movement of electrons unimpeded by the lattice

As discussed, metals have a structure that can be represented by the sea of electrons model (see Chapter 12). Metals are generally excellent conductors of electricity because of their large number of delocalised electrons. These electrons are free to move, and so are able to conduct electricity. Most metals therefore have a high conductivity and a low resistance (conductivity is inversely related to resistance).

13.3 The effects of impurities and temperature on conductivity of metals
■ Identify that resistance in metals is increased by the presence of impurities and scattering of electrons by lattice vibrations

A few factors influence the conductivity of a metal conductor. Basically, anything that impedes the movement of the delocalised electrons reduces the conductivity of the metal and so increases its resistance. These factors include:
■ temperature
■ impurities
■ cross-sectional area of the conductor
■ length of the conductor
■ electron density
Our focus in this module will be mainly on temperature, which is discussed in detail in later sections.

Temperature
As temperature increases, the energy of the lattice increases. This leads to increased vibration of all the particles inside the lattice, which causes more collisions between the electrons and the lattice, impeding the electrons' movement. Thus an increase in temperature results in a decrease in the metal's conductivity, or an increase in its resistance.


Also note that, unlike in semiconductors, no extra electrons can be recruited into the conduction band, because in metals all the valence electrons are already in the conduction band (see Chapter 12). Lowering the temperature has the opposite effect.

Impurities
Adding impurities is like adding obstacles to the movement of electrons. This impedes the electron movement and hence decreases the conductivity, or increases the resistance. An example is copper alloys with added impurities: they are not as good conductors as pure copper.

Length
As you learnt in the preliminary course, the longer the conductor, the higher the resistance. This is because the electrons need to travel a longer distance, so there is a higher probability that collisions will occur.

Cross-sectional area of the conductor
The larger the cross-sectional area, the lower the resistance, because electrons can pass more easily through a conductor with a larger cross-sectional area.

Electron density
Electron density refers to how many free electrons per unit volume of the conductor are able to carry out conduction. Some metals, such as silver, naturally have a higher electron density than others; these metals are naturally better conductors and have a lower resistance.
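Several of these factors can be gathered into the familiar relation R = ρL/A, with the resistivity ρ itself rising roughly linearly with temperature for metals. A minimal sketch, assuming standard reference values for copper (ρ ≈ 1.7 × 10⁻⁸ Ω m at 20 °C, temperature coefficient α ≈ 0.004 per °C), which are not from this text:

```python
# Resistance of a metal wire: R = rho * L / A, with resistivity rising
# roughly linearly with temperature: rho(T) = rho20 * (1 + alpha * (T - 20)).
# Illustrative values for copper (standard references, not from this text).
import math

rho20 = 1.7e-8   # resistivity of copper at 20 degrees C (ohm metres)
alpha = 0.004    # temperature coefficient (per degree C)

def resistance(length_m, diameter_m, temp_c):
    area = math.pi * (diameter_m / 2) ** 2       # cross-sectional area
    rho = rho20 * (1 + alpha * (temp_c - 20.0))  # hotter lattice -> more scattering
    return rho * length_m / area

print(f"10 m, 1 mm wire at 20 C:  {resistance(10, 1e-3, 20):.3f} ohm")
print(f"10 m, 1 mm wire at 100 C: {resistance(10, 1e-3, 100):.3f} ohm")   # hotter, higher R
print(f"10 m, 3 mm wire at 20 C:  {resistance(10, 3e-3, 20):.4f} ohm")    # larger area, lower R
```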

13.4 Superconductivity
■ Describe the occurrence in superconductors below their critical temperature of a population of electron pairs unaffected by electrical resistance

As mentioned before, temperature has a determining effect on the conductivity or resistance of a metal conductor. Increasing the temperature increases the resistance of a metal conductor, while lowering the temperature has the opposite effect; this is summarised in Figure 13.2 (a). From this graph, one can conclude that one of the easiest ways to make a metal a good conductor is to lower its temperature. The graph in Figure 13.2 (b) shows a similar pattern; however, as the temperature decreases there comes a point where the resistance of the metal suddenly drops to zero. This effect is known as superconductivity, and the temperature at which it happens is known as the critical temperature. A metal or conductor exhibiting superconductivity is called a superconductor.
Definition
Superconductivity is the phenomenon exhibited by certain metals whereby they have no resistance to the flow of electricity when they are cooled below a certain temperature (the critical temperature).


Figure 13.2  (a) Increasing temperature and resistance: for a good conductor, resistance (Ω) falls with falling temperature (K) but never reaches zero. (b) Decreasing temperature and resistance: for a superconductor, resistance drops suddenly to zero at the critical temperature (Tc).

It is important to realise that not all metals can exhibit superconductivity. Some metals, no matter how far they are cooled, only ever behave as described in Figure 13.2 (a). Also, a metal that can exhibit superconductivity will only do so when its temperature is below the critical temperature; strictly, it should only be labelled a superconductor while it is demonstrating superconductivity.
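The contrast between Figures 13.2 (a) and (b) can be captured in a toy resistance function; all of the numbers below are made up for illustration only.

```python
# Toy model of Figure 13.2: resistance versus temperature.
# Values are illustrative, not measured data.
def r_normal_metal(T):
    """Good conductor: resistance falls with cooling but never reaches zero."""
    return 0.02 + 0.001 * T

def r_superconductor(T, Tc=4.2):
    """Superconductor: resistance vanishes abruptly below the critical temperature."""
    return 0.0 if T < Tc else 0.02 + 0.001 * T

for T in (300, 77, 10, 4, 1):
    print(f"T = {T:>3} K   metal: {r_normal_metal(T):.3f} ohm   "
          f"superconductor: {r_superconductor(T):.3f} ohm")
```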

More on superconductors
secondary source investigation
PFAs: H1
physics skills: 13.1a; 14.1a, h

■ Process information to identify some of the metals, metal alloys and compounds that have been identified as exhibiting the property of superconductivity and their critical temperatures

We are now in a position to describe the types of superconductors. In general, superconductors are categorised into two main groups:
■ type 1: metals and metal alloys
■ type 2: oxides and ceramics

Type 1: Metals and metal alloys
There are numerous metals and metal alloys that can behave as superconductors; they were the first to be discovered. Some examples are given in Table 13.1.
Examples of types of superconductors: bars of niobium, used in special steels, alloys and superconductors (left); flexible tape of high-temperature superconducting ceramic material (right)


Table 13.1 Metal and metal alloy superconductors
Metal or metal alloy: Critical temperature (Tc) in kelvin (K)
Aluminium: 1.2 K
Mercury: 4.2 K
Niobium-aluminium-germanium alloy: 21 K

The advantages of these superconductors are:
■ Metal and metal alloy superconductors are generally more workable, as with all metals; they are more malleable (able to be beaten into sheets) and ductile (able to be extruded into wires).
■ They are generally tough and can withstand physical impact, as with all other metals.
■ They are generally easy to formulate and produce, being either pure metals or simple alloys; this is also a reason they were the first to be discovered.
The disadvantages are:
■ These metal and metal alloy superconductors usually have very low critical temperatures, as shown in Table 13.1. Such low temperatures are technically very hard to reach and maintain.
■ They usually require liquid helium as a coolant to cool them below their critical temperature. Liquid helium is much more expensive than the other common coolant, liquid nitrogen, whose boiling point of –196 °C (77 K) is not low enough for these metal and metal alloy superconductors.

Type 2: Oxides and ceramics
Again there are numerous examples in this category of superconductors, and new ones are constantly being developed. A few examples are given in Table 13.2.
Table 13.2 Oxide and ceramic superconductors
Compound: Critical temperature (Tc) in kelvin (K)
YBa2Cu3O7: 90 K
HgBa2Ca2Cu3O8+x: 133 K

The advantages of this group of superconductors essentially cover the disadvantages of the metal and metal alloy superconductors. The main advantage is that liquid nitrogen can be used to reach the critical temperature and to maintain it: liquid nitrogen's boiling point is low enough to cool type 2 superconductors, but not type 1. The disadvantage, on the other hand, is that they lack some of the advantages of metal and metal alloy superconductors. They are more brittle and fragile, shatter more easily and are generally less workable, which poses a problem if they are to be used for electricity grids, where the material has to be extruded into very thin wires. They are also chemically less stable and tend to decompose in extreme conditions. Furthermore, they are often more difficult to produce, which is one reason they were discovered later.


13.5 Explaining superconductivity: the BCS theory
PFA H2: 'Analyses the ways in which models, theories and laws in physics have been tested and validated'
PFA H5: 'Identifies possible future directions of physics research'
■ Discuss the BCS theory

Superconductivity was first observed in 1911. An explanation for the cause of superconductivity evaded the likes of Einstein, Feynman and Bohr. Bardeen, Cooper and Schrieffer (whose surnames give rise to the naming of the BCS theory) won the Nobel Prize in 1972 for their explanation, which is based on the existence of ‘Cooper pairs’.

How has this theory been tested?
The BCS theory, when applied to the original family of low-temperature superconductors (elements with critical temperatures within a few degrees of absolute zero, 0 K), has proven correct in the way it predicts the actual critical temperature and the conduction that occurs. However, a newer type of superconductor, the cuprates (so named for the presence of copper and oxygen in the substance), can become superconducting at relatively high temperatures (that is, critical temperatures above the boiling point of liquid nitrogen); these make up the family of high-temperature superconductors. The BCS theory does not work when applied to these high-temperature superconductors.

Where to from here?

Mapping the PFAs: PFA scaffold H5


While further research into superconductivity proceeds in many facilities and universities around the world, newer theories using complex quantum ideas are being developed, tested and validated to explain what is happening in high-temperature superconductors. If we can understand the mechanism by which superconductivity occurs, we stand a better chance of making a room-temperature superconductor.
Useful websites
Information about superconductors and their history: http://superconductors.org/History.htm

The next question is why some materials lose their resistance completely when cooled below a certain temperature (Fig. 13.2b) while others do not (Fig. 13.2a). It is reasonable to expect that something must happen at these extremely low temperatures to cause the sudden drop in resistance. To account for this, John Bardeen, Leon Cooper and John Schrieffer developed the BCS theory, named after their initials, which became the most widely accepted explanation for the sudden drop in resistance below the critical temperature. It is important to realise that the BCS theory is a quantum mechanical model, and a full explanation involves very complicated physics and mathematics. At HSC level, however, only the fundamental concept needs to be covered, and the explanation is simplified. The BCS theory is as follows:
1. At low temperatures, that is, below the critical temperature, the vibration of the lattice is minimal.
2. The electron travelling at the front (the first electron) attracts the lattice, as shown in Figure 13.3 (a).


3. The lattice responds, but very slowly, because the lattice ions are much heavier; it therefore distorts moments after the fast-moving electron has passed, as shown in Figure 13.3 (b).
4. This creates a positive region (a phonon) behind the first electron, which attracts the next electron and helps it to move through the lattice. (Note that by the time the second electron reaches this positive region, the lattice has recoiled back to its original position, due to its elasticity, allowing the second electron to pass through.)
5. This process repeats as the electrons move through the lattice. The two electrons move through the lattice assisted and unimpeded as a pair, called a Cooper pair.
Figure 13.3  (a) BCS theory and Cooper pairs: the first electron attracts the lattice, and moments later the lattice distorts. (b) Lattice response and distortion: the electrons of a Cooper pair help each other move through the lattice.

The Meissner effect
first-hand investigation
physics skills: H12.1a, b, d; H12.2b; H12.3a, d
Risk assessment matrix
■ Perform an investigation to demonstrate magnetic levitation
Definition
The Meissner effect is the phenomenon whereby a superconductor totally excludes external magnetic fields, so that its internal magnetic field is always zero.
The Meissner effect allows a superconductor to levitate a small magnet placed on top of it. These ideas are illustrated in Figure 13.4 (b) and (c). Note that in Figure 13.4 (a), the magnetic field is able to penetrate a normal conductor. When the conductor becomes a superconductor, as in Figure 13.4 (b), it excludes the external magnetic field and allows none of it to penetrate. Figure 13.4 (c) shows a small bar magnet hovering over the superconductor. The hovering or levitation of a small magnet over a superconductor may be demonstrated in a school laboratory, as in Figure 13.4 (c). A summary of the procedure:
1. Cool a piece of superconductor by immersing it in liquid nitrogen.
2. Use a pair of forceps to pick up a small permanent magnet and carefully place it above the superconductor.
3. Describe the observations.
4. Describe what happens when the magnet is forcefully pushed downwards.


Figure 13.4  (a) to (c) A demonstration of the Meissner effect: a small magnet is made to hover over a superconductor. (a) Magnetic field lines penetrate a non-superconductor, or a superconductor above its critical temperature. (b) A superconductor below its critical temperature cancels out, or repels, the field lines. (c) A bar magnet hovers above a superconductor below its critical temperature, supported by the field of the induced eddy current.

Note: Liquid nitrogen has a temperature of –196 °C and can cause serious injuries if it is splashed on the skin or into the eyes. Wearing safety goggles and rubber gloves is mandatory for this demonstration. Students must not be too close to the apparatus.
Note: Normal magnets can be made to levitate if placed carefully inside a glass tube. Liquid nitrogen and superconductors are not required.

13.6 Explaining the Meissner effect
physics skills: H14.1a, b, c, d, f, g
■ Analyse information to explain why a magnet is able to hover above a superconducting material that has reached the temperature at which it is superconducting

The physics behind the Meissner effect can be summarised as follows. When an external magnetic field attempts to enter a superconductor, it induces a 'perfect' eddy current that circulates in the superconductor; the current is perfect because there is zero resistance to the flow of electricity. This current flows in such a direction that the magnetic field it produces is just as strong as, but opposite in direction to, the external magnetic field. The external field is therefore totally cancelled, and none of it penetrates the superconductor. The same idea explains why a small magnet can hover over a piece of superconductor: the perfect induced current sets up magnetic poles strong enough to repel the small magnet with a force that overcomes its weight. The Meissner effect is thus a second property of superconductors. Note, therefore, that superconductors have two very important properties:
1. Their electrical resistance is effectively zero.
2. They demonstrate the Meissner effect.
Both properties are exhibited only when the material is cooled below its critical temperature.


Applications of superconductors
secondary source investigation
PFAs: H3, H4, H5
physics skills: H12.3a, b, d; H12.4f; H13.1a, b, c, d, e
■ Gather and process information to describe how superconductors and the effects of magnetic fields have been applied to develop a maglev train
■ Process information to discuss possible applications of superconductivity and the effects of those applications on computers, generators and motors and transmission of electricity through power grids
As a simple summary, superconductors are mainly used in applications for:
1. conducting electricity efficiently
2. generating very powerful magnetic fields

1. Efficient conduction: As discussed in Chapter 8, energy is lost as heat when a current flows through a conductor. The heat lost (P) is quantified by the equation P = I²R, where I is the size of the current and R is the resistance of the wire. Superconductors have effectively zero resistance, so no heat is lost when current passes through them.
2. Powerful magnets: As discussed in Chapter 5, the strength of the magnetic field produced by an electromagnet is directly proportional to the current fed into it. However, there is a limit to the field strength that can be produced: a very strong magnetic field requires a very large current, which inevitably results in a significant amount of energy being lost as heat. In other words, some of the supplied electrical energy, apart from creating a magnetic field, is lost as heat; as the current is increased in an attempt to create a stronger field, more and more of the electrical energy is wasted as heat rather than being converted into the required magnetic field, making the whole process very inefficient. Hence, if the electromagnet has zero resistance, that is, if it is made from superconducting material, no heat is lost as the current flows, and all the electrical energy goes into producing the magnetic field. This not only makes the process more energy efficient but also allows the magnetic field to be stronger. An extra benefit of a superconducting electromagnet is the phenomenon known as a perpetuating (persistent) current: once a current is established in the superconductor, because of the lack of resistance, the current does not diminish even if the power source is removed. This circulating current resides in the superconductor for a long period, generating a magnetic field without further energy input, which further enhances the energy efficiency.
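The first point can be made numerical with P_loss = I²R and I = P/V. The figures below (1 MW delivered, a 10 Ω line, two transmission voltages) are illustrative assumptions, not data from the text:

```python
# Heat lost in a transmission line: P_loss = I^2 * R, with I = P / V.
# All numbers are illustrative: 1 MW delivered over a line of resistance R.
P = 1.0e6  # power to deliver (W)

for R, label in ((10.0, "conventional line"), (0.0, "superconducting line")):
    for V in (33e3, 330e3):         # transmission voltage (V)
        I = P / V                    # current drawn
        loss = I ** 2 * R            # heat dissipated in the wires
        print(f"{label:>21} at {V/1e3:>5.0f} kV: loss = {loss:>8.1f} W "
              f"({100 * loss / P:.3f}% of delivered power)")
```

The sketch reproduces both points in the text: stepping up the voltage cuts the loss for a conventional line, while a superconducting line loses nothing at any voltage.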

Maglev train

Maglev train, Shanghai, China

Maglev trains make use of superconductors to achieve very high speeds. Such trains are still rare, operating in only a few countries, for example China, Japan and Germany. Their operation is technically quite complicated in terms of engineering and design, but the basic principle is easy to understand. In simple terms, the operating principle of the maglev train can be divided into two parts:
■ levitation
■ propulsion.


Levitation
Maglev trains are levitated off (hover over) the ground. To levitate the train, magnets are set up between the train and the track with like poles facing each other so that they repel, as shown in Figure 13.5 (a). The repulsion is made strong enough to overcome the weight of the train, and thus the train hovers. In theory, both sets of magnets could be made from superconductors. Superconducting magnets produce more powerful magnetic fields and are more energy efficient, as discussed before; in addition, they are easier to control in terms of their magnetic polarities. However, the drawback of using superconductors is that they require coolants to keep their temperature below the critical temperature; also, the presence of superconductors makes the design more complicated and expensive. Weighing the costs and benefits, generally only the electromagnets on the train are made from superconductors, whereas the electromagnets on the track are made from normal conductors. It is much easier to cool the limited number of superconductors on the train than to try to cool the entire track!
Figure 13.5  (a) Magnet repulsion: superconducting electromagnets on the train are repelled by normal electromagnets on the track.

Propulsion
It is no use just being able to hover the train above the ground; it also needs to be propelled forward. To do this, another group of magnets is used: both the magnets on the side of the train and the magnets on the track are made to have alternating polarities. As shown in Figure 13.5 (b), the north poles at the head of the train are attracted by the south poles ahead of them and repelled by the north poles behind, and therefore the train moves forward. A similar process occurs along the entire length of the train. As the train moves forward, its north poles would then be pulled back by the south poles on the track; if nothing were done, the train would move back before being pulled forward again, oscillating rather than accelerating. To resolve this problem, every time the train passes one set of magnets on the track, the polarity of the magnets on either the train or the track (but not both) is reversed. For instance, as shown in Figure 13.5 (b), the original south poles on the track are changed to north poles so that they keep propelling the train forward.
Figure 13.5  (b) Top view of a maglev train: magnets of alternating polarity (N, S) line the side of the train (superconducting electromagnets) and the track (normal electromagnets); as the train passes, the track magnets change polarity so that the train keeps being attracted forward and repelled from behind.

As this process repeats, the train gets faster and faster. As the train speeds up, it takes less time to reach the next set of magnetic poles on the track, so the frequency with which the polarity changes must increase to stay synchronised with the speed of the train. This also means the frequency with which the magnetic poles can be changed effectively limits the speed of the train.
Advantages
First, the train is levitated off the ground, making no physical contact with the track. This minimises the frictional drag the train experiences and thus improves its maximum speed. Furthermore, no mechanical energy is lost in overcoming friction, so the train is extremely energy efficient. In addition, the hovering makes the ride extremely smooth, and the minimal contact results in less wear and tear, so less maintenance is needed.
Disadvantages
Superconductors are very expensive to run, mainly because they operate at very low temperatures and so there is a constant need for coolants. Such low temperatures are also technically difficult to confine, and therefore require advanced technology. Currently, only a few countries have the technology and the financial capability to build such a train; these high costs are reflected in the price of tickets, making the maglev train less competitive than conventional transport. Maglev trains are a relatively new technology and improvements are constantly being made. You are encouraged to do your own research on maglev trains to find out about the most recent developments and modifications.
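The synchronisation requirement can be expressed simply: if successive pole sets along the track are a distance L apart, the train passes one set every L/v seconds, so the switching frequency must scale with speed, roughly f = v/L. A sketch with an assumed pole spacing of 1 m (the spacing is illustrative; 430 km/h is the commonly quoted top speed of the Shanghai maglev):

```python
# Synchronising maglev propulsion: the track polarity must reverse each time
# the train passes a set of poles. Assuming pole sets spaced L apart
# (illustrative), the required switching frequency is f = v / L.
L = 1.0  # distance between adjacent pole sets along the track (m), assumed

for v_kmh in (50, 150, 430):      # 430 km/h: commonly quoted Shanghai maglev top speed
    v = v_kmh / 3.6               # convert to m/s
    f = v / L                     # polarity reversals per second
    print(f"{v_kmh:>3} km/h -> switching frequency about {f:.0f} Hz")
```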

Switches and supercomputers
Recall from the previous chapter that the way to make integrated circuits powerful is to have as many devices as possible on a single chip. This is limited by many factors, one of which is the heat produced by the devices. Because superconductors have effectively zero resistance, and therefore produce effectively zero heat when electricity flows through them, devices made from superconductors can be integrated more densely than those made from ordinary semiconductors. This makes integrated circuits made from superconductors far more powerful. Furthermore, because the devices are packed closer together, there is even less delay in signal transfer between them, so these integrated circuits are also faster. Such powerful, fast integrated circuits lead to supercomputers, which can perform extremely complex operations at much enhanced speed and may be employed for scientific or military purposes. The main disadvantage of such a system is that it requires coolants (e.g. liquid nitrogen) to run, so it is not feasible for personal use in homes; the need for coolant makes these computers technically difficult to run and very costly.

A supercomputer

Motors and generators
When superconductors are used in motors, the low resistance to the flow of electricity means that with any


given amount of voltage, the net current flow in the motors will be bigger, which means the motors are made more powerful. Also, as discussed, superconductors lose no heat when currents pass through them, making both superconducting motors and generators more energy efficient. Furthermore, because the devices are more energy efficient, they can be made smaller but still be able to carry out the same amount of work.

Power transmission lines
This is a promising future application of superconductors. Recall from Chapter 8 that large amounts of heat are lost in the transmission wires that carry electricity from power stations to households. As discussed in Chapter 8, one way to reduce the heat loss is to reduce the size of the current through the wires, which can be done by increasing the voltage using a step-up transformer. Even then, a large amount of energy is still lost in the transmission wires as a result of their reasonably large resistance. If transmission wires were made from superconductors, their resistance could effectively be reduced to zero. As discussed, this minimises the heat loss, so that all the energy produced at the generator could be transferred to households, making the process almost 100% efficient. Minimal energy waste during transmission would make the whole process more environmentally friendly. It would also enable power stations to be built further from large cities, reducing pollution near metropolitan areas. A further benefit of superconducting transmission wires is that they could carry the same amount of electricity with a much smaller diameter, greatly reducing the cost of manufacturing the wires. Nevertheless, the major disadvantage of the system is that the wires need to be cooled below their critical temperature, which requires liquid nitrogen; as the wires are open to the environment, this cooling would be very difficult and expensive to achieve and maintain. The other setback is that superconducting wires can only transmit DC. This is because, for Cooper pairs to form, electrons need to travel in a constant direction; the oscillating electrons of AC disturb the Cooper pairs and so disrupt superconductivity. Since our current power system operates on AC, significant changes would be required if we were to use DC to transmit electrical energy.

13.7 Limitations of using superconductivity
■ Discuss the advantages of using superconductors and identify limitations to their use

As demonstrated by the applications described in the previous section, superconductivity could hugely benefit society, both by increasing the energy efficiency of existing operations and by providing applications that are otherwise not possible. One of the negative aspects of using superconductors is their low operating temperature. Low temperatures are extremely hard and expensive to reach and, once established, very difficult to insulate from the surroundings; associated with that, of course, are huge costs and inconvenience. Fortunately, research is producing superconductors with higher and higher critical temperatures, and already there are superconductors that operate at critical temperatures above 100 K. In the near future, there is a possibility


that superconductors that can operate at room temperature will be developed. Once this happens, we will be able to use superconductors without coolants or spending effort on maintaining the temperature. This will be a milestone in human history in terms of introducing perfectly energy efficient devices, super powerful computers and electronics, all of which will be economical to produce and easy to operate.

chapter revision questions
1. (a) Briefly describe the Braggs' X-ray diffraction experiment.
(b) The Braggs developed the equation nλ = 2d sin θ as a part of their experiment. Explain the meaning of each term in the equation. What is the significance of this equation?
2. Explain how a change in temperature and the addition of impurities would affect the resistance of a metal conductor.
3. Discuss the importance of the critical temperature in the context of superconductors.
4. There are two types of superconductors: metal and metal alloy, and metal oxide and ceramic.
(a) What is one significant feature of all metal-oxide and ceramic superconductors?
(b) Evaluate the implication of this feature for the applications of superconductors.
5. Outline the theory used to explain superconductivity.
6. A small magnet is able to hover over a piece of superconductor.
(a) What is the name of this phenomenon?
(b) Offer a satisfactory explanation as to how it may happen.
(c) Predict what will happen if one keeps pushing the magnet down towards the superconductor.
7. (a) What do you think the word 'maglev' stands for?
(b) What is the superiority of the maglev train compared with ordinary steam or electrically powered trains? Relate this superiority to the functional principle of the maglev train.
(c) What are the drawbacks of the maglev train?
8. There are debates about whether it would be suitable to replace power transmission lines with superconductors. Justify your opinion on this debate.
9. Evaluate the impacts of the discovery of room temperature superconductors. In your answer, make reference to the current uses of superconductors.

Answers to chapter revision questions



Mind map

from quanta to quarks


CHAPTER 14
The models of the atom
Problems with the Rutherford model of the atom led to the search for a model that would better explain the observed phenomena

14.1 The early models of the atom
The most familiar model of the atom is one that involves many negatively charged electrons revolving around a central region known as the nucleus, which contains positively charged protons and neutral neutrons. This chapter will show that it took scientists many centuries to propose, debate, investigate and modify that familiar model of the atom (known as Bohr's model). The models of Democritus, Thomson, Rutherford and Bohr will be discussed in chronological order. These models are not the end of the development: new and more complicated models of the atom are constantly being developed. This option module will discuss this process.

Democritus
Democritus (c. 460–370 BC) was one of the earliest thinkers to propose a model of the atom. He reasoned that if one kept dividing a substance into smaller and smaller pieces, there would come a point where further division was no longer possible, because one had reached the fundamental units that formed the substance. He proposed that these fundamental units were spherical in shape, and termed them atoms. Democritus therefore proposed that atoms were indivisible particles that made up all matter.

Thomson’s ‘plum pudding’ model Thomson’s experiment of the charge-to-mass ratio of cathode rays (1897) effectively indicated that, with any type of metal used as the cathode, identical cathode rays were obtained. Knowing that cathode rays were negatively charged particles— electrons—he proposed that atoms must all contain these common particles. The puzzle was, if all atoms were to contain electrons, how should these electrons be arranged inside the atoms? Uniform mass of positive charge (low density) Thomson developed his model where the atom was still assumed to be spherical in shape. However, this time the electrons were proposed to embed and scatter randomly Equal and among the region of the atom. Since the atom needed to be opposite neutral overall, there must be positive charge to balance the charge negative electron charges. He proposed that the rest of the atom was uniformly positively charged, with its mass evenly distributed but low in density. This model was analogous to a Electron embedded in plum pudding, where the electrons were like plums scattered the positive sphere throughout the ‘pudding-like’ atom, as shown in Figure 14.1.

Figure 14.1  Thomson’s ‘plum pudding’ model of the atom


14.2 Rutherford's model of the atom
■ Discuss the structure of the Rutherford model of the atom, the existence of the nucleus and electron orbits

Rutherford’s alpha particle scattering experiment In 1911, Ernest Rutherford (1871–1937), or more precisely his students Geiger and Marsden, set out to perform an experiment that aimed to confirm Thomson’s model of the atom, using the newly discovered alpha particles. Thomson’s model showed electrons only occupying a very small space and the rest of the atom occupied with very low density positive charge. Rutherford thought that if this was the case, if alpha particles were fired at these atoms, they should either go straight through or through with very minimal deflections because nothing was in their way). He set up the experiment as shown in Figure 14.2. However, the results of the experiment were surprising. Although most of the alpha particles went through the atoms with either no deflection or very small deflections as predicted, one in eight thousand alpha particles were deflected back at an angle greater than 90° (see Fig. 14.2). This was totally surprising, as it suggested that there must exist a sufficiently dense positively charged mass inside the atoms to cause the alpha particles to rebound. Repetition of the experiment achieved the same result.

Ernest Rutherford
Figure 14.2  Rutherford's alpha particle scattering experiment: alpha particles from a radioisotope are focused by a collimator onto thin gold foil; detectors record particles passing undeflected, slightly deflected, or deflected through angles greater than 90°

Rutherford’s model of the atom From the analysis of the results of his experiments, Rutherford proposed that the model of the atom needed to be modified to account for his observations. He stated that the only way the alpha particles could be deflected through such a large angle was if all of the atom’s positive charge and nearly all of its mass was concentrated in a very small region, which he later named the nucleus. The electrons, (first proposed by Thomson), were to be placed around the nucleus in a circular fashion, and the rest of the atom consisted of empty space (see Fig. 14.3). This model was adequate in explaining the deflection of the alpha particles. Usually these alpha particles would actually pass through the empty space between the nucleus and electrons, and hence would not have their path altered. If the alpha particles skimmed past the nucleus or collided with the electrons,

Figure 14.3  Rutherford's model of the atom: a concentrated mass of positive charge (the nucleus) surrounded by electrons and empty space
Interactive Rutherford model


If the alpha particles skimmed past the nucleus or collided with the electrons, their paths would be altered slightly. Those deflected back at angles greater than 90° were alpha particles that had collided head-on with the positively charged nucleus; since the nucleus was proposed to be very small compared with the size of the atom, the chance of this happening was remote.
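The scale of the nucleus can be estimated from a head-on collision: the alpha particle stops where all its kinetic energy has become electrostatic potential energy, E_k = kq₁q₂/r. A sketch assuming a 5 MeV alpha particle (a typical radioisotope energy; this value is an assumption, not from the text):

```python
# Distance of closest approach for a head-on alpha-gold collision:
# kinetic energy = electrostatic potential energy, Ek = k * q1 * q2 / r.
# Assumes a 5 MeV alpha particle (typical of radioisotope sources).
k = 8.99e9        # Coulomb constant (N m^2 C^-2)
e = 1.602e-19     # elementary charge (C)

Ek = 5.0e6 * e    # 5 MeV converted to joules
q_alpha = 2 * e   # alpha particle charge
q_gold = 79 * e   # gold nucleus charge (Z = 79)

r = k * q_alpha * q_gold / Ek
print(f"closest approach: {r:.1e} m")  # ~4.5e-14 m, far smaller than an atom (~1e-10 m)
```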

PFA H1: History of Physics: 'Evaluates how major advances in scientific understanding and technology have changed the direction or nature of scientific thinking'

■ Discuss the structure of the Rutherford model of the atom, the existence of the nucleus and electron orbits

How did Rutherford’s proposed model of the atom differ from any before it? The concept of the indivisible, structure-less model of the atom, as proposed by the chemist John Dalton (the ‘billiard ball’ model) was the accepted view of the atom for over 50 years in the 1800s. Dalton could not envisage ‘empty space’; his atomic theory stated that atoms occupy all of the space in matter. J. J. Thomson’s work in 1897 suggested that the atom may indeed be divisible, as electrons were thought to be a part of any atom, and Goldstein subsequently showed in 1886 that atoms have positive charges. Rutherford’s challenge was to devise a way of probing the atom in an attempt to find its structure. Thomson had proposed a ‘plum pudding’ model of the atom in which the negative charges, electrons, were embedded in a sea of positive fluid in the same way plums were embedded in a plum pudding. Lenard proposed yet another model in which positive and negative pairs were found throughout the atom. Rutherford’s experimental results, in which some alpha particles actually rebounded back off the thin gold foil and his careful analysis of this, led him to propose his ‘planetary’ model.

Why was this contribution a major advance in scientific understanding?
Rutherford's model of the atom was the first to propose a nucleus with the electrons in separate motion. The positioning of the electrons enabled advances in the field of chemistry, which deals with the interactions between the electrons of different atoms. There were problems with his model, however: orbiting electrons should radiate electromagnetic radiation, lose energy and spiral into the nucleus, destroying the atom. Clearly, this did not happen.

How did it change the direction or nature of scientific thinking?
The motion of the electrons in Rutherford's model of the atom violated the laws of classical physics. However, rather than being disregarded, Rutherford's model triggered the further work of Bohr and others on their journey to develop quantum physics. The first step along this journey was to suggest that Rutherford's electrons could exist in a stable state and not emit radiation.

Evaluation of Rutherford’s advance in scientific understanding and how it changed the nature and direction of scientific thinking Rutherford’s work paved the way for major changes in scientific thinking—the proposals of electrons orbiting a positive nucleus, and that much of the volume of atoms was empty space. The answers to Rutherford’s puzzles led to the development of quantum theory and changes to the way in which matter was explained.


Useful websites
Background information on atomic theory around the time of Rutherford’s experiment:
http://www.visionlearning.com/library/module_viewer.php?mid=50
A wealth of information on Ernest Rutherford (he was born in New Zealand):
http://www.rutherford.org.nz/

Inadequacies of Rutherford’s model of the atom
Rutherford’s model was quite successful in accounting for the surprising results of his experiment. However, there were still a few aspects that he was unable to explain.
■■ First, he could not explain the composition of what he called the nucleus. Although he said that most of the atom’s mass and all of its positive charge were concentrated in this very small and dense region called the nucleus, he could not explain what was in the nucleus. (The existence of protons and neutrons was not known at the time.)
■■ Although he proposed that the electrons should be placed around the nucleus, he did not know exactly how to arrange them, except ‘like planets around the Sun’.
■■ The biggest problem Rutherford failed to explain was how the negative electrons could stay away from the positive nucleus without collapsing into it. The only way to overcome the attractive force between the positive nucleus and the negative electrons was to have the electrons orbiting the nucleus, much like the Moon orbiting the Earth. However, electrons circulating around the nucleus would have centripetal acceleration (as in all circular motion), and accelerating charges produce EMR. This meant the orbiting electrons would continuously radiate EMR, a loss of energy that must come from the kinetic energy of the electrons (law of conservation of energy), causing them to slow down. Eventually, the electrons would no longer have sufficient velocity to maintain their orbits and would spiral into the nucleus. Obviously, this did not happen—but Rutherford’s model failed to provide a reason why.

14.3  Planck’s hypothesis
■ Discuss Planck’s contribution to the concept of quantised energy

The concept of quantisation of energy was discussed in Chapter 11. The basic definition is as follows.

Definition
The radiation emitted from a black body is not emitted continuously as waves; it is emitted as packets of energy called quanta (photons). The energy of each of these quanta or photons is related to their frequency by the equation:

249

from quanta to quarks

E = hf

Where:
E = the energy of each quantum or photon, measured in J
h = Planck’s constant, which has a value of 6.626 × 10–34 J s
f = the frequency of the radiation (wave), measured in Hz

Planck’s hypothesis was initially made in order to theoretically derive the black body radiation curve. However, it was later used by Einstein to successfully explain the photoelectric effect. In this module, you will see that this hypothesis also forms the foundation for quantum theory and quantum mechanics. As you will see later in this chapter, Planck’s hypothesis also forms an essential part of Niels Bohr’s model of the atom.
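This relation is easy to check numerically. The short sketch below (Python; the frequency of the 656 nm red hydrogen line is our assumed test value) simply evaluates E = hf:

```python
# Energy of one quantum (photon) from Planck's relation E = hf.
h = 6.626e-34  # Planck's constant (J s)

def photon_energy(f_hz: float) -> float:
    """Return the energy (in J) of a photon of frequency f_hz."""
    return h * f_hz

# Assumed test value: red light at 656 nm, so f = c / lambda
f_red = 3e8 / 656e-9          # ~4.57e14 Hz
print(photon_energy(f_red))   # ~3.0e-19 J per photon
```

Even for visible light, the energy of a single quantum is only a few times 10–19 J, which is why light appears continuous in everyday experience.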

14.4  The hydrogen emission spectrum

A neon light

Using a neon light as an everyday example: a neon light consists of a glass tube containing neon gas in which two electrodes are embedded. When electricity is passed through the tube, the gas glows to produce the ‘neon light’. This does not just happen with neon; it happens with other elements of the periodic table as well. Our discussion will focus on hydrogen. As shown in Figure 14.4, a glass tube with two electrodes contains hydrogen gas. When electricity is passed between the electrodes, the hydrogen gas glows purple-red. Now we need to introduce a new concept: this emitted light is not just a single colour (hence a single wavelength), but rather a combination of different wavelengths of light. When this light is separated into its individual wavelengths, for example by a prism in a device called a spectroscope, and cast onto a black background, one can clearly see these individual colours, and hence wavelengths (see Fig. 14.4). This light pattern is known as the hydrogen emission spectrum. As you can see in Figure 14.4, the hydrogen emission spectrum has a pattern that consists of red and blue lines at different wavelengths. A similar method can be used to obtain the emission spectra of other elements. It is important to realise that the emission spectrum is unique to an element: each element has its own unique pattern of wavelengths. Consequently, these wavelengths may also be used to identify an unknown element. Another model of the atom is needed to understand the mechanism of the production of emission spectra—Bohr’s model.


Figure 14.4  The hydrogen emission spectrum: hydrogen gas (H2) in a glass tube with two electrodes emits light (EMR), which passes through a slit and a prism (spectroscope) to give the spectral lines Hα, Hβ, Hγ and Hδ at 656 nm, 486 nm, 434 nm and 410 nm

A hand-held spectroscope used in the school laboratory

14.5  Bohr’s model of the atom: introduction
■ Define Bohr’s postulates
■ Analyse the significance of the hydrogen spectrum in the development of Bohr’s model of the atom

In 1913, Danish physicist Niels Bohr (1885–1962) developed a model of the atom. This model was based on the quantisation of energy in Planck’s hypothesis and on Rutherford’s model of the atom, as well as on observations of the pattern of the hydrogen emission spectrum. In simple terms, Bohr’s model was based on three fundamental postulates.

Postulate 1: All electrons around the nucleus are only allowed to occupy certain fixed positions and energy levels outward from the nucleus; thus the electron orbits are quantised, and are known as the principal energy shells. While in a particular orbit, electrons are in a stationary state and do not radiate energy.


Postulate 2: When an electron moves from a lower orbit to a higher orbit, or falls from a higher orbit to a lower orbit, it will absorb or release a quantum of energy (EMR). The energy of the quantum is related to the frequency of the EMR by the formula E = hf, where E is the energy of the quantum (J), h is Planck’s constant and f is the frequency of the EMR (Hz).

Postulate 3: The electrons’ angular momentum is quantised: mvrn (angular momentum) = nh/2π, where n is the principal energy shell number, and the innermost energy shell is assigned n = 1.

Note: Postulate 3 was a purely empirical formula, derived from measurements taken from the hydrogen emission spectrum.

Based on his first postulate, Bohr modified Rutherford’s model to one that contained a central positively charged nucleus and many orbits for the electrons around the nucleus, as shown in Figure 14.5. As mentioned, he called these orbits the energy shells or the principal energy shells. Electrons occupying these shells were said to be stable: they did not need to rotate in order to stay away from the nucleus, nor did they radiate energy. There are also mathematical consequences of these postulates. Based on his first and third postulates, combined with equations from classical physics (centripetal force and Coulomb’s law), Bohr was able to develop a series of mathematical equations that quantitatively describe the radius and the energy of the principal energy shells moving outward from the nucleus.

rn = h²n²/(4π²kme²)

En = –2π²k²me⁴/(h²n²)

Where:
rn = radius of the nth principal energy shell (m)
En = energy of the nth principal energy shell (J)
h = Planck’s constant, 6.626 × 10–34 J s
n = principal energy shell number
k = Coulomb’s law constant, 9.0 × 109 N m² C–2
e = charge of the electron, 1.602 × 10–19 C
m = mass of the electron, 9.109 × 10–31 kg

Also, the equations can be rewritten with reference to the radius and energy of the first principal energy shell, that is, r1 and E1:

rn = n²r1

En = (1/n²)E1

Where:
r1 = h²(1²)/(4π²kme²) = h²/(4π²kme²)
E1 = –2π²k²me⁴/(h² × 1²) = –2π²k²me⁴/h²

Note that both E1 and r1 are constants.
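As a quick numerical check (a sketch, not part of the original text), substituting the constants listed above into the expressions for r1 and E1 reproduces two well-known values for hydrogen: the Bohr radius and the ground-state energy.

```python
import math

h = 6.626e-34   # Planck's constant (J s)
k = 8.988e9     # Coulomb's law constant (N m^2 C^-2)
m = 9.109e-31   # electron mass (kg)
e = 1.602e-19   # elementary charge (C)

# Bohr's expressions for the first principal energy shell (n = 1)
r1 = h**2 / (4 * math.pi**2 * k * m * e**2)
E1 = -2 * math.pi**2 * k**2 * m * e**4 / h**2

print(f"r1 = {r1:.3e} m")        # ~5.29e-11 m (the Bohr radius)
print(f"E1 = {E1 / e:.2f} eV")   # ~-13.6 eV (ground-state energy of hydrogen)
```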


Furthermore, the transitions between the electron orbits mentioned in postulate 2 explain the mechanism by which the hydrogen emission spectrum is produced. This is discussed in detail in the next section.

14.6  Bohr’s model and the hydrogen emission spectrum
■ Describe how Bohr’s postulates led to the development of a mathematical model to account for the existence of the hydrogen spectrum: 1/λ = R(1/nf² – 1/ni²)

According to Bohr, when electrons absorb energy, they move up to a higher orbit. The energy may be given in the form of heat, or of electricity passed through electrodes embedded in a glass tube, as shown in Figure 14.4. When this energy input is withdrawn, the excited electrons do not stay in these higher orbits; they later fall back to lower orbits. As the second postulate states, when the electrons fall back to lower orbits, they radiate energy in the form of EMR, the frequency of which is directly proportional to the difference in energy between the two levels (E = hf). This is illustrated in Figure 14.5.

∆E = Ei – Ef = hf

Where:
∆E = change in energy as the electron transits from one orbit to another (J)
h = Planck’s constant, 6.626 × 10–34 J s
f = frequency of the EMR released (Hz)
Ei = the energy of the initial orbit, that is, where the electron falls from (J)
Ef = the energy of the final orbit, that is, where the electron falls to (J)

Figure 14.5  The formation of the hydrogen emission spectrum: input energy (e.g. electrical energy) excites an electron, which moves from its principal energy shell around the nucleus to a higher energy shell; the excited electron then drops back to a lower energy shell, releasing energy into the environment as EMR, with the change in energy E = hf. Note that the electron has the option of falling to different lower orbits.



It is important to emphasise that when electrons fall back, they can go straight from the excited state to the lowest possible energy level or do this in many discrete steps. For example, an electron that is to fall from the fifth shell (n = 5) to the first shell (n = 1) has the choice of going from 5 to 1 directly, from n5 to n4 and then one shell at a time, from n5 to n3 and then to n1, or many other combinations. Basically, the falling of electrons is a probability function. These combinations lead to different energy changes, and hence the different wavelengths and therefore colours seen in the emission spectrum (see Fig. 14.5). To further add to the equation above, we also know that the energy level of each shell can be described by the equation:

En = (1/n²)E1

Hence we have:

∆E = hf = Ei – Ef

hf = (1/ni²)E1 – (1/nf²)E1

As c = fλ, f = c/λ, which can be substituted into the above equation:

h(c/λ) = E1[1/ni² – 1/nf²]

h(c/λ) = –E1[1/nf² – 1/ni²]

1/λ = (–E1/hc)[1/nf² – 1/ni²]

1/λ = R[1/nf² – 1/ni²], where R is –E1/hc and is called Rydberg’s constant.

Since E1 is negative, R is positive. Therefore:

1/λ = R(1/nf² – 1/ni²)

Where:
λ = wavelength (m)
R = Rydberg’s constant, 1.097 × 107 m–1
nf = principal energy shell the electron falls to
ni = principal energy shell the electron falls from

This equation is helpful as it can be used to calculate the frequencies and wavelengths of the emission spectrum provided that we know the initial and final


orbits of the transition. Therefore, it is clear that Bohr’s model essentially provides a theoretical explanation for the appearance of the hydrogen emission spectrum.

■■ Solve problems and analyse information using:
1/λ = R(1/nf² – 1/ni²)

Some examples:

Example 1
What is the wavelength of the emission line when an electron falls from the third energy shell to the second energy shell? Describe the nature of this EMR.

Solution
Using 1/λ = R[1/nf² – 1/ni²]
Where:
R = 1.097 × 107
nf = 2
ni = 3
λ = ?
1/λ = 1.097 × 107 × [1/2² – 1/3²]
λ = 6.56 × 10–7 m
The wavelength is calculated to be 6.56 × 10–7 m. This emission spectrum line corresponds to the first red light of the hydrogen spectrum. Note that similar calculations may apply to the other wavelengths.

Example 2
Calculate the lowest possible frequency of an emission line, if the final energy shell is always 1. Describe the nature of the radiation.

Solution
The lowest possible frequency corresponds to the smallest energy difference during the orbital transition. Hence, if the final orbit is 1, then the initial orbit can only be 2 (to give the smallest difference).
Using 1/λ = R[1/nf² – 1/ni²], then c = fλ, so f = c/λ
Where:
R = 1.097 × 107
nf = 1
ni = 2
λ = ?
1/λ = 1.097 × 107 × [1/1² – 1/2²]
λ ≈ 1.22 × 10–7 m
f = (3 × 108)/(1.22 × 10–7)
f ≈ 2.47 × 1015 Hz
This frequency is in the UV range.


Example 3
An emission spectrum line has a wavelength of 1.88 × 10–6 m (infrared). Supposing the final orbit is 3, calculate the orbit from which the electron falls.

Solution
Using 1/λ = R[1/nf² – 1/ni²]
Where:
R = 1.097 × 107
nf = 3
λ = 1.88 × 10–6
ni = ?
1/(1.88 × 10–6) = 1.097 × 107 × [1/3² – 1/ni²]
1/9 – 1/ni² = 1/(1.88 × 10–6 × 1.097 × 107)
1/ni² = 1/9 – 1/(1.88 × 10–6 × 1.097 × 107)
1/ni² = 0.0626
ni² = 15.97
ni = 4
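All three worked examples can be verified in a few lines of code. The sketch below (Python; the helper function name is ours) evaluates 1/λ = R(1/nf² – 1/ni²) directly:

```python
# Numerical check of Examples 1-3 using the Rydberg equation.
R = 1.097e7   # Rydberg's constant (m^-1)
c = 3e8       # speed of light (m s^-1)

def emission_wavelength(nf: int, ni: int) -> float:
    """Wavelength (m) emitted when an electron falls from shell ni to shell nf."""
    return 1 / (R * (1 / nf**2 - 1 / ni**2))

print(emission_wavelength(2, 3))      # Example 1: ~6.56e-7 m (red)
print(c / emission_wavelength(1, 2))  # Example 2: ~2.47e15 Hz (UV)

# Example 3: find the initial orbit giving a 1.88e-6 m line that ends at nf = 3
target = 1.88e-6
print(min(range(4, 10), key=lambda n: abs(emission_wavelength(3, n) - target)))  # 4
```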

Balmer series
From the previous examples, an emission spectrum can also include ultraviolet and infrared radiation in addition to visible light. The visible spectral lines were observed first. Years before Bohr proposed his theory to account for the hydrogen emission spectrum, a Swiss schoolteacher, Johann Balmer (1825–1898), realised in 1885 that the visible part of the hydrogen emission spectrum obeyed a simple mathematical relationship:

λ = b[n²/(n² – 2²)]

However, Balmer was only able to derive this equation using pure mathematics based on observational data, and had no explanation for why the equation held. Such equations are known as empirical. The visible part of the hydrogen spectrum was named the Balmer series in his honour. Bohr realised that the ‘2²’ in the equation was due to the fact that all emission lines in the visible part of the hydrogen emission spectrum are the result of electrons falling to the second energy shell (nf = 2). Hence Bohr’s ‘modern formula’ for the Balmer series is:

1/λ = R(1/2² – 1/ni²)
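Rearranging Bohr’s formula shows exactly where Balmer’s empirical constant comes from: 1/λ = R(1/2² – 1/ni²) = R(ni² – 4)/(4ni²), so λ = (4/R) × ni²/(ni² – 2²), giving b = 4/R ≈ 3.65 × 10–7 m. A short sketch (Python) confirming that Balmer’s formula reproduces the visible lines of Figure 14.4:

```python
# Balmer's empirical formula with b computed from Bohr's theory (b = 4/R).
R = 1.097e7  # Rydberg's constant (m^-1)
b = 4 / R    # ~3.646e-7 m

def balmer_wavelength(n: int) -> float:
    """Visible hydrogen line produced when an electron falls from shell n to shell 2."""
    return b * n**2 / (n**2 - 2**2)

for n in (3, 4, 5, 6):
    print(n, balmer_wavelength(n))  # ~656 nm, 486 nm, 434 nm, 410 nm
```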


Bohr’s model and his postulates enabled scientists to account for the hydrogen emission spectrum, as well as allowing them to calculate the wavelengths on a theoretical basis. He provided a theoretical explanation for Balmer’s formula, which had otherwise been derived from empirical observations. Therefore, his model of the atom was quite successful overall, not only structurally but also functionally. Other spectral line series have been observed. For instance, when electrons fall back to the first orbit, the EMR released is all in the ultraviolet range, and the series is called the Lyman series. When electrons are excited and fall back to the third orbit, the EMR released is always in the infrared range, and the series is called the Paschen series. There are many more, corresponding to the final energy level the electrons fall to.

14.7  More on the hydrogen emission spectrum
■ Process and present diagrammatic information to illustrate Bohr’s explanation of the Balmer series

To summarise the energy profile of the electron shells around an atom and the different types of spectral series mentioned above, we use a diagram (see Fig. 14.6). Note that the energy differences between the orbits form a converging pattern; that is, as the principal energy shells go outwards from the nucleus, the difference in energy between successive shells gets smaller. For example, the energy difference between n = 1 and n = 2 is large compared to that between n = 5 and n = infinity (see Fig. 14.6). Would you be able to work out the reason for this based on the equation En = (1/n²)E1?

The consequence of this is that the type of radiation emitted is determined by where the electron finally falls to. This is because the lower orbits have larger energy gaps; contributing more to the energy difference, they have more weight in determining the type of radiation. For instance, when an electron falls to the first shell, the energy difference between n = 2 and n = 1 is so large that, regardless of where the electron falls from, the energy difference will be large enough for the emission to be in the ultraviolet range. When an electron falls to the third shell, the energy gap is always going to be small, so the emission is always in the infrared range. Furthermore, the energy differences between emission lines on the red side of the spectrum are always larger than those on the blue side; consequently, the differences in frequency, and thus wavelength, between emission lines on the red side of the spectrum will be larger than those on the blue side. With any atomic emission spectrum, the lines towards the red side are further apart than the lines towards the blue side. Refer to Figure 14.4 again.

Figure 14.6  A summary of the hydrogen emission spectrum: the Lyman series (ultraviolet) ends at n = 1, the Balmer series (visible light) at n = 2 and the Paschen series (infrared) at n = 3. The energy levels, in electron volts, are n = 1: –13.6; n = 2: –3.40; n = 3: –1.51; n = 4: –0.85; n = 5: –0.54; n = ∞: 0
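The converging pattern in Figure 14.6 falls straight out of En = (1/n²)E1, as a minimal sketch shows (Python, using the –13.6 eV ground state from the figure):

```python
# Energy of each principal shell of hydrogen: E_n = E_1 / n^2.
E1 = -13.6  # ground-state energy (eV)

levels = {n: E1 / n**2 for n in (1, 2, 3, 4, 5)}
for n, E in levels.items():
    print(n, round(E, 2))    # -13.6, -3.4, -1.51, -0.85, -0.54

# The n=2 -> n=1 gap (~10.2 eV) dwarfs every gap between the outer shells,
# which is why all Lyman-series lines are ultraviolet.
print(levels[2] - levels[1])
```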

14.8  Limitations of Bohr’s model of the atom
■ Discuss the limitations of the Bohr model of the hydrogen atom
■ Analyse secondary information to identify the difficulties with the Rutherford–Bohr model, including its inability to completely explain:
– the spectra of larger atoms
– the relative intensity of spectral lines
– the existence of hyperfine spectral lines
– the Zeeman effect

Bohr’s model was overall quite successful. First, it provided a reason (although without proof) why electrons were able to stay away from the nucleus. Second, it explained the hydrogen emission spectrum. However, there were still a few fundamental inadequacies, as discussed below:

1. Bohr’s model used a mixture of classical physics and quantum physics without giving any reason for doing so. The classical physics in his model included the circular motion of the electrons and the concept of angular momentum, as well as Coulomb’s law; all were used to derive the equations for the radius and energy of each energy shell. The quantum physics aspects of his model included the quantisation of the electron orbits and transitions, the quantisation of energy (E = hf) and the quantisation of angular momentum as nh/2π. Furthermore, his quantum physics theories were radical and lacked rational explanation. For instance, in his first postulate Bohr stated that an electron was stable when it was in its orbit; although this explained how an electron could stay away from the nucleus, it was not logically convincing. Also, he could not explain why the electron’s angular momentum was quantised as nh/2π. Historically, Bohr’s model of the atom was known as the quantum theory, a hybrid between classical Newtonian physics and a brand new area of physics called quantum mechanics, which we will discuss briefly in Chapter 15.

2. The model could not explain the relative intensity of spectral lines. It was observed in experiments that some spectral lines were more intense, that is, brighter, than others, which indicated that some types of transitions were preferred over others. Bohr’s model failed to explain this.


3. The model did not work for multi-electron atoms. Bohr’s mathematics and equations worked well and accurately for hydrogen atoms. However, they failed when Bohr tried to apply them to atoms with more than one electron, even helium atoms. Obviously, this was inadequate, as a good model should work for all types of atoms.

4. The model could not explain the existence of hyperfine spectral lines. Hyperfine spectral lines are thin, faint lines that exist as a cluster around a main spectral line (sometimes they make up a main spectral line). They sit very close together and require close observation to distinguish between them. Bohr’s model only allowed the prediction of the main spectral lines; it could not explain why some transitions fell outside these main spectral lines, causing the hyperfine lines.

5. Bohr’s model could not explain the Zeeman effect. The Zeeman effect is the splitting of spectral lines when a powerful magnetic field is applied. Bohr’s model could not give a satisfactory explanation for this phenomenon.

Furthermore, just like Rutherford’s model, Bohr’s model did not include an explanation for the structure of the nucleus. This was later provided by other scientists, as discussed in Chapter 16.

You are encouraged to conduct your own research to further extend your understanding and appreciation of the relative intensity of spectral lines, the existence of hyperfine spectral lines and the Zeeman effect. Use these as key words to facilitate your research, using either Internet or library resources. Multimedia sources, such as videos or animations, will be particularly useful.

first-hand investigation
PFAs: H1
Physics skills: H12.1a, b, d; H12.2b

Observe the visible components of the hydrogen emission spectrum
■ Perform a first-hand investigation to observe the visible components of the hydrogen spectrum

The basic principle of the apparatus used to obtain a hydrogen emission spectrum and the expected results were discussed earlier in this chapter. Familiarise yourself with the setup and the underlying physics theories, and appreciate the colours of the emitted light. You should also be familiar with the calculations and equations used to determine the wavelengths of the spectral lines.

chapter revision questions
1. Describe Thomson’s ‘plum-pudding’ model of the atom.
2. In 1911, Rutherford’s assistants, Geiger and Marsden, performed the now famous experiment using alpha particle scattering.
(a) Describe the procedure of the experiment with the aid of a diagram.
(b) What relevant observations were made?
(c) Did the result contradict the knowledge of atoms at the time?
(d) What was the conclusion drawn from the experiment? What was the implication of this conclusion?
3. Draw up a table to contrast the differences (three or more) between quantum physics and classical physics.
4. Explain how an atomic emission spectrum is produced and describe two features of an atomic emission spectrum.
5. (a) Define Bohr’s three postulates in regard to the structure of the atom.
(b) How do these three postulates lay down the foundation for his model of the atom?
6. With the aid of equations, explain the pattern of change in radius and energy of the electron shells from the innermost one to the outermost.
7. What is the Balmer series and how is it related to Bohr’s theory of the atom?
8. Define the Lyman and Paschen series.
9. (a) Calculate the wavelength of the EMR emission when an electron falls from the fifth shell to the third shell.
(b) Calculate the wavelength of the EMR emission if the orbital transition of the electron is from the sixth level to the second level.
(c) Calculate the energy required to raise an electron from the first energy level to the seventh energy level.
(d) Calculate the frequency of the EMR emitted when an electron falls from the third energy level to the second energy level; hence, determine the nature of the EMR.
10. Calculate the first ionisation energy of a sodium atom.
11. Evaluate the successes and inadequacies of Bohr’s model of the atom.

CHAPTER 15

More on the models of the atom

The limitations of classical physics gave birth to quantum physics

Introduction
This chapter will describe and discuss the models of the atom proposed by Louis de Broglie, Wolfgang Pauli and Werner Heisenberg. These models are collectively referred to as quantum mechanics, which itself stands as a brand new area of physics that parallels classical Newtonian physics. Many of these concepts involve very complicated theories and sophisticated mathematics; however, for the scope of the HSC Physics course, only the main ideas will be described.

15.1  The wave–particle duality of light

For many centuries, light had been thought of as waves: it undergoes reflection, refraction, diffraction and interference, all of which are fundamental wave properties. However, as shown in Chapter 11, when Einstein was trying to explain the photoelectric effect theoretically in 1905, he had to assume that light behaved like particles, where each particle had an energy equal to Planck’s constant multiplied by the light’s frequency, E = hf. This particle model of light successfully explained the photoelectric effect. The phenomenon of matter behaving as both a wave and a particle at the same time is known as wave–particle duality. So the question is, how can something possess wave and particle characteristics at the same time? The answer lies in the fact that the wave model and particle model should be seen to complement each other to give a more complete description of a particular phenomenon, rather than to contradict each other. In terms of light, when one is dealing with properties such as reflection, refraction, diffraction and interference, the wave model of light applies. On the other hand, if one is explaining phenomena like the photoelectric effect, the particle model of light is more suitable. The two models do not contradict each other; rather, together they give rise to a better depiction of the overall properties of light. This chapter expands on this idea: wave–particle duality is not limited to light; it can be generalised to other substances.

Relationship between the wave characteristics and the particle properties
To quantitatively describe wave–particle duality, a formula can be used to link the wavelength of the wave to the momentum of the particle, that is:


λ = h/(mc)

Where:
λ = wavelength of the light (m)
h = Planck’s constant: 6.626 × 10–34 J s
m = mass of the photon (kg)
c = speed of the photon (light) = 3 × 108 m s–1
mc = momentum of the photon, measured in kg m s–1

It is important to note that the left-hand side of the equation is ‘wavelength’, which is a feature of a wave, whereas the right-hand side contains ‘momentum’, which is an exclusively particle property.

15.2  Matter waves
■ Describe the impact of de Broglie’s proposal that any kind of particle has both wave and particle properties

As mentioned before, wave–particle duality is not limited to light or EMR. In fact, it can be generalised to all other substances. This principle was first proposed by Louis de Broglie (1892–1987) in 1924. He ‘borrowed’ the equation λ = h/(mc) and generalised it by replacing c (the speed of light) with v, the speed of any particle. The consequence of this transformation is that any particle that has momentum has a wavelength, and therefore can behave like a wave, called a matter wave. This is particularly true for small particles like electrons, as we shall see in the examples.

Louis de Broglie

■■ Solve problems and analyse information using: λ = h/(mv)

Thus we have:

λ = h/(mv)

Where:
λ = wavelength of the matter wave (m)
h = Planck’s constant: 6.626 × 10–34 J s
m = mass of the particle (kg)
v = speed of the particle (m s–1)
mv = momentum of the particle (kg m s–1)


Example 1
What is the de Broglie wavelength of an object with a mass of 1.00 kg moving at a velocity of 1.00 m s–1?

Solution
λ = h/(mv)
λ = (6.626 × 10–34)/(1.00 × 1.00)
= 6.63 × 10–34 m
This wavelength is far too small to be observed! The consequence is that although all objects can possess wave characteristics, the wavelengths formed by massive objects are usually too small to be observed; therefore we usually cannot visualise them as waves (even with instruments).

Example 2
What is the de Broglie wavelength of an electron moving at 1.00 × 106 m s–1?

Solution
λ = h/(mv)
λ = (6.626 × 10–34)/[(9.11 × 10–31)(1.00 × 106)]
= 7.27 × 10–10 m
This wavelength is reasonably large and can be detected with the right instruments.

Example 3
Calculate the de Broglie wavelength of an electron when it is accelerated through a voltage of 54 V.

Solution
In this case, in order to calculate the wavelength of the electron, we must know its momentum, and hence its velocity. To calculate the velocity, we use the fact that the kinetic energy gained by the electron is derived from the electrical energy supplied by the voltage. Hence:
(1/2)mv² = qV
(1/2)mv² = q × 54
mv² = 108q
v² = (108)(1.602 × 10–19)/(9.11 × 10–31)
v = 4.4 × 106 m s–1
Once we have the velocity, we can calculate the wavelength:
λ = h/(mv)
= (6.626 × 10–34)/[(9.11 × 10–31)(4.4 × 106)]
= 1.67 × 10–10 m
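All three examples follow the same two-step pattern (find the momentum, then apply λ = h/(mv)), which the sketch below makes explicit (Python; the function names are ours):

```python
import math

h = 6.626e-34    # Planck's constant (J s)
m_e = 9.11e-31   # electron mass (kg)
q_e = 1.602e-19  # electron charge (C)

def de_broglie_wavelength(mass: float, speed: float) -> float:
    """Matter-wave wavelength lambda = h / (m v)."""
    return h / (mass * speed)

def accelerated_electron_wavelength(volts: float) -> float:
    """Wavelength of an electron accelerated from rest through `volts`,
    using (1/2) m v^2 = qV (non-relativistic)."""
    v = math.sqrt(2 * q_e * volts / m_e)
    return de_broglie_wavelength(m_e, v)

print(de_broglie_wavelength(1.0, 1.0))       # Example 1: ~6.6e-34 m
print(de_broglie_wavelength(m_e, 1.0e6))     # Example 2: ~7.3e-10 m
print(accelerated_electron_wavelength(54))   # Example 3: ~1.7e-10 m
```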

15.3  Proof for matter waves
■ Define diffraction and identify that interference occurs between waves that have been diffracted
■ Describe the confirmation of de Broglie’s proposal by Davisson and Germer

Any new proposal needs to be supported by experiment, and the concept of matter waves proposed by de Broglie was no exception. The experiment that confirmed the existence of matter waves was performed by two scientists, Clinton J. Davisson (1881–1958) and Lester H. Germer (1896–1971), in 1927. In order to understand the experiment, we need first to study a basic wave property known as diffraction.

Diffraction
Definition
Diffraction is the bending of waves as they pass around the corner of a barrier or move through obstacles such as a slit.

This concept is demonstrated in Figures 15.1 (a), (b) and (c). Consider the situation where a wave is allowed to pass through two slits that are adjacent to each other, as shown in Figure 15.2. As we would expect, the waves undergo diffraction. Furthermore, the diffracted waves now make contact with each other, and therefore interact with each other to cause interference. Recall that when the crest of one wave meets the crest of another wave, they combine to give an even bigger crest. A similar principle applies to two troughs; this is known as constructive interference. On the other hand, when a crest meets a trough, they cancel each other out—this is known as destructive interference. Because waves have crests and troughs that alternate, there will be alternating constructive

and destructive interference throughout the region where the two waves are in contact, with the exact pattern determined by the wavelengths of the two waves (see Fig. 15.2). Furthermore, if the waves are visible light, then a series of dark and bright lines can be seen. In the case of sound waves, alternating loud and soft sounds can be heard. For any other waves, alternating maximal and minimal signal intensity can be detected by instruments. Remember also that diffraction and interference are exclusive wave properties.

Figure 15.1  Diffraction of a wave: (a) as it passes around the corner of a barrier; (b) as it passes through a slit in a barrier; (c) as it passes through a small circular hole

The experiment
In 1927, Davisson and Germer set up an experiment similar to that shown in Figure 15.3. They fired energetic electrons towards a nickel crystal and studied the behaviour of these electrons as they scattered off the nickel surface. The electrons were first accelerated through a voltage of 54 volts to achieve a high velocity and were then directed towards the (heated) nickel crystal. The electrons, on reaching the nickel crystal, would be scattered off different planes of the nickel crystal, similar to Bragg’s experiment (described in Chapter 13). It is important to note that some of the returning (scattered) electrons would pass through the gaps between the nickel atoms, which act as many ‘slits’, so diffraction would occur. The situation was therefore similar to that shown in Figure 15.2, but more extensive. Consequently, interference patterns would be formed by the returning electrons. If a detector was run alongside the nickel crystal, a series of maxima and minima of electron intensity should be detected.

Figure 15.2  Interference pattern formed by two adjacent diffracted waves: the brighter regions represent constructive interference, whereas the darker regions represent destructive interference

Figure 15.3  Davisson and Germer’s electron-scattering experiment: an incident electron beam produced by a voltage of 54 V strikes a heated nickel crystal; a detector moved alongside picks up the interference patterns of the scattered electrons

In their experiment, Davisson and Germer were able to observe a series of maxima and minima of the scattered electrons, thus proving the wave nature of the electrons, and hence the existence of matter waves. Furthermore, from the interference pattern they were able to measure the wavelength of the electron waves that produced this diffraction pattern. The value agreed with the wavelength calculated using de Broglie’s equation λ = h/(mv) (refer to example 3 on page 263). In summary, the experiment was successful not only in determining the existence of electron waves, and therefore matter waves, but also in confirming the validity of de Broglie’s equation λ = h/(mv) to describe these matter waves.

G. P. Thomson’s electron diffraction experiment
Although G. P. Thomson’s experiment is not addressed in the syllabus, it is appropriate to mention it here. In 1928, G. P. Thomson, son of J. J. Thomson, passed an electron beam through a thin foil of gold. The electrons went through the thin foil and were scattered, landing on a photographic film behind the foil, where they created an interference pattern. He compared the pattern to that obtained using X-rays, which had been established to have wave characteristics, and saw that the two patterns were remarkably similar. From this, he was able to confirm the wave nature of the electrons. Ironically, the father, J. J. Thomson, was awarded the Nobel Prize for proving the particle nature of electrons, whereas a couple of decades later his son G. P. Thomson proved the wave characteristics of electrons.

15.4  Applying the matter waves to the electrons in an atom
■ Explain the stability of the electron orbits in the Bohr atom using de Broglie’s hypothesis

De Broglie stated that electrons can behave as waves, and this is true for all electrons, including those found in atoms. De Broglie went on to propose that:

Definition
The electrons in atoms behave like standing waves, which wrap around the nucleus in an integral number of wavelengths. They are known as electron waves.


The concept of standing waves is shown in Figure 15.4. Standing waves refer to waves that do not propagate but vibrate between two boundaries. The points that do not vibrate are called nodes, and the points that vibrate between maximum and minimum positions are known as anti-nodes. Also note that a faster vibration will result in a higher frequency and hence more waves. If we pick any of the standing waves from Figure 15.4 (a) to (c) and join the waves from head to tail so that they form a closed loop, they resemble the electron waves that wrap around the nucleus. Furthermore, as pointed out by de Broglie, the number of wavelengths of an electron wave wrapping around the nucleus must be an integer. This is because, in order for the standing wave to wrap around the nucleus, the beginning point of the wave must be in phase with the end point of the wave, and this only occurs if the wave finishes with a complete wavelength. For non-integral numbers of wavelengths, the beginning and end of the wave will be out of phase, resulting in destructive interference, which diminishes the wave. Hence electrons in the first energy level have one wavelength, as shown in Figure 15.5 (a); electrons in the second energy level have two wavelengths, as shown in Figure 15.5 (b); and so on.

Figure 15.4  Standing waves with (a) one, (b) two and (c) three wavelengths

Figure 15.5  Electron waves with (a) one (n = 1), (b) two (n = 2) and (c) three (n = 3) wavelengths around the nucleus


Note: It is incorrect to think that the sketches in Figure 15.5 represent the pathways for the electrons to move around the nucleus. The entire wave is actually one electron. Remember, electrons are now waves, not little moving particles!

The implications of de Broglie’s electron wave model
With the electron wave model of the atom, it is now possible to explain why electrons, when in their own energy level, are stable and do not emit EMR. In other words, using de Broglie’s model, Bohr’s first postulate can be explained: as electrons are now standing waves, they are no longer moving charges, and hence will not emit any radiation. Furthermore, standing waves do not propagate, and therefore are stable and will not lose any energy.
Second, de Broglie’s electron wave model enables a mathematical derivation of Bohr’s third postulate—the quantisation of angular momentum, which Bohr proposed radically without any theoretical support:
The circumference of the nth shell = total length of the electron wave
2πrn = nλ
Also, λ = h/(mv)
∴ 2πrn = nh/(mv)
mv(2πrn) = nh
mvrn = nh/2π

Thus, using de Broglie’s theory of matter waves and the matter wave equation, we are able to theoretically derive Bohr’s third postulate, thereby adding rationality to it. Historically, de Broglie’s model and his matter wave equation λ = h/(mv) also formed the foundation of a new area of physics known as quantum mechanics. This physics was later expanded and perfected by the work of many physicists. Quantum mechanics involves complicated physics theory and mathematics. The work and the contributions of Wolfgang Pauli and Werner Heisenberg to the development of the model of the atom and quantum mechanics are discussed briefly below.
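The closed-loop condition can also be checked numerically: the circumference of each Bohr orbit divided by the de Broglie wavelength of its electron should come out as the whole number n. A small consistency sketch (Python, reusing the Bohr-model constants from Chapter 14):

```python
import math

h = 6.626e-34; k = 8.988e9; m = 9.109e-31; e = 1.602e-19

for n in (1, 2, 3):
    r_n = h**2 * n**2 / (4 * math.pi**2 * k * m * e**2)  # Bohr radius of shell n
    v_n = n * h / (2 * math.pi * m * r_n)                # speed from mvr_n = nh/2pi
    wavelength = h / (m * v_n)                           # de Broglie wavelength
    print(n, 2 * math.pi * r_n / wavelength)             # circumference / wavelength = n
```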

Pauli and the exclusion principle

secondary source investigation
PFAs: H1, H3
Physics skills: H12.3 a, b, c, d; H12.4 f; H13.1 a, b, c, d, e; H14.1 a, b, e, f

■ Gather, process, analyse and present information and use available evidence to assess the contributions made by Heisenberg and Pauli to the development of atomic theory

In 1925, Wolfgang Pauli (1900–1958) proposed a theory for which he was famous—the exclusion principle.


Definition
The exclusion principle states that no two electrons in the same atom can have all four quantum numbers the same.

Wolfgang Pauli

The quantum numbers
In order to understand the exclusion principle, you must first examine the meaning of the four quantum numbers.
1. The principal quantum number (n)
– This quantum number is related to the principal energy shells that Bohr proposed.
– It takes the values n = 1, 2, 3 … z, where z is any integer.
– For example, the electron in a hydrogen atom takes the value n = 1. For a lithium atom, two electrons are in the first energy shell, and thus both take the value n = 1; the other electron is in the second shell and hence takes the value n = 2.
2. The orbital quantum number (l)
– This quantum number is related to the angular momentum and therefore to the orbital shape of the electrons.
– These are also known as the sub-shells in chemistry.
– It can take the values l = 0, 1, 2 … (n – 1). For example, when n = 1, l takes the value 0. When n = 2, l takes the values 0 and 1. When n = 3, l can take the values 0, 1 and 2.
– Each orbital quantum number relates to a particular orbital shape and the corresponding angular momentum. For example, 0 is spherical in shape and 1 is pear shaped. Electrons with different orbital quantum numbers have slightly different energies, even if they are within the same energy shell (the same principal quantum number). This slight difference in energy explains the existence of hyperfine spectral lines.

Note: Slight differences in energy result in slight differences in frequency, which constitute the hyperfine lines.
3. The magnetic quantum number (ml)
– This is the quantum number assigned to the magnetic orientation (moment) of the electron orbiting in a magnetic field.
– It can take the values –l, … –2, –1, 0, +1, +2 … +l. For example, when an electron is in the second energy level, n = 2. This electron can have two possible orbital shapes, as l takes the values 0 and 1. For l = 0, ml is 0, correlating to one possible magnetic moment for this electron orbit. For l = 1, ml can take the values –1, 0 and 1, correlating to three possible magnetic moments for this electron orbit.
– The magnetic quantum number can also be used to explain the Zeeman effect. (Not required by the syllabus.)
4. The magnetic spin quantum number (ms)
– This quantum number is assigned to the spin of electrons about their own axis. Each electron can spin in two different ways, known as positive one-half spin (+½) and negative one-half spin (–½).

Pauli’s exclusion principle
To understand the exclusion principle, examine it using an actual atom as an example. Say for a lithium atom, in which there are three electrons, we can assign one of the electrons n = 1, in which case l will be 0 and ml will also be 0. The magnetic spin (ms) for this electron can be assigned as either +½ or –½. Now for the second electron, if we assign it n = 1, then both l and ml are again 0. It follows that, in order to obey the exclusion principle, ms must be –½ if the previous electron was assigned +½, and vice versa, to avoid having the same four quantum numbers for both electrons. Finally, for the third electron, it is now impossible to assign it n = 1 without it having the same four quantum numbers as one of the previous two electrons. Since electrons have to fill from a lower energy shell to a higher energy shell, it follows that this electron must be assigned n = 2. This logically explains why atoms can only hold a maximum of two electrons in the first shell.

Now let us look at a more complicated atom, such as magnesium. The electrons of the magnesium atom have been labelled from 1 to 12, and each electron has been assigned a set of possible quantum numbers (see Fig. 15.6):
1. n = 1, l = 0, ml = 0, ms = ?+½
2. n = 1, l = 0, ml = 0, ms = –½
3. n = 2, l = ?0, ml = 0, ms = ?+½
4. n = 2, l = ?0, ml = 0, ms = –½
5. n = 2, l = 1, ml = ?–1, ms = ?+½
6. n = 2, l = 1, ml = ?–1, ms = –½
7. n = 2, l = 1, ml = ?0, ms = ?+½
8. n = 2, l = 1, ml = ?0, ms = –½
9. n = 2, l = 1, ml = 1, ms = ?+½
10. n = 2, l = 1, ml = 1, ms = –½
11. n = 3, l = ?2, ml = ?–2, ms = ?+½
12. n = 3, l = ?1, ml = ?0, ms = –½

Figure 15.6  The quantum numbers for a magnesium atom

Note that where a question mark is used, it represents the potential for an alternative quantum number: for instance, the ms of the first electron could be –½, but if that is the case, the second electron will have ms = +½. It is clear from Figure 15.6 that no two electrons in this atom have all four quantum numbers the same. It is also clear that if the second shell were to hold more than eight electrons, the exclusion principle would break down. This effectively explains why the electron configuration of a magnesium atom must be 2, 8, 2.

In summary, Pauli’s exclusion principle provides a very solid theoretical background for why electrons have to be configured the way they are in atoms: in other words, why the first electron shell only holds two electrons, whereas the second holds eight, the next 18, then 32 and so on. (Try to verify this by carrying out a similar exercise to that in Figure 15.6.) The principle also explains the regularity of the periodic table and the reason for each atom’s position in it. Pauli’s exclusion principle can be seen as a further advancement on Bohr’s model in how to place electrons around the nucleus. Most electron behaviour can potentially be explained using the quantum numbers and the exclusion principle. Consequently, Pauli’s model can be seen as more comprehensive and complete than the earlier theories.
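The shell capacities quoted above follow mechanically from counting the allowed combinations of quantum numbers, an exercise that is easy to automate (a sketch; the count is exhaustive rather than clever):

```python
# Count the allowed (l, ml, ms) combinations within each principal shell n.
# By the exclusion principle, each combination holds at most one electron.
for n in range(1, 5):
    states = [
        (l, ml, ms)
        for l in range(n)              # l = 0 ... n-1
        for ml in range(-l, l + 1)     # ml = -l ... +l
        for ms in (0.5, -0.5)          # the two spin states
    ]
    print(f"n = {n}: {len(states)} electrons")  # 2, 8, 18, 32 (= 2n^2)
```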

Pauli’s other contributions
Wolfgang Pauli was also famous for the prediction of the existence of neutrinos (this is discussed in Chapter 16).

Heisenberg and the uncertainty principle

secondary source investigation
PFAs: H1, H3
Physics skills: H12.3 a, b, c, d; H12.4 f; H13.1 a, b, c, d, e; H14.1 a, b, e, f

■ Gather, process, analyse and present information and use available evidence to assess the contributions made by Heisenberg and Pauli to the development of atomic theory

In 1927, Werner Heisenberg (1901–1976) proposed the uncertainty principle, for which he won the Nobel Prize.

Definition
The uncertainty principle states that the product of the uncertainty in measuring the position and the uncertainty in measuring the momentum of an object must always be equal to or larger than a constant.


Mathematically, the equation is written as ∆p∆x ≥ h/4π, where ∆p is the uncertainty in the momentum measurements (kg m s–1), ∆x is the uncertainty in the measurements of position (m) and h is Planck’s constant. It is important to point out that the uncertainty in this principle arises not as a result of errors in measurement during an experiment, but rather as a result of the inherent properties of matter at atomic levels. The uncertainty principle is also a direct result of the wave–particle duality of matter. In simple terms, the principle indicates that if one knows everything about the momentum of an object, then one has absolutely no idea of its position. On the other hand, if one knows the position of an object for certain, nothing is known about its momentum, and hence its wavelength (λ = h/(mv)). In other words, one has to sacrifice some certainty in one quantity in order to know anything about the other. This is a new perspective on matter at atomic levels: nothing can be measured absolutely, and the momentum and position of any particle at the atomic level have to be assessed with some degree of uncertainty. This adds a further dimension to the electrons inside the atom, in addition to the standing wave theory and the quantum numbers.
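To get a feel for the scale involved, the sketch below applies ∆p ≥ h/(4π∆x) to an electron whose position is pinned down to atomic precision (the 1 × 10–10 m confinement is our assumed example):

```python
import math

h = 6.626e-34   # Planck's constant (J s)
m_e = 9.11e-31  # electron mass (kg)

def min_momentum_uncertainty(dx: float) -> float:
    """Smallest possible momentum uncertainty for a position known to within dx (m)."""
    return h / (4 * math.pi * dx)

dp = min_momentum_uncertainty(1e-10)  # electron confined to an atom-sized region
print(dp)                             # ~5.3e-25 kg m/s
print(dp / m_e)                       # speed uncertainty ~5.8e5 m/s, far from negligible
```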

Werner Heisenberg

Note: The work done by Louis de Broglie, Wolfgang Pauli and Werner Heisenberg, as well

as the contributions made by Max Born, Erwin Schrödinger and Paul Dirac, are collectively known as quantum mechanics, a new physics that is separate from Newtonian physics. It is a further advancement from the quantum theory proposed by Niels Bohr.

chapter revision questions
1. Define wave–particle duality.
2. Explain the meaning of matter waves.
3. (a) Calculate the wavelength of the matter wave when a tennis ball with a mass of 20.0 g is moving at a velocity of 2.00 m s–1.
(b) Calculate the wavelength of the matter wave when a proton is made to move at a velocity of 3.00 km s–1.
(c) Calculate the frequency of the matter wave when a neutron is made to move at 53.6 × 103 m s–1.
4. When an electron is accelerated by a voltage, it achieves a high velocity, such as in the case of a cathode ray tube (CRT) (see figure).
(a) Demonstrate, by drawing on the diagram, the direction in which the electron moves in the CRT.
(b) What is the wavelength of the matter wave of the electron when the voltage is set at 32.0 V?
(c) What is the required voltage to produce an electron wave with a wavelength of 1.32 × 10–10 m?
5. Electron microscopes are devices that use the wave characteristics of electrons to produce magnified pictures of minute objects.
(a) One of the steps in the operation of an electron microscope is to focus the electrons, and this is done by magnetic fields. Suppose beams of electrons travel from left to right parallel to each other, as shown in the diagram. Draw a possible pattern of magnetic field that can be used to focus these beams (to bring these beams to one point).
(b) The electrons are accelerated by using a power source. Calculate the voltage required to produce electron waves with a wavelength of 0.102 nm.
(c) Use the information in (b) to explain why an electron microscope is used to magnify things that are too small to be seen by a light microscope.
6. With the aid of a diagram, describe the experiment performed to prove the existence of matter waves. In your answer, you should mention the principles of interference and diffraction.
7. (a) Describe de Broglie’s electron-wave model of atoms.
(b) Evaluate why his model was successful. In your answer, you should discuss the inadequacies the model addressed and the improvements the model made.
8. (a) Define Pauli’s exclusion principle.
(b) Using the sodium and argon atoms as two examples, write down a set of possible quantum numbers for these two atoms; explain the meaning and the significance of the exclusion principle.
9. Define Heisenberg’s uncertainty principle and describe its significance in the context of the models of the atom.
10. Construct a table that describes chronologically all the events you know that are relevant to the development of the models of atoms.

CHAPTER 16

The nucleus and nuclear reactions

The work of Chadwick and Fermi in producing artificial transmutations led to practical applications of nuclear physics

Introduction
Chapters 14 and 15 discussed the various models of the atom, concentrating on the electrons around the nucleus. This chapter studies the structure, as well as the components, of the nucleus.

16.1  The nucleus
■ Define the components of the nucleus (protons and neutrons) as nucleons and contrast their properties

Rutherford was the first person to use the term ‘nucleus’, as part of the conclusion of his alpha particle scattering experiment described in Chapter 14. It is now known that two types of particles exist inside the nucleus: protons and neutrons. Protons and neutrons are collectively called nucleons. The properties of the two types of nucleons are summarised in Table 16.1.

Table 16.1
Properties    Proton                 Neutron
Location      Inside the nucleus     Inside the nucleus
Mass          1.673 × 10–27 kg       1.675 × 10–27 kg
Charge        1.602 × 10–19 C        0

The development of the knowledge of the nucleus
The nucleus described by Rutherford was simply a concentrated mass of positive charge. No one at the time really knew what it was composed of or what its internal structure was. Again, just as with the electron models, it took the work of many scientists over a few decades to determine the structure and components of the nucleus. Protons were the first nucleons to be discovered, in a similar fashion to the way electrons were discovered: the charge-to-mass ratio of protons was measured in a discharge tube containing hydrogen ions. (Hydrogen ions are simply protons: why?) Neutrons were discovered later and are discussed in the next section.


16.2  The discovery of neutrons
■ Discuss the importance of conservation laws to Chadwick’s discovery of the neutron

Soon after learning about the existence of the nucleus and protons, some scientists started to speculate that the nucleus should contain A protons (where A is the mass number) and A – Z electrons (where Z is the atomic number). For example, a sodium nucleus should contain 23 protons (A = 23) and 23 – 11 = 12 electrons (Z = 11). This hypothesis worked well for two main reasons:
1. It successfully explained why atoms had a larger mass number than the actual number of positive charges (protons). Note that the 12 electrons in the nucleus would cancel out the positive charges of 12 of the protons, leaving a net charge of positive 11 rather than 23.
2. It explained beta emission, a radioactive decay in which electrons are ejected from the nucleus. (This is discussed later in this chapter.) Although incorrect, scientists at the time thought the only way to account for this phenomenon was if there were electrons inside the nucleus.
However, other scientists believed that there existed another type of particle inside the nucleus. They hypothesised that these particles had a similar mass to protons and were neutral in charge; these particles were neutrons.

Chadwick’s discovery of neutrons

James Chadwick

In 1930, the German scientist Walther Bothe (1891–1957) noted that when the element beryllium was bombarded with alpha particles (helium nuclei), a neutral but highly penetrating ‘radiation’ could be obtained. However, he could not explain the nature of this ‘radiation’. In 1932, the Englishman James Chadwick (1891–1974) proposed that the unknown radiation obtained from the alpha particle bombardment of beryllium was in fact a stream of neutrons. He then set up an experiment to try to quantitatively study this unknown ‘radiation’. The neutral charge of this ‘radiation’ was easily demonstrated, as it was not deflected by electric or magnetic fields. To measure the mass of the proposed neutrons, Chadwick directed the neutrons produced towards a block of paraffin wax (see Fig. 16.1). Paraffin wax is rich in hydrogen atoms, and hence protons. The proposed neutrons, when directed towards the paraffin wax, should have a good chance of colliding with those protons and knocking them out. As a result, protons would be ejected from the paraffin wax and could be measured by the detector, which allowed the energy and the velocity of the ejected protons to be assessed.

Figure 16.1  Chadwick’s experiment: alpha particles from an α source strike beryllium metal, producing neutrons; the neutrons knock protons out of proton-rich paraffin wax, and these protons can be measured directly by a detector


By applying the law of conservation of momentum and the law of conservation of energy, Chadwick was able to calculate backwards to determine that the mass of the neutron was approximately equal to that of the proton. The existence of the neutron was experimentally shown! Chadwick demonstrated the existence of neutrons without directly observing them; rather, he did it by demonstrating the neutrons’ properties.
Note: The calculation itself is not required by the syllabus; however, students should appreciate that it is similar to that for the collision between two cars described in the Preliminary Course.
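Chadwick’s analysis worked backwards from the measured proton recoils; run forwards for the simplest head-on case, the two conservation laws reduce to the standard elastic-collision result, sketched below (the neutron speed is an assumed illustrative value):

```python
def elastic_collision(m1: float, v1: float, m2: float) -> tuple[float, float]:
    """1-D elastic collision of m1 (moving at v1) with a stationary m2.
    Final velocities follow from conservation of momentum and kinetic energy."""
    v1_final = (m1 - m2) / (m1 + m2) * v1
    v2_final = 2 * m1 / (m1 + m2) * v1
    return v1_final, v2_final

m_n = 1.675e-27  # neutron mass (kg)
m_p = 1.673e-27  # proton mass (kg)

# Because the two masses are nearly equal, a head-on neutron hands almost all
# of its speed to the proton, which is why the proton recoils were so telling.
print(elastic_collision(m_n, 1.0e7, m_p))
```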

It is important to point out that neutrons are difficult to assess directly, as they do not have any charge and therefore cannot be manipulated easily. The clever part of Chadwick’s experiment is that it translates the difficult-to-measure neutrons into easily measured protons. Since protons have charge, they can be easily manipulated, just as electrons can be manipulated in cathode ray tubes to allow their properties to be assessed. To summarise the reaction taking place in Chadwick’s experiment, we can make use of a nuclear equation:

^4_2 He (α) + ^9_4 Be → ^12_6 C + ^1_0 n

Note: When writing nuclear equations, the total mass number on the left-hand side of the equation should equal the total mass number on the right, that is, 4 + 9 = 12 + 1; similarly, the atomic numbers should also be equal on both sides: 2 + 4 = 6 + 0.
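The balancing rule in the note is mechanical enough to code up (a sketch; each particle is written as an (A, Z) pair):

```python
# Check that a nuclear equation balances: the mass numbers (A) and the atomic
# numbers (Z) must each sum to the same total on both sides.
def balanced(lhs: list[tuple[int, int]], rhs: list[tuple[int, int]]) -> bool:
    return (sum(a for a, _ in lhs) == sum(a for a, _ in rhs)
            and sum(z for _, z in lhs) == sum(z for _, z in rhs))

# Chadwick's reaction: He-4 + Be-9 -> C-12 + n-1
print(balanced([(4, 2), (9, 4)], [(12, 6), (1, 0)]))  # True
```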

The cloud chamber The cloud chamber was originally developed by Scottish physicist, Charles Wilson (1869–1959), therefore it is sometimes called the Wilson cloud chamber. Definition

A cloud chamber is a device used to detect the presence of radiation. It also allows radiation to be observed and manipulated so that its properties can be assessed.

A cloud chamber consists of a glass tube, usually cylindrical in shape, with dry ice at its base. On one side a light bulb illuminates the chamber; on the other side there is an entrance for the radiation. This is shown in Figure 16.2. The tube is filled with super-saturated alcohol and water vapour, and the dry ice maintains a cool temperature inside the tube. As radiation travels through the chamber, it ionises some of the molecules inside the chamber by knocking out their electrons. The ionised molecules then act as nucleation centres on which the super-saturated vapour condenses.

Figure 16.2  A cloud chamber: radiation (α, β or γ) entering the super-saturated alcohol/water vapour ionises molecules along its path, and condensation on these ions marks out the track of the radiation


Note: A nucleation centre is where condensation can occur.

As the radiation passes through the chamber and ionises molecules along the way, it creates a track of nucleation centres for the vapour to condense on, thereby outlining the pathway of the radiation. The type of radiation can be distinguished by the nature of the track it creates: alpha particles (see the next section), which ionise the most strongly, create the thickest tracks, whereas gamma rays, which ionise the least, create the thinnest tracks. Furthermore, the radiation can be manipulated using electric or magnetic fields in order to assess its properties and make measurements.

16.3  Radioactivity and transmutation
■ Define the term 'transmutation'
■ Describe nuclear transmutations due to natural radioactivity

Definition

Radioactivity is the spontaneous release of energy or energetic particles from unstable nuclei.

In nature there are three types of radioactivity (radioactive decay): alpha (α), beta (β) and gamma (γ). Alpha and beta radiation are particles, while gamma radiation is electromagnetic radiation.

Definition

Transmutation is the phenomenon in which one element changes its identity to become another element.

Transmutation can be either natural, through α, β or γ decay, or artificial. This chapter first examines in detail the nature of α, β and γ decays and their associated transmutations, and then examines examples of artificial transmutation.

Alpha () radiation or decay Alpha decay refers to an unstable nucleus emitting an alpha particle (α). Alpha radiation (particles) are energetic helium nuclei, in other words, helium atoms without their two electrons, and are written as 42 He. Obviously, an alpha particle has two protons and two neutrons, and is doubly positively charged due to the two protons. What happens when alpha decay occurs?

For each alpha particle emitted, two neutrons and two protons (hence four nucleons) are lost. This reduces the mass number by four and the atomic number by two, and results in transmutation. This transmutation is a natural process. A general equation for alpha decay, writing [A, Z]X for a nuclide X with mass number A and atomic number Z, is:

[A, Z]X → [A − 4, Z − 2]Y + ⁴₂He (α)    (transmutation)

Why does alpha decay occur?

As a general rule, unstable elements become more stable through the process of radioactive decay. When alpha decay occurs, the nucleus becomes smaller and more stable. Hence alpha decay occurs for elements that are too 'big'; elements are considered too 'big' if their atomic number is equal to or greater than 83. Some examples include:

²³⁸₉₂U → ²³⁴₉₀Th + ⁴₂He (α)
²⁴¹₉₅Am → ²³⁷₉₃Np + ⁴₂He (α)

Note: Both ²³⁸₉₂U and ²⁴¹₉₅Am are elements beyond element 83, and are therefore too large to be stable.

Beta () radiation or decay There are two types of β decay: β− decay and β+ decay. β+ decay is not required by the ‘From Quanta to Quarks’ syllabus, and therefore is not discussed in this chapter. (Also note an alternative to β+ decay is electron capture.) β− decay occurs when an unstable nucleus breaks down to emit β− radiations (particles). β− particles are fast-moving electrons and have the symbol of –10e. The β− particle is derived from the conversion of a neutron into a proton and electron inside the nucleus; the electron is ejected from the nucleus whereas the proton stays within the nucleus: 10 n → 11 p + –10e What happens when − decay occurs?

For each β⁻ particle emitted, a neutron is converted to a proton, so the total number of nucleons, and hence the mass number, does not change. However, because there is now an added proton, the atomic number increases by one. This again results in natural transmutation. A general equation for β⁻ decay is:

[A, Z]X → [A, Z + 1]Y + ⁰₋₁e (β⁻)    (transmutation)

Why does β⁻ decay occur?

Through β⁻ decay a neutron is converted into a proton and the element becomes more stable. Hence β⁻ decay occurs when there are too many neutrons compared to protons, or too few protons compared to neutrons. Generally, for small atoms the neutron-to-proton ratio should be about 1:1, whereas for larger atoms such as uranium the ratio can be as high as 1.5:1. You may need to consult the periodic table to determine whether a particular atom has too many neutrons. Some examples include:

¹⁴₆C → ¹⁴₇N + ⁰₋₁e (β⁻)
⁹⁰₃₈Sr → ⁹⁰₃₉Y + ⁰₋₁e (β⁻)
⁶⁰₂₇Co → ⁶⁰₂₈Ni + ⁰₋₁e (β⁻)

Note that ¹⁴₆C, ⁹⁰₃₈Sr and ⁶⁰₂₇Co all have more neutrons than their stable isotopes listed in the periodic table. Isotopes are atoms of the same element with different numbers of neutrons; isotopes that undergo radioactive decay are referred to as radioisotopes. There is another small particle accompanying β⁻ decay, which will be discussed in the next section.
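The neutron-excess test described above can be expressed as a rough rule of thumb. The sketch below is a simplification for illustration only (real stability lines are more complicated than a fixed 1:1 ratio), flagging a light nuclide as a β⁻ candidate when it is neutron-rich.

```python
# Rough illustration (not from the textbook) of the neutron-to-proton ratio
# test for beta-minus decay in light nuclei.

def likely_beta_minus(A, Z, stable_ratio=1.0):
    """Flag a light nuclide as a beta-minus candidate if it is neutron-rich."""
    neutrons = A - Z
    return neutrons / Z > stable_ratio

print(likely_beta_minus(14, 6))   # C-14: 8 neutrons vs 6 protons -> True
print(likely_beta_minus(12, 6))   # C-12: 6 neutrons vs 6 protons -> False
```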

Gamma () radiation or decay Gamma (γ) radiation is the highest frequency electromagnetic radiation in the EMR spectrum. Gamma decay occurs when atoms try to discharge the excessive amounts of energy from the nucleus. The nucleus would have the excessive amount of energy usually as a result of some kind of prior disturbance, such as having been bombarded by neutrons from an external source or having previously undergone alpha or beta decay. Gamma radiation is pure energy so by itself does not cause transmutation. Nevertheless, through gamma decay, the element becomes more stable. Some examples include: (a)

60 27Co



60m 28Ni

+

0 –1e

60 And immediately 60m 28Ni → 28Ni + γ No transmutation



Overall equation:

(b) 99 42Mo →

99m 43Tc

+

60 27Co



60 28Ni

+

0 –1e

m = metastable/excited, indicating that the nucleus has excessive amounts of energy. +γ

0 –1e

99 After a while 99m 43Tc → 43Tc + γ   No transmutation

Note that for cobalt-60, because the gamma radiation occurs immediately after the beta decay, the two forms of radiation are sometimes said to occur together, and cobalt-60 is described as a co-emitter of beta and gamma radiation. In the second example, however, the gamma decay of technetium-99m is a delayed process. Consequently, technetium-99m is described as a pure gamma emitter and its parent isotope molybdenum-99 is described as a beta emitter. Nevertheless, the principle of gamma decay is the same in both cases; in particular, gamma decay by itself does not cause transmutation.

Some examples of artificial transmutations
Both alpha and beta decays lead to natural transmutation. However, as mentioned before, transmutation can also be achieved by artificial means. These include:
1. bombarding elements with α particles, as seen in Chadwick's experiment
2. bombarding elements with slow neutrons, which is discussed in detail later in this chapter
3. bombarding elements with charged particles at high speeds in a particle accelerator, which is covered in Chapter 17.


16.4  Wolfgang Pauli and the discovery of neutrinos
■ Discuss Pauli's suggestion of the existence of the neutrino and relate it to the need to account for the energy distribution of electrons emitted in β-decay

When radioisotopes undergo alpha decay, the ejected alpha particles have energies that are either identical or vary in a predictable way. However, when beta decay occurs, the energies of the ejected electrons exhibit a wide range, from a minimum of approximately 0.02 MeV to a maximum of approximately 1.2 MeV, as shown in the graph in Figure 16.3. An obvious question arises: if a beta particle can achieve a maximum energy of 1.2 MeV, what accounts for the missing energy of those beta particles with sub-maximal energies? In other words, if no energy loss can be identified, then all of the beta particles should have the same maximal energy. This puzzle led some scientists, including Niels Bohr, to question the validity of the law of conservation of energy at the atomic level. It was the Austrian physicist Wolfgang Pauli who solved the puzzle by proposing that another small particle was co-emitted during beta decay and carried away the missing energy. This particle was later termed the neutrino.

Note: 1 MeV is one mega electronvolt. 1 electronvolt = 1.602 × 10⁻¹⁹ J; therefore 1 MeV = 1.602 × 10⁻¹³ J. The electronvolt is an alternative unit of energy.

Figure 16.3  The energy profile of beta decay (relative number of electrons versus kinetic energy of electrons, in MeV): the beta particles emitted during a beta decay exhibit a whole range of energy levels, up to an end-point maximum Ek(max)

Properties of the neutrino
The proposed neutrino was hypothesised to be electrically neutral and to have no rest mass. However, it carried energy and momentum and travelled at the speed of light. The neutrino carries away a variable fraction of the energy released during beta decay so that the total energy of the decay is always conserved—if the beta particle carries more energy, the neutrino carries less, and vice versa. Billions of neutrinos from the Sun pass through the Earth every day undetected. This is because neutrinos have almost no mass and are electrically neutral, and therefore interact only minimally with matter. Taking the existence of neutrinos into account, the equations for beta decay should be written as:

β⁻ decay: ¹⁴₆C → ¹⁴₇N + ⁰₋₁e (β⁻) + ν̄    (ν̄ = anti-neutrino)
β⁺ decay: ¹⁸₉F → ¹⁸₈O + ⁰₊₁e (β⁺) + ν    (ν = neutrino)

Remember that the anti-neutrino is associated with the emission of an electron (β⁻), whereas the neutrino is associated with the release of an anti-electron, or positron (β⁺). The neutrino and anti-neutrino, like the electron and positron, are known as matter and anti-matter pairs.
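A tiny numerical illustration (not from the textbook) of Pauli's energy bookkeeping follows: the end-point energy of the beta spectrum is shared between the electron and the anti-neutrino, so the total is always conserved. The electron energy chosen below is an arbitrary example value.

```python
# Energy sharing in beta decay: E_endpoint = E_beta + E_neutrino.

MEV_TO_J = 1.602e-13  # 1 MeV in joules (from the note above)

E_endpoint = 1.2   # end-point (maximum) beta energy in MeV, as in Figure 16.3
E_beta = 0.45      # one possible measured electron energy in MeV - illustrative

E_neutrino = E_endpoint - E_beta  # the 'missing' energy carried by the neutrino
print(f"anti-neutrino carries {E_neutrino:.2f} MeV = {E_neutrino * MEV_TO_J:.2e} J")
```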


The detection of neutrinos
The detection of neutrinos is extremely difficult for three reasons:
1. Neutrinos have no charge, and hence do not cause ionisation, so they cannot be detected by conventional detectors such as a cloud chamber.
2. Neutrinos have virtually zero mass, and therefore will not undergo collisions the way neutrons do in Chadwick's experiment.
3. Neutrinos are invisible; unlike photons, neutrinos cannot be seen.
The existence of neutrinos was predicted in 1934 by Pauli, but it took about 20 years for technology to evolve far enough for neutrinos to be detected experimentally.

Note: When matter meets anti-matter, the two annihilate each other's mass completely. This mass is converted into energy in the form of gamma rays, according to the equation E = mc².

16.5  Strong nuclear force
■ Evaluate the relative contributions of electrostatic and gravitational forces between nucleons
■ Account for the need for the strong nuclear force and describe its properties

The nucleus contains protons and neutrons. Protons are positively charged particles, so when they are placed next to each other they tend to repel. The following example illustrates the forces of interaction between protons placed next to each other inside the nucleus.

Example

On average, two nucleons (protons) are separated by a distance of 1.30 × 10⁻¹⁵ m. Calculate:
(a) the repulsive force between the two protons due to their electric charge
(b) the gravitational attraction force between these two protons.

Solution
(a) The size of the electrostatic force is given by:

F_q = (9 × 10⁹) × q₁ × q₂ / d²

Known quantities: q₁ = q₂ = 1.602 × 10⁻¹⁹ C, d = 1.30 × 10⁻¹⁵ m. Unknown: F_q.
Substituting into the equation:

F_q = (9 × 10⁹) × (1.602 × 10⁻¹⁹) × (1.602 × 10⁻¹⁹) / (1.30 × 10⁻¹⁵)²
    = 137 N (repulsion)

(b) The size of the gravitational attraction force is given by:

F_g = (6.67 × 10⁻¹¹) × m₁ × m₂ / d²

Known quantities: m₁ = m₂ = 1.673 × 10⁻²⁷ kg, d = 1.30 × 10⁻¹⁵ m. Unknown: F_g.
Substituting into the equation:

F_g = (6.67 × 10⁻¹¹) × (1.673 × 10⁻²⁷) × (1.673 × 10⁻²⁷) / (1.30 × 10⁻¹⁵)²
    = 1.10 × 10⁻³⁴ N (attraction)
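The worked example can be checked numerically. The short script below reproduces both forces using the same constants as the example and prints their ratio, which makes the point of the comparison explicit: the electrostatic repulsion exceeds the gravitational attraction by a factor of roughly 10³⁶.

```python
# Check of the worked example: Coulomb repulsion vs gravitational attraction
# between two protons separated by a typical nucleon spacing.

k = 9e9        # Coulomb constant (N m^2 C^-2), as used in the example
G = 6.67e-11   # gravitational constant (N m^2 kg^-2)
q = 1.602e-19  # proton charge (C)
m = 1.673e-27  # proton mass (kg)
d = 1.30e-15   # average separation between nucleons (m)

F_electric = k * q * q / d**2
F_gravity = G * m * m / d**2
print(f"electric: {F_electric:.0f} N, gravity: {F_gravity:.2e} N")
print(f"ratio: {F_electric / F_gravity:.1e}")  # roughly 1e36
```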

You can see from the above example that the gravitational attraction trying to hold the protons together is nowhere near as strong as the repulsive electrostatic force. Consequently, it would seem impossible for the nucleus to hold its protons together unless there exists another holding force. This is the strong nuclear force.

The strong nuclear force is one of the four fundamental forces in the Universe. The fundamental forces are the gravitational force, the electromagnetic force, the strong nuclear force and the weak nuclear force. Gravitational force is the force resulting from mass. Electromagnetic force is the force due to the interaction between electric fields and/or magnetic fields. (The weak nuclear force is not described in this book, as it is not part of the HSC syllabus.)

Definition

The strong nuclear force is the force that is responsible for holding the nucleons together inside the nucleus. Two important properties of the strong nuclear force: 1. The strong nuclear force acts equally between proton–proton, proton–neutron and neutron–neutron. This means that the strong nuclear force is responsible for holding all nucleons together, although it is obvious that such a force is more important for the protons, since neutrons are neutral and do not repel. The strong nuclear force is essential for counteracting the electrostatic repulsive force between the protons. The fact that the strong nuclear force has an equal action between all nucleons also explains the role of neutrons in stabilising the nucleus. Inside a nucleus, neutrons are placed between protons, holding the protons together via the strong nuclear force while separating the protons from each other so that the size of the repulsive force is reduced. Larger elements need to have relatively more neutrons compared to protons to allow the neutrons to be interspersed among the protons so that the repulsive forces between the protons are reduced as the protons are separated.


2. The strong nuclear force is a very powerful force; however, it acts only over a very short distance. The best way to illustrate the size and profile of this force is with a graph, as shown in Figure 16.4. As you can see, the strong nuclear force is at its greatest strength, about 10⁴ N, when acting over a distance of approximately 1.3 × 10⁻¹⁵ m, which is the average separation between nucleons. This force is more than adequate to hold the nucleons together. However, the size of the force declines very quickly and is almost zero at a distance of 2 × 10⁻¹⁵ m. Also note that at distances less than 5 × 10⁻¹⁶ m the force becomes repulsive. This is also significant, as the repulsion prevents the nucleons from getting too close or fusing together. In other words, the force profile of the strong nuclear force is such that it holds the nucleons at an approximately constant separation of 1.3 × 10⁻¹⁵ m.

Figure 16.4  The profile of the strong nuclear force (force in N versus separation in units of 10⁻¹⁵ m): the force is very strong but acts over a very short distance; it becomes repulsive when acting over extremely short distances

16.6  Nuclear reactions

It is important to point out that the structure of the nucleus is far more complex than that discussed so far. The remainder of this chapter deals with other reactions the nucleus can undergo, in addition to natural radioactive decay.

Fermi's artificial transmutation experiments
Soon after the discovery of the neutron by Chadwick, the Italian scientist Enrico Fermi (1901–1954) realised that neutrons, because of their lack of charge, had great potential to reach the nuclei of atoms and cause reactions. Between 1934 and 1938, Fermi bombarded many elements with neutrons. In the majority of cases, heavier isotopes of the same element were formed. For example:

¹²₆C + ¹₀n → ¹³₆C

Enrico Fermi

In some cases, these heavier isotopes were actually unstable and would undergo radioactive (beta) decay to form new elements. Note: The reason for beta decay is that the element now has an excessive number of neutrons.

For example:

¹⁰³₄₅Rh + ¹₀n → ¹⁰⁴₄₅Rh
then ¹⁰⁴₄₅Rh → ¹⁰⁴₄₆Pd + ⁰₋₁e + ν̄

A change in the identity of the element has been achieved, and it has been done by artificial means: this is an example of artificial transmutation. Fermi went on to postulate that if the same were done with uranium—the largest naturally occurring element—a new element


with an atomic number one larger than that of uranium could be produced. This is demonstrated in the following equations:

²³⁸₉₂U + ¹₀n → ²³⁹₉₂U
then ²³⁹₉₂U → ²³⁹₉₃Np + ⁰₋₁e + ν̄

Through these reactions, Fermi produced element 93, named neptunium, which was the first artificially produced element. Neptunium was also the first human-made element beyond uranium and so it was referred to as the first transuranic element. Not surprisingly, transuranic reactions are another example of artificial transmutations. Definition

Transuranic elements are elements that have atomic numbers larger than that of uranium. Since uranium is the largest naturally occurring element, all transuranic elements are human-made.

Neptunium-239 then undergoes a further beta decay to form plutonium-239, creating the second transuranic element:

²³⁹₉₃Np → ²³⁹₉₄Pu + ⁰₋₁e + ν̄

The need for slow neutrons
During the neutron bombardment experiments, Fermi noticed that the bombardment was much more efficient when the neutrons used were slowed down. This is because fast neutrons tend to pass through nuclei without being captured, and thus do not cause nuclear reactions. To slow down neutrons, a moderator needs to be used. Moderators are materials that slow down neutrons as the neutrons pass through them; they facilitate neutron capture by nuclei, and hence transmutation. Some examples of materials used as moderators are:
■■ D₂O (heavy water, where D is deuterium)
■■ H₂O
■■ graphite (carbon)
■■ beryllium or beryllium oxide

16.7  The accidental discovery of nuclear fission reactions
■ Describe Fermi's initial experimental observation of nuclear fission

There was no doubt that transuranic elements were produced and detected during Fermi's neutron bombardment experiments. However, Fermi and his team were puzzled to find that, apart from the anticipated transuranic elements, many other isotopes were also produced—all beta emitters with different measurable half-lives. Clearly, another process must have been taking place at the same time, but they could not understand its nature. It was not until 1939 that the Austrian physicists Lise Meitner (1878–1968) and Otto Frisch (1904–1979) explained this observation by stating that the neutron bombardment led to the breaking down of uranium-235 into two smaller nuclei of roughly equal size. The term nuclear fission was coined to describe this process.


Definition

Nuclear fission refers to the process in which a large nucleus, such as uranium, is hit by a slow neutron and breaks down to give two smaller nuclei of roughly the same size (daughter isotopes), at the same time emitting a few more neutrons and releasing energy.

For example:

²³⁵₉₂U + ¹₀n → ¹⁴¹₅₆Ba + ⁹²₃₆Kr + 3 ¹₀n + energy

Some examples of elements that can undergo fission are uranium-235 and plutonium-239. Note that the two daughter isotopes of a fission reaction are randomly generated as a probability function. For uranium, this can be any pair from hydrogen (Z = 1) and protactinium (Z = 91) to two palladium nuclei (Z = 46); both pairs give a total of 92, the atomic number of uranium. Nevertheless, the most likely pairs combine one relatively lighter element with one relatively heavier element. This pattern is represented by the graph in Figure 16.5.

Figure 16.5  Mass distribution pattern of products of nuclear fission (percentage of fission fragments versus nucleon number A)

This explains Fermi's initial observations. Uranium has two naturally occurring isotopes: uranium-238 and uranium-235. Uranium-238 was the isotope capable of undergoing neutron capture and the transuranic reaction; this was what Fermi and his team expected. Uranium-235, on the other hand, underwent fission when hit by a neutron, and the puzzling beta emitters with variable half-lives were in fact the various pairs of daughter isotopes produced by the fission reactions.

16.8  Fermi's discovery of chain reactions

It is apparent from the fission equation ²³⁵₉₂U + ¹₀n → ¹⁴¹₅₆Ba + ⁹²₃₆Kr + 3 ¹₀n + energy that fission produces more neutrons than it consumes. If these extra neutrons are then allowed to cause more fission reactions, even more neutrons will be produced, leading to even more fission reactions: a chain reaction is set up. Using the above equation as an example, one neutron gives rise to three neutrons, which then cause three uranium atoms to undergo fission and produce nine neutrons; the number of neutrons, and the number of uranium atoms undergoing fission, increases by a factor of 3ⁿ. This is shown in Figure 16.6 (a). If this is allowed to proceed uncontrolled, the number of uranium atoms undergoing fission very quickly becomes very high. At the same time, a huge amount of energy is released and an explosion occurs. This forms the basis of nuclear bombs. In cases where a steady rate of energy production is required, such as in nuclear power plants, this chain reaction needs to be controlled. To achieve this, a neutron-absorbing substance is used to take away the extra neutrons produced, so that the number of neutrons initiating each subsequent fission reaction remains constant. This results in a steady rate of fission, and therefore of energy liberation. This is shown in Figure 16.6 (b).


Figure 16.6  (a) Uncontrolled fission chain reaction: all neutrons produced by the fission reaction are allowed to strike more uranium atoms to cause more fission

Figure 16.6  (b) Controlled fission chain reaction: some neutrons produced by the fission reaction are absorbed so that the number of neutrons involved in the fission reaction is constant

16.9  Comparing controlled and uncontrolled fission reactions
■ Compare requirements for controlled and uncontrolled nuclear chain reactions

Definition

Uncontrolled fission reactions are fission reactions where all the neutrons produced during the reaction are allowed to strike more fissionable material to cause further nuclear fission so that the process continues exponentially. These reactions are adopted in nuclear weapons. Definition

Controlled fission reactions are fission reactions that allow the extra neutrons produced during the reaction to be absorbed, so that approximately the same number of neutrons will be present for each of the subsequent fission reactions; in these cases, the rate of fission is steady. These reactions are adopted in nuclear power plants to produce heat energy at a steady rate.

Nuclear power station at Three Mile Island, Pennsylvania, USA

A mockup of the Fat Man nuclear device


The concept of controlled and uncontrolled nuclear fission is demonstrated in Figure 16.6. Although the two types of fission reactions are different, they share common features. These are discussed below.

The similarities
Both controlled and uncontrolled fission reactions require:
■■ fuel
■■ moderator

Fuel

Both controlled and uncontrolled fission reactions require either fissionable uranium-235 or plutonium-239 as fuel. Natural uranium ore contains less than 1% fissionable uranium-235; the remaining 99% is uranium-238. (Recall that uranium-238 is not fissionable; rather, it undergoes transuranic reactions.) For controlled chain reactions, processes are required to concentrate the fissionable uranium-235 to at least 5%, whereas for uncontrolled nuclear fission reactions, such as those in nuclear bombs, the required concentration may be as high as 95%. The processes for concentrating uranium-235 are discussed in the next chapter. Plutonium-239, on the other hand, does not occur naturally, so extra effort is required to 'breed' it through transuranic reactions. In addition to the required concentration, the mass of the fissionable fuel must also exceed the critical mass.

Definition

Critical mass is the smallest amount of fissionable material that will sustain a chain reaction.

In simple terms, the amount of fissionable material needs to be sufficiently large for a chain reaction to be sustained. This is because:
■■ neutrons can be captured by the fissionable atoms (fuel) without causing fission
■■ neutrons can be captured by non-fissionable elements mixed within the fuel
■■ neutrons can escape without being captured at all.
Thus there needs to be a sufficiently large amount of fissionable material so that the probability of neutrons striking fissionable atoms is high enough to sustain a continuous fission process. In most controlled fission reactions, enriched uranium (a concentration of about 5%) is made into separate rods that together exceed the critical mass. These are called fuel rods. Each fuel rod is enclosed in a gas-tight aluminium case. The fuel rods are inserted into the reactor core, where nuclear fission takes place to produce energy. Once the uranium is used up, the rods are removed and new ones are inserted. In an uncontrolled nuclear fission reaction, fragments of fissionable material are brought together very quickly so that the total mass exceeds the critical mass before the bomb explodes. This is discussed in the next chapter.

Moderator

A moderator is used to slow down neutrons, as discussed earlier. Slow neutrons are more efficient at causing nuclear reactions than fast neutrons.


The differences
Controlled nuclear fission reactions have extra requirements:
■■ control rods
■■ coolant
■■ radiation shield

Control rods
In a controlled fission reaction, extra neutrons produced by the fission reaction are absorbed by the control rods so that the number of neutrons present for each subsequent fission reaction remains roughly constant. Control rods can be made from materials such as cadmium or boron. They are designed to move freely into and out of the reactor core. When they are moved further into the core, they absorb more neutrons and therefore slow down the reaction; when they are withdrawn from the core, they absorb fewer neutrons, so the reaction speeds up. The movement of the rods is controlled by a computer system so that the rate of the fission reaction is always tightly controlled. An illustration of how the control rods can be set up along with the fuel rods is shown in Figure 16.7.

Figure 16.7  Control rods, freely moveable, inserted between the fuel rods within a moderator

Coolant

In a controlled fission reaction, the coolant is pumped to circulate through the reactor core. The role of the coolant is, first, to carry away the heat from the reactor core so that the core, where the nuclear fission takes place, does not melt down as a result of the extreme heat produced. Second, in carrying the heat away, the coolant moves the energy from inside the reactor core to somewhere it can be used to do useful work—for example, to heat water to produce steam, which in turn powers a turbine attached to a generator. Examples of coolants used are molten sodium and molten sodium chloride, both of which can carry away sufficient amounts of energy quickly enough to be effective.

The radiation shield of a nuclear fission reactor consists of an inner layer, which is made of lead, and an outer layer, which is a thick layer of high-quality concrete. The inner lead layer is designed to reflect most of the neutrons produced by the fission reaction back into the reaction core. This will not only result in fewer neutrons reaching the outer environment to do damage but also ensure that there are more neutrons in the nuclear reactor core to facilitate the nuclear reaction. The outer concrete layer acts as a biological shield to further block the radiation coming out of the reactor core. The principal role of the radiation shield is therefore to ensure that the radiation is contained within the reactor core so that the surrounding environment is not affected. Although the most important application of a controlled nuclear fission reaction is to produce energy, there are other important uses for it. The fission products themselves may have extensive applications in various industries, medicine and agricultural settings. As well, the neutrons produced by the fission reaction can be used to carry out transmutations via neutron capturing. In Australia, these take place at the Australian Nuclear Science and Technology Organisation (ANSTO) facility at Lucas Heights, near Sydney.


PFA H4: 'Assesses the impacts of applications of physics on society and the environment.'
■ Compare requirements for controlled and uncontrolled nuclear chain reactions

What are the applications of physics in this case?
Nuclear fission, as first observed by Fermi, has been used for nuclear weapons, large-scale nuclear-powered electricity generation, and nuclear-powered naval vessels and satellites for more than 60 years. Many radioactive isotopes used in medicine and industry require a nuclear reactor for their production.

What impacts have there been on society?
There have been significant and major impacts on society due to controlled and uncontrolled nuclear chain reactions. The first impact on society was brought about by the dropping of the two atomic bombs (both fission bombs, but of slightly different designs) on the Japanese cities of Hiroshima and Nagasaki. The subsequent deaths of tens of thousands of civilians, the destruction of large parts of these cities and the release of radioactivity shocked the world. It was the reason Japan surrendered, ending the war in the Pacific, even though the US had no more atomic bombs ready at the time. Had the Japanese decided to fight on, thousands more Allied and Japanese soldiers would have been killed in conventional warfare.

The arms race and the Cold War, during which the US and the USSR produced sufficient nuclear weapons to destroy the major cities of the world many times over, caused the release of radioactivity into the atmosphere and oceans from the many hundreds of test bombs detonated by both sides.

The production of electricity harnessing the heat energy from controlled nuclear chain reactions has been used widely in many countries, particularly in Europe, the USSR (now Russia and smaller states), Scandinavia, Canada, Japan and the US. With a few exceptions, this source of electricity has been reliable and affordable; however, the Three Mile Island (US) and the far worse Chernobyl (Ukraine) accidents had serious adverse effects. The Chernobyl accident released large amounts of highly radioactive material, resulting in deaths, birth defects and cancers for many years after the event.

Small power generators using nuclear fission to develop heat, and thus electricity, have been used to power satellites and space probes successfully for many years.

The use of radioactive isotopes in medicine and industry is possible partly because they can be produced by neutron bombardment, with the required neutrons sourced from nuclear reactors. The ANSTO facility at Lucas Heights in Sydney's southern suburbs is Australia's only operating nuclear reactor, and is used for the production of radioisotopes and for research.

What impacts have there been on the environment?
The immediate environments surrounding the Hiroshima and Nagasaki detonation sites have remained contaminated by radioactive fallout (although the cities themselves have been rebuilt). Radioactive contamination surrounds the test sites near Woomera in South Australia and the Monte Bello Islands off Western Australia, and numerous similar sites around the world remain uninhabitable.

Radioactive waste from controlled nuclear chain reactions within nuclear power plants is a major concern to the countries faced with its disposal: the waste remains dangerous for thousands of years. Australia has been at the forefront of developing


safe methods for its storage. In the past, some countries simply dumped the waste at sea in metal drums, which have since begun to rust and leak.

One positive aspect of nuclear power plants that has gained increased attention over recent years is that they do not produce carbon dioxide gas in generating heat. Coal-, gas- and oil-fired power stations are the major source of this greenhouse gas worldwide. However, mining and transporting the uranium fuel used in nuclear power stations does produce carbon dioxide.

The assessment of these impacts
There is no doubt that uncontrolled and controlled nuclear chain reactions have made significant and major impacts on society and the environment over the past 60 or so years. These impacts have been both positive and negative. The debate about the merits or otherwise of using controlled nuclear chain reactions will continue well into the future.

Useful websites
The Australian Nuclear Science and Technology Organisation website:
http://www.ansto.gov.au


Information and background to the social and environmental consequences of the Chernobyl disaster: http://www.chernobyl.info/index.php

Fermi’s first controlled chain reaction n

Describe Fermi’s demonstration of a controlled nuclear chain reaction in 1942

In 1942 the first sustained fission chain reaction was observed. A team of scientists led by Fermi built a nuclear pile in a converted squash court at the University of Chicago. The pile consisted of graphite blocks surrounding a core of uranium. The quantity of uranium required, approximately 60 tonnes, represented almost all the US reserves of the material at the time. To prevent an uncontrolled reaction that would cause an explosion, cadmium rods, which absorb neutrons, were inserted between the blocks of the uranium fuel. Radioactivity monitors were used to detect the sustained reaction, which continued once the cadmium control rods had been slowly withdrawn from the uranium core.

Mass defect and binding energy n

Explain the concept of a mass defect using Einstein’s equivalence between mass and energy

16.10

Fermi’s first nuclear fission pile in 1942

16.11

The total mass of the neutrons, protons and electrons that make up an atom is greater than the mass of the same atom as a whole. This discrepancy in mass is summarised in Figure 16.8. Two new terms require attention: mass defect and binding energy.


Definition

Mass defect is the loss in mass when the mass of an atom as a whole is compared with the total mass of its individual components—protons, neutrons and electrons.

Figure 16.8  Relationship between mass defect and binding energy: assembling the atom from its individual subatomic particles loses mass (the mass defect) and therefore emits energy; separating the atom back into its particles requires energy (the binding energy) and therefore gains mass

It follows that if mass is lost, energy must be liberated, according to Einstein's equation E = mc². Conversely, the same amount of energy must be put back in to separate the atom into its individual components, and in doing so the mass is restored. This energy is known as binding energy.

Binding energy is the energy needed to separate an atom into its separate parts.

The concepts of mass defect and binding energy can be further illustrated through the following example.

Example 1
Calculate the mass defect for a helium (He) atom and its corresponding binding energy, given that the rest mass of the helium atom is 4.002602 u. Express the energy in mega electronvolts (MeV).

Solution
Mass defect = total mass of the separate parts that make up the helium atom − mass of the helium atom
            = (mass of a neutron × 2 + mass of a proton × 2 + mass of an electron × 2) − mass of the helium atom

Because we are working with the atomic mass unit (u), we need to convert the masses of the neutrons, protons and electrons from kg to atomic mass units (using the values from Table 16.1 or the HSC data sheet). This is done by dividing each mass in kg by 1.661 × 10⁻²⁷, since 1 u = 1.661 × 10⁻²⁷ kg. Hence:

Mass defect = [(1.675 × 10⁻²⁷/1.661 × 10⁻²⁷) × 2 + (1.673 × 10⁻²⁷/1.661 × 10⁻²⁷) × 2 + (9.109 × 10⁻³¹/1.661 × 10⁻²⁷) × 2] − 4.002602
            = 0.0298013 u

The energy equivalent of this mass—the energy that must be put back in to separate the atom into its parts—is given by E = mc². To use this equation, the mass must be converted back to kg by multiplying by 1.661 × 10⁻²⁷. Hence:

E(binding) = 0.0298013 × 1.661 × 10⁻²⁷ × (3 × 10⁸)² J

Converting from joules to electronvolts (eV) is done by dividing the value in joules by the charge of an electron, 1.602 × 10⁻¹⁹; dividing by a further 10⁶ converts from eV to MeV. Hence:

E(binding) = 0.0298013 × [1.661 × 10⁻²⁷ × (3 × 10⁸)²/(1.602 × 10⁻¹⁹ × 10⁶)]
           = 27.8 MeV

In fact, 1.661 × 10⁻²⁷ × (3 × 10⁸)²/(1.602 × 10⁻¹⁹ × 10⁶) gives rise to a constant, 933.1 MeV per u. (Note that the HSC data sheet quotes 931.5, which is calculated from non-rounded data.)
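The whole calculation can be packaged into a small helper. The sketch below uses the textbook's rounded constants, so the 933.1 MeV/u conversion factor appears rather than the data-sheet value of 931.5 MeV/u; it reproduces the helium result above and the magnesium-24 result worked later in this section.

```python
# Mass defect and binding energy from an atom's measured atomic mass.

M_NEUTRON = 1.675e-27   # kg
M_PROTON = 1.673e-27    # kg
M_ELECTRON = 9.109e-31  # kg
U = 1.661e-27           # 1 atomic mass unit in kg
MEV_PER_U = U * (3e8) ** 2 / (1.602e-19 * 1e6)  # approx 933.1 MeV per u

def binding_energy(Z, N, atomic_mass_u):
    """Return (mass defect in u, binding energy in MeV, MeV per nucleon)."""
    parts = (N * M_NEUTRON + Z * M_PROTON + Z * M_ELECTRON) / U
    defect = parts - atomic_mass_u
    E = defect * MEV_PER_U
    return defect, E, E / (Z + N)

print(binding_energy(2, 2, 4.002602))    # He-4:  approx 0.0298 u, 27.8 MeV, 6.95 MeV/nucleon
print(binding_energy(12, 12, 23.98504))  # Mg-24: approx 0.209 u, 195 MeV, 8.14 MeV/nucleon
```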

Binding energy per nucleon
It might seem logical to say that the larger the binding energy, the more energy is required to break the atom into its separate parts, implying that the atom is more stable. The problem, however, is that as the atom gets bigger, its binding energy naturally increases because there are more separate parts put together. A large binding energy then reflects the size of the atom rather than how strongly it is held together. To make a fair comparison, the effect of the atom's size must be eliminated. To do this, the binding energy is divided by the number of nucleons in the atom to give the binding energy per nucleon. Since this is the energy per nucleon, the effect of the atom's size is eliminated, giving an accurate reflection of the energy required to split the atom, and hence of its stability. A higher binding energy per nucleon means an atom or nucleus is more stable: more energy is required to break it apart.

Example 1

Calculate the binding energy per nucleon for the helium atom.

Solution
Helium has two neutrons and two protons, therefore four nucleons altogether. Using the value obtained previously for the binding energy of He:

E(binding per nucleon) = 27.8/4 = 6.95 MeV/nucleon

Example 2

Calculate the binding energy per nucleon for magnesium-24, which has a mass of 23.98504 u.

Solution
Mass defect = total mass of the separate parts that make up the magnesium atom − mass of the magnesium atom
            = (mass of a neutron × 12 + mass of a proton × 12 + mass of an electron × 12) − mass of the magnesium atom
            = [(1.675 × 10⁻²⁷/1.661 × 10⁻²⁷) × 12 + (1.673 × 10⁻²⁷/1.661 × 10⁻²⁷) × 12 + (9.109 × 10⁻³¹/1.661 × 10⁻²⁷) × 12] − 23.98504
            = 0.20938 u

E(binding) = 0.20938 × [1.661 × 10⁻²⁷ × (3 × 10⁸)²/(1.602 × 10⁻¹⁹ × 10⁶)]
           = 195.38 MeV

Since magnesium-24 has 24 nucleons:

E(binding per nucleon) = 195.38/24 = 8.14 MeV/nucleon

If the binding energy per nucleon is plotted against the mass (nucleon) number, a graph like that shown in Figure 16.9 is obtained.

Figure 16.9  The binding energy per nucleon (MeV) versus the mass number A: the curve rises steeply through the light elements (²H, ⁴He, ¹²C, ¹⁶O, ²⁰Ne, Ca), peaks at iron (Fe), then falls gradually through the heavy elements (Kr, Hg); fusion moves light elements up the curve towards iron, while fission moves heavy elements back towards it

Using this graph, some of the phenomena occurring at the nuclear level can be explained. First, the graph can be used to explain the emission of alpha particles (helium nuclei; see earlier in this chapter). When a nucleus is so large that it is unstable, it emits small 'packs' of particles as part of its decay process. These 'packs' consist of two protons and two neutrons, and are therefore helium nuclei. The reason, as Figure 16.9 shows, is that helium has a very high binding energy per nucleon compared with other small elements, so it is easiest and most energy-efficient for the unstable nucleus to release protons and neutrons in the form of a helium nucleus. Although many other elements, for example carbon, have an even larger binding energy per nucleon, they are generally too large to be released from an unstable nucleus.

From the graph it can also be seen that iron has the highest binding energy per nucleon, indicating that it is the most stable nucleus in the entire periodic table. It follows that when elements to the left of iron on the graph are combined at a nuclear level to form larger elements, they become more stable, and in turn energy is released. This is the principle of fusion reactions, in which smaller elements are fused together to make heavier elements and release energy; it is the method by which stars generate their energy. The fusion process has to stop at iron, because beyond iron the elements gradually have lower binding energies per nucleon, and thus higher energy content: synthesising elements heavier than iron requires an energy input. This is also why even a large star ends its life once fusion has reached iron, and why elements heavier than iron are only formed during the star's supernova, which provides the energy for their synthesis. Conversely, for elements to the right of iron on the graph, energy is released only when they are split to form smaller elements, that is, moved back towards iron. This is the source of energy in fission reactions.


16.12  Energy liberation in nuclear fission
■ Solve problems and analyse information to calculate the mass defect and energy released in natural transmutation and fission reactions

Following from the previous section, when a large element is split to form smaller daughter elements, it moves to the left along the graph in Figure 16.9, towards iron. Consequently, the daughter elements have a lower energy content, and energy is liberated. The amount of energy liberated corresponds to the change in binding energy per nucleon between the elements, which is directly related to the change in mass: the reactants have a larger total mass than the products. This concept is used to calculate the energy liberated during a nuclear fission reaction, as shown in the example below.

Example

During a nuclear fission reaction, a slow neutron collides with and splits a U-235 nucleus. The product nuclei are Ba-141 and Kr-92, plus a number of neutrons. If the kinetic energy of the colliding neutron is negligible, calculate the energy in MeV released by this nuclear reaction, given that:
mass of U-235 = 235.0439 u
mass of Ba-141 = 140.9139 u
mass of Kr-92 = 91.8973 u

Solution

The first step is to write a balanced nuclear equation for this reaction. Knowing that the reactants are ²³⁵₉₂U and ¹₀n (a neutron), that the products are given as ¹⁴¹₅₆Ba and ⁹²₃₆Kr, and using the fact that the mass number and atomic number must be conserved (equal on both sides), three neutrons must be released as by-products. Hence the equation is:

²³⁵₉₂U + ¹₀n → ¹⁴¹₅₆Ba + ⁹²₃₆Kr + 3 ¹₀n + energy

The energy liberated is directly related to the loss in mass as the reactants form the products. Hence:

Mass loss in u = mass of reactants in u − mass of products in u
               = mass of ²³⁵₉₂U and one neutron − mass of ¹⁴¹₅₆Ba, ⁹²₃₆Kr and three neutrons
               = [235.0439 + (1.675 × 10⁻²⁷/1.661 × 10⁻²⁷)] − [140.9139 + 91.8973 + 3 × (1.675 × 10⁻²⁷/1.661 × 10⁻²⁷)]
               = 0.21584 u

Therefore, the energy liberated is:

E = 0.21584 × [1.661 × 10⁻²⁷ × (3 × 10⁸)²/(1.602 × 10⁻¹⁹ × 10⁶)]
  = 201.41 MeV


Note: The concept used to calculate the binding energy from the mass defect is used here in the same way to calculate the amount of energy released from the mass loss. This is because, under all circumstances, energy is related to mass via Einstein's equation E = mc².
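The fission example above can be checked with a few lines of arithmetic. The sketch below reuses the textbook's rounded constants (so the answer matches 201.4 MeV rather than a value computed from data-sheet constants).

```python
# Numerical check of the U-235 fission example.

U = 1.661e-27                                   # 1 u in kg
MEV_PER_U = U * (3e8) ** 2 / (1.602e-19 * 1e6)  # approx 933.1 MeV per u
m_neutron = 1.675e-27 / U                       # neutron mass in u

m_U235, m_Ba141, m_Kr92 = 235.0439, 140.9139, 91.8973  # atomic masses in u

mass_loss = (m_U235 + m_neutron) - (m_Ba141 + m_Kr92 + 3 * m_neutron)
print(f"mass loss: {mass_loss:.5f} u")                      # approx 0.21584 u
print(f"energy released: {mass_loss * MEV_PER_U:.1f} MeV")  # approx 201.4 MeV
```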

Observe radiation emitted from a nucleus using a Wilson cloud chamber

first-hand and secondary source investigation
physics skills: H12.1 a, b, c, d; H12.2 b
■ Perform a first-hand investigation or gather secondary information to observe radiation emitted from a nucleus using a Wilson cloud chamber or similar detection device

The functional principle of the Wilson cloud chamber has already been discussed earlier in this chapter. You should be familiar with the set-up of a cloud chamber in a school laboratory. The trace that alpha, beta or gamma radiation leaves as it passes through the cloud chamber can be shown.

Condensation trails in a Wilson cloud chamber caused by emitted radiation

chapter revision questions
1. Define the term 'nucleons'.
2. (a) Describe Chadwick's experiment that allowed him to discover the existence of the neutron.
   (b) What was the significance of using the paraffin wax in his experiment?
   (c) Two laws of conservation were used in his experiment. What were they, and why were they important for the discovery of the neutron?
3. Describe how a cloud chamber can be used to detect the presence of alpha, beta or gamma radiation.
4. Define 'radioactivity' and 'transmutation'.
5. (a) Determine whether the following elements are alpha emitters or beta emitters:
   (i) potassium-40
   (ii) thorium-232
   (iii) radium-226
   (iv) iodine-131
   (b) Write a nuclear equation to describe the decay of each of these elements.
6. Iron-59 undergoes gamma decay; write an equation to describe this reaction.
7. Under what circumstances were neutrinos discovered? Why was the law of conservation of energy important for the discovery of neutrinos?
8. (a) When aluminium is bombarded with alpha particles, a highly unstable isotope of phosphorus is formed. Write a nuclear equation to describe this reaction.
   (b) This radioisotope of phosphorus then undergoes a decay to form phosphorus-30 and another product. Identify this product and write a nuclear equation to describe this reaction.
9. Nitrogen-14, when bombarded with an alpha particle, gives rise to oxygen-17 and one other element. Write a nuclear equation to describe this reaction.
10. A certain nucleus absorbs a neutron. It then undergoes beta decay and eventually breaks up to form two alpha particles.
   (a) Identify the original nucleus and the two intermediate nuclei.
   (b) Will there be any neutrino emitted? Explain.
11. (a) Define 'strong nuclear force'.
   (b) Describe two properties of the strong nuclear force.
   (c) Why is the strong nuclear force important in stabilising the nucleus?
12. What are control rods, and why are they significant for nuclear fission reactions?
13. What are the key differences between a controlled nuclear fission reaction and an uncontrolled fission reaction?
14. The daughter isotopes for the fission of uranium-235 are caesium-141 and rubidium-93.
   (a) How many neutrons are released during this fission reaction?
   (b) Write a nuclear equation to describe this reaction.
15. When uranium-235 is bombarded by a slow neutron, xenon-139 and strontium-95 may be produced, along with a few neutrons. Write a nuclear equation to describe this reaction.
16. Fermi was known as the father of nuclear physics. With the use of appropriate equations, describe his contributions to nuclear physics, making particular reference to the production of transuranic elements and nuclear fission reactions.
17. Calculate the mass defect, binding energy and binding energy per nucleon for the following elements:
   (a) lithium-7, mass = 7.016 003 u
   (b) zinc-64, mass = 63.929 15 u
   (c) radon-219, mass = 219.009 48 u
18. The energy released during normal radioactive decay can be calculated in the same way as the energy released in a nuclear fission reaction: the mass of the products is less than the mass of the reactants, with the change in mass corresponding to the energy released. Calculate the energy released by the alpha decay of uranium-235, given that the mass of uranium-235 is 235.043 92 u and the mass of thorium-231 is 231.036 30 u. (Hint: an alpha particle is essentially a helium nucleus.)
19. When carbon-12 is bombarded by a proton, nitrogen-13 is produced. Write a nuclear equation to describe this reaction. Given that the mass of carbon-12 is 12.000 000 u and the mass of nitrogen-13 is 13.005 738 u, calculate the energy released by this reaction in both MeV and J.


CHAPTER 17
Applications of nuclear physics and the standard model of matter

An understanding of the nucleus has led to large science projects and many applications

Introduction
This chapter focuses on the practical applications of nuclear reactions and of alpha, beta and gamma radioisotopes. Later, the development and basic features of the standard model of matter, including quarks, complete the module 'From Quanta to Quarks'.

17.1  An application of nuclear fission reactions—a typical fission reactor
■ Explain the basic principles of a fission reactor

Chapter 16 described the principles of, and the conditions needed for, a controlled nuclear fission reaction. Such a reaction can be used primarily to produce energy, or to act as a rich source of neutrons for initiating nuclear transmutation reactions. The diagram in Figure 17.1 shows how a nuclear power plant is designed to use the energy produced by the controlled fission reaction to generate electricity. As shown in Figure 17.1, a typical nuclear power plant consists of three parts:
1. The reactor core, where the fission reaction takes place. It contains fuel rods and control rods embedded in the moderator. The control rods can be moved up or down freely in order to control the rate of the fission reaction, as described in Chapter 16.
2. The heat exchanger. Here the heated coolant circulating out of the reactor core (also known as the primary coolant) heats water to make steam, which in turn is used to operate the generator to produce electricity. The primary coolant then circulates back into the reactor core to carry away more energy, and the cycle repeats. The primary coolant forms a closed loop and thus does not mix with the steam. This design minimises the transfer of nuclear waste from the reactor to the generator, reducing the chance of nuclear waste leaking out into the environment.
Note: Recall that examples of materials that may be used as the primary coolant include molten sodium or molten sodium chloride.


Figure 17.1  A nuclear power station: the containment structure houses the reactor core (fuel rods and control rods) and the heat exchanger; the primary coolant is pumped between them, and a steam line carries steam to the steam turbine and generator in the power plant, with a condenser and cooling tower completing the cycle and power lines carrying the electricity away

3. The generator and the secondary coolant. The steam is used to turn the turbine of the generator. After that, the steam is cooled and condensed to warm water by a secondary coolant before being recirculated back into the heat exchanger so that more steam can be produced. The secondary coolant is usually just cool water taken directly from a natural source, such as a river; after use it is discharged back into the environment. To prevent thermal pollution from discharging this slightly warmed water into the environment, a cooling pond may be used.
Note: Thermal pollution is caused by discharging warm or hot water directly into a local waterway. The resulting rise in water temperature reduces the level of dissolved oxygen in the water and impairs the reproductive cycles of aquatic species; the reduced dissolved oxygen may lead to the death of many aquatic species.

Some countries have employed fission technology (see Fig. 17.1) as their main source of electricity supply. However, currently in Australia, we do not use fission technology for power; rather, we use it as a source of neutrons in the nuclear reactor at Lucas Heights, Sydney. In such a reactor, the design is simpler as no heat exchanger or generator is required. Coolant is still required to carry away the excess heat from the reactor core in order to prevent the meltdown of the core. Transmutation reactions using neutrons take place inside the core, as this is where neutrons are produced.


17.2  Radioisotopes and their applications
■ Describe some medical and industrial applications of radioisotopes

Uses of radioisotopes
secondary source investigation
PFAs: H4, H5
physics skills: H12.3 a, b, c, d; H12.4 f; H14.1 a, c, e, f, g, h
■ Identify data sources and gather, process and analyse information to describe the use of:
  – a named isotope in medicine
  – a named isotope in agriculture
  – a named isotope in engineering

Chapter 16 discussed the phenomena of alpha, beta and gamma decay—the way in which each decay occurs and the products of the decay—using some simple examples. The physical properties of alpha, beta and gamma radiation are summarised in Table 17.1.

Table 17.1  Properties of alpha, beta and gamma radiation

| Name  | Identity                     | Charge | Mass (u)                | Energy | Ionisation power | Penetration power |
|-------|------------------------------|--------|-------------------------|--------|------------------|-------------------|
| Alpha | Helium nucleus (⁴₂He)        | +2     | 4.03                    | Low    | High             | Low: travels ~7 cm in air; blocked by a layer of skin or a thin piece of paper |
| Beta  | Fast-moving electron (⁰₋₁e)  | −1     | 5.48 × 10⁻⁴ (≈ 1/1825)  | Medium | Medium           | Medium: travels ~1 m in air; blocked by a thin metal sheet |
| Gamma | Highest-frequency EMR (γ)    | 0      | 0                       | High   | Low              | High: penetrates thin metal sheets; blocked by thick lead sheets or a concrete wall |

Isotopes of elements that undergo alpha, beta or gamma decay are known as radioisotopes. Radioisotopes have extensive applications in many areas. The syllabus requires you to identify the uses of radioisotopes in medicine, industry, agriculture and engineering. Many of the radioisotopes used commercially are required in large quantities and are therefore usually manufactured artificially in a nuclear reactor—more than 500 radioisotopes are produced at Lucas Heights, Sydney—although a few exceptional ones are made in a particle accelerator. (Particle accelerators are discussed later in this chapter.) Some examples of the uses of radioisotopes in various fields are described here. You are encouraged to conduct your own research, using the Internet and library resources, either to further extend these examples or to initiate your own. Key words are: 'uses of radioisotopes', 'radioisotopes', 'nuclear medicine', 'radioisotopes in industries', and so on.


Medical use
Technetium-99m
Technetium-99m is a pure gamma emitter and has a very short half-life of six hours. The fission of uranium produces molybdenum-99 (⁹⁹₄₂Mo), which is then extracted and packed into small glass tubes. The ⁹⁹₄₂Mo undergoes continuous beta decay to form technetium-99m:

⁹⁹₄₂Mo → ⁹⁹ᵐ₄₃Tc + ⁰₋₁e + ν̄

When needed, technetium-99m is then extracted at the site of use by passing saline through the glass tube. The reason for this complicated production method is that molybdenum has a much longer half-life (67 hours) compared to technetium, thus allowing adequate time for the transportation from Lucas Heights to various hospitals around the country. When technetium-99m is extracted at the hospitals, it needs to be used almost immediately because of its short half-life.

A nuclear scan image using Tc-99m

Note: Half-life is defined as the time needed for half the amount of a given radioisotope to decay, or the time for the intensity of its radiation to decrease by a half.

Use of the radioisotope
Technetium-99m is one of the most commonly used radioisotopes in medicine. It is used as a diagnostic tracer to detect abnormal blood circulation, abnormal lung function, bone pathologies and much more. For example, when technetium-99m is used to examine the circulatory system, it is attached to a biological molecule such as albumin (a blood protein) and injected into the bloodstream. These molecules are then allowed to circulate and distribute evenly in the bloodstream. The gamma radiation from the technetium-99m, and therefore the distribution of the radioisotope, is detected by a gamma camera. An abnormal increase in, or absence of, scintillation (detection) at certain areas could indicate a haemorrhage or a clot respectively.

Properties related to its use
■■ Technetium-99m has a very short half-life of six hours, so it does not last long when injected into the human body, making it relatively safe to use.
■■ Technetium-99m is a gamma emitter: only gamma radiation, not alpha or beta radiation, is penetrative enough to be detected outside the body. Gamma radiation also causes the least ionisation, making it safer inside the body than the same dosage of alpha or beta radiation.
■■ Technetium-99m is also relatively cheap to produce.

Note: Gamma radiation causes the least ionisation compared to alpha and beta radiation, and is therefore the safest to use inside the body. The impression that alpha and beta radiation are safer than gamma radiation comes from their low penetration power: when they are emitted from an external source, they are often blocked by air, clothing and skin and are unable to do harm.
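The half-life relation can be turned into a quick calculation of how much of a dose remains after a given time. The short Python sketch below is an illustration only (the function name is invented); it evaluates N(t) = N₀ × (1/2)^(t/T), using the six-hour half-life of technetium-99m quoted above.

```python
# Fraction of a radioisotope remaining after time t:
# N(t) = N0 * (1/2) ** (t / half_life)
def fraction_remaining(t_hours, half_life_hours):
    return 0.5 ** (t_hours / half_life_hours)

# Technetium-99m has a half-life of six hours (see text):
for t in (6, 12, 24):
    print(f"after {t:2d} h: {fraction_remaining(t, 6.0):.3f} of the dose remains")
# after  6 h: 0.500, after 12 h: 0.250, after 24 h: 0.062
```

This is why technetium-99m must be used almost immediately once it is extracted: after a single day, less than a sixteenth of the activity remains.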

Cobalt-60
Cobalt-60 is a co-emitter of beta and gamma radiation and has a half-life of 5.3 years. Cobalt-60 is produced in a nuclear reactor by neutron bombardment: naturally occurring cobalt-59 atoms are placed inside the reactor for many weeks to capture neutrons.

⁵⁹₂₇Co + ¹₀n → ⁶⁰₂₇Co + energy


Use of the radioisotope
Cobalt-60 is used to kill cancer cells in radiotherapy. The gamma radiation produced by cobalt-60 destroys the cancer cells. Unfortunately, it also destroys the surrounding healthy tissue, which can be an unavoidable side effect of the therapy.

Properties related to its use
■■ Cobalt-60 is a gamma emitter, and only gamma rays are penetrative enough to reach deep cancer tissues. Gamma radiation is also energetic enough to destroy the cancer cells.
■■ It has a moderate half-life, meaning that it lasts long enough for economical use but, at the same time, not so long that the radiation it emits is too weak to kill the cancer cells.

Note: Usually, the longer the half-life, the lower the intensity of the radiation emission.

Industrial use

Strontium-90
Strontium-90 is a beta emitter with a half-life of 28 years. It is produced as one of the daughter isotopes of the fission of uranium.

Use of the radioisotope
Strontium-90 is used in industry as a thickness gauge to monitor and control the thickness of sheets as they are being manufactured; examples include paper and metal sheet production. A schematic diagram of how a radioisotope can be used as a thickness gauge is shown in Figure 17.2. The gauge works by directing the beta radiation emitted by strontium-90 through the sheet as it is being produced (rolled). The sheet absorbs the radiation partially and the remainder penetrates through to reach the detector. The amount of radiation absorbed by the sheet depends on its thickness. The detector measures the strength of the radiation received and feeds this data back to the rollers, instructing them to adjust their pressure in order to keep the thickness of the sheet constant. For instance, a sheet that is too thick will absorb too much of the radiation, resulting in the detector measuring less radiation than normal. This information is fed back to the rollers so that they roll with more pressure to reduce the thickness of the sheet.

Properties related to its use
■■ As a beta emitter, strontium-90 is quite safe to use (compared to gamma emitters, since beta radiation is not as penetrative); only minimal safety precautions are required.
■■ Strontium-90 has a long half-life of 28 years. This means the radioisotope does not have to be replaced constantly, making it more economical. A long half-life also means a lower emission intensity, so a larger proportion of the radiation is absorbed by the sheet, making any changes in absorption more noticeable and increasing the sensitivity of the device.

Figure 17.2  Thickness gauges using radioactive sources are widely used in industry to monitor and control the thickness of materials produced, ranging from paper to plastics to steel

(The figure shows β particles from a shielded radioactive source passing through the sheet or film of material as it leaves the forming rollers; the amount of radiation absorbed by the sheet depends on its thickness. A detector and readout unit measure the transmitted radiation, and a feedback controller varies the pressure on the rollers to adjust the thickness.)
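The feedback loop described above can be sketched in a few lines of code. The following Python sketch is purely illustrative: the exponential absorption model, the gain and all variable names are assumptions made for the example, not values from the text. The detected intensity falls as the sheet thickens, and a simple proportional controller nudges the thickness back towards the set point.

```python
import math

# Hypothetical absorption model: detected intensity falls as the sheet thickens.
def detected_intensity(thickness_mm, i0=100.0, mu=0.8):
    return i0 * math.exp(-mu * thickness_mm)

target = detected_intensity(1.0)   # set point: the reading for a 1.0 mm sheet
thickness, gain = 1.3, 0.002       # sheet starts too thick; small proportional gain

for step in range(5):
    error = target - detected_intensity(thickness)  # reading too low => sheet too thick
    thickness -= gain * error                       # more roller pressure => thinner sheet
    print(f"step {step}: thickness = {thickness:.3f} mm")
```

The control direction matches the text: a sheet that is too thick absorbs too much radiation, the detector reads low, and the rollers respond with more pressure.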


Americium-241
Americium-241 is a co-emitter of alpha and gamma radiation with a half-life of 432 years.

Note: Americium-241 is also a transuranic element.

Use of the radioisotope
Americium-241 can be used in smoke detectors. Inside the smoke detector, americium-241 constantly releases alpha particles between the electrodes of the detector to set up a current flow. When there is an increased level of smoke particles, these particles migrate into the interior of the detector and disrupt the movement of the alpha particles. Consequently, the current flow decreases, which in turn triggers the alarm.

A smoke detector

Properties related to its use
Alpha radiation is stopped very easily, so its movement is readily disrupted by smoke particles. This forms the functional basis of the detector. The low penetration power of alpha radiation also makes americium-241 safe to use.

Agricultural use

Phosphorus-32
Phosphorus-32 is a beta emitter with a half-life of 14 days. It can be produced by neutron bombardment of naturally occurring phosphorus-31.

Use of the radioisotope
Phosphorus-32 can be used as a biological tracer to study natural processes such as nutritional uptake by plants, both in the natural environment and in agricultural settings. For example, phosphorus-32 can be introduced into plants or crops as radioactive phosphate ions (PO₄³⁻ containing ³²P). (Phosphate is an essential nutrient for the growth of plants and crops.) The plants or crops process the radioactive phosphate in the same way as they handle normal phosphate; the only difference is that the radioactive phosphate continues to emit beta radiation. By tracing the radiation, biochemical processes such as nutritional uptake, transportation and storage can be studied. The efficacy of phosphorus-based fertilisers can also be studied in a similar way.

Properties related to its use
The fact that phosphorus-32 can be easily introduced into biological systems and traced makes it ideal for this use. Compounds formed by phosphorus-32 have the same chemical properties as those formed by non-radioactive phosphorus (phosphorus-31), so they are indistinguishable to plants.

Cobalt-60
Use of the radioisotope
Cobalt-60 can be used in agricultural settings to irradiate crops after they have been harvested. Cobalt-60 produces gamma radiation that is energetic enough to kill the micro-organisms living on the crops. This enables the crops, such as wheat, to be preserved for longer.

Properties related to its use
Just as with cancer cells, gamma rays are damaging enough to kill these micro-organisms.

Engineering use

Sodium-24
Sodium-24 is a co-emitter of gamma and beta radiation. It has a half-life of 15 hours. Sodium-24 is produced in a nuclear reactor by neutron bombardment of naturally occurring sodium-23, similar to the production of cobalt-60.


Use of the radioisotope
Sodium-24 is used to detect leakage from underground water pipes. It is introduced into the water pipes as a compound such as ²⁴NaCl, and undergoes continuous decay to give off gamma and beta radiation. The radiation (mainly gamma) is then detected at ground level, with any abnormal distribution or increased level of detected radiation indicating a leak.

Properties related to its use
■■ Sodium-24 is a co-emitter of gamma and beta radiation. The gamma radiation is penetrative enough to reach the surface so that it can be detected.
■■ It has a short half-life of 15 hours, which is long enough for the process of detecting the leaks but not so long that it stays in the water system and harms the end users.

Many other gamma emitters are commonly used for detecting flaws in welding, ships and aircraft, similar to how X-rays may be used to detect breaks in bones.

17.3 Neutron scattering and probing

■■ Describe how neutron scattering is used as a probe by referring to the properties of neutrons

Apart from using neutrons to initiate nuclear fission reactions and artificial transmutation, neutrons can also be used to study the internal structure of matter. In 1994, Bertram Brockhouse (1918–2003) and Clifford Shull (1915–2001) shared the Nobel Prize in Physics for their pioneering work on the development of neutron scattering.

Definition

Neutron scattering or probing is the method that utilises the wave characteristics of neutrons to study the internal structure and properties of matter. The principle used by neutron scattering is very similar to that of electron microscopes and Braggs’ X-ray diffraction technique. When conducting neutron scattering, high flux neutrons are required; hence, the investigation needs to be carried out near or inside a nuclear reactor. The reason for using a large number of neutrons is that neutrons do not have any charge and therefore they are very hard to manipulate. Analogy: Using neutrons to probe an object is like pouring a bucket of paint onto an object—very little control exists. Therefore more paint is required than if the object is painted precisely using a paintbrush.

These high-flux neutrons are first made to pass through certain crystals, such as sodium chloride, so that they all possess the same amount of kinetic energy. The neutrons are then directed to bombard the material that is to be analysed. The neutrons collide with the atoms that make up the material, or more correctly with their nuclei, and subsequently lose a specific amount of energy according to the nature of the collisions. Side-on (glancing) collisions result in the neutrons losing less of their kinetic energy than head-on collisions, and the relative frequency of each type of collision is determined by the arrangement of the atoms that make up the material. Furthermore, collisions with light nuclei cause the neutrons to lose a significant amount of energy (to the atoms), whereas collisions with heavy nuclei cause the neutrons to bounce off without losing much energy at all.


Analogy: When a tennis ball is used to hit another tennis ball (representing a light atom), the first ball loses a significant amount of its kinetic energy to the second one, causing the first ball to slow down and the second one to start moving. On the other hand, when a tennis ball hits a wall (a heavy atom), it bounces back with almost the same speed, and hence almost the same kinetic energy.

Therefore, after the neutrons have interacted with the material, they will be scattered and returned with various levels of energy, and hence momentum. Recall that small particles, such as these neutrons, also exhibit wave characteristics, with λ = h/(mv), where mv is the momentum of the neutron. Hence, the scattered neutron waves will return with different wavelengths, which in turn will generate a specific interference pattern. Lastly, by analysing the interference pattern, the nature of the neutron waves, and therefore the nature of their initial interaction with the material, can be determined. From this, the internal structure and the composition of the sample material can be deduced.
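To get a feel for the wavelengths involved, the short sketch below (an illustrative calculation, not taken from the text) applies λ = h/(mv) to a neutron; the speed of about 2200 m s⁻¹ used here is a commonly quoted figure for thermal neutrons.

```python
# de Broglie wavelength of a neutron: lambda = h / (m * v)
h = 6.626e-34          # Planck's constant (J s)
m_neutron = 1.675e-27  # neutron mass (kg)

v = 2.2e3              # a typical thermal-neutron speed (m/s)
wavelength = h / (m_neutron * v)
print(f"lambda = {wavelength:.2e} m")   # ~1.8e-10 m, comparable to atomic spacings
```

The wavelength is of the same order as the spacing between atoms in a crystal, which is exactly why neutron waves diffract off the sample and produce an analysable interference pattern.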

The advantages of using neutron scattering
First, neutrons are not charged, so they do not interact with the electrons around the nucleus. If they do not hit the nucleus of an atom, they pass straight through it, which makes them extremely penetrative. Consequently, neutron scattering is able to analyse the entire depth of the sample material. This is an advantage compared to an electron microscope, which can only analyse the surface of matter or thin specimens because of the extensive interactions between the electrons used by the microscope and the electrons around the atoms of the specimen. A similar, but less extensive, problem is encountered with X-ray diffraction.

Second, neutrons are good for probing the nucleus. Again this is because neutrons are not charged, so they are able to penetrate through the electron cloud to reach the nucleus. Compared to the smaller electrons and the even smaller X-ray photons, neutrons are usually comparable in size to nuclei, which makes the interaction more efficient.

Finally, neutrons are very useful for probing light elements and proton-rich materials. In these materials, the relatively low number of electrons leads to poor results when they are imaged using an electron microscope or X-ray scattering, since both work on the electrons in the material. The fact that neutrons work on the nuclei circumvents this problem. Light elements and proton-rich materials make up organic matter, such as living tissues and viruses, so neutron scattering or probing has an important role in analysing these materials.

Some applications of neutron scattering
1. Finding structural faults in welds and metals.
2. Developing magnetic materials for computer data storage. Neutrons are neutral, so when they are used to probe magnetic materials they are not influenced by the magnetic field.
3. Developing new superconductor materials, again because of their neutral charge.
4. Identification and study of viruses, which are rich in light elements and protons and can therefore be analysed efficiently using neutron probing.

Neutron scattering is an expensive process. It is also difficult to use, as neutrons are very difficult to control, and the actual investigation needs to be carried out near or inside a nuclear reactor. These factors make neutron scattering an uncommon process.


The Manhattan Project

secondary source investigation
PFAs: H1, H4, H5
Physics skills: H12.3a, b, d; H12.4f; H14.1b, d, f, g, h
SR 'Assess'

Manhattan Project officials, including Dr Oppenheimer (white hat), inspect the detonation site of the Trinity atomic bomb test, 16 July 1945.

■■ Gather, process and analyse information to assess the significance of the Manhattan Project to society

The Manhattan Project was the code name used by the US Army for the project to develop atomic bombs during World War II. The project got its name from the Manhattan district of the US, where much of the early work on the atomic bombs was done. The key chronological events and impacts of the project are summarised below. You are encouraged to conduct your own research, using internet and library resources, by searching for 'Manhattan Project', to expand your knowledge of the project and appreciate its significance.
■■ In 1938, German scientists discovered nuclear fission.
■■ Later in the year, refugees from the Nazis, including Leo Szilard, Edward Teller and Eugene Wigner, raised the possibility that Germany was using nuclear fission technology to develop nuclear weapons.
■■ In 1939, Leo Szilard, Edward Teller and Eugene Wigner convinced Albert Einstein to write his famous letter to US President Franklin D. Roosevelt advocating the development of atomic bombs. President Roosevelt set up an advisory committee on uranium in October 1939, headed by L. J. Briggs, director of the National Bureau of Standards. This marked the starting point of the Manhattan Project.
■■ In March 1940, it was confirmed that uranium-235 was able to undergo fission, whereas uranium-238 was not.
■■ In 1941, plutonium-239 was also identified as capable of undergoing fission, at a faster rate than uranium-235.
■■ On 7 December 1941, Japanese forces attacked Pearl Harbor and the US entered the war. This accelerated the development of the project, and the War Department was given joint responsibility for it.
■■ In December 1942, Fermi successfully carried out the first ever controlled fission reaction in a basement at the University of Chicago (see Chapter 16). This gave a huge boost to the project.
■■ Unlike Fermi's reaction pile, the atomic bombs needed high concentrations of fissionable material in order to produce very rapid fission reactions for the explosion. Between 1942 and 1943, two huge industrial plants were constructed to provide fissionable material for the bombs: one at Oak Ridge, Tennessee, and the other at the Hanford Engineer Works, Washington.
■■ At Oak Ridge, uranium-235 was concentrated from its natural abundance of about 0.7% to 95% by two methods (a numeric sketch of both separation factors follows this chronology):
   – Gaseous diffusion separation: Uranium (containing both uranium-238 and uranium-235) was first reacted to form uranium hexafluoride (UF6) gas. The gas was then allowed to pass through a series of membranes. The heavier ²³⁸UF₆ diffuses slightly more slowly than the lighter ²³⁵UF₆, and this difference in the speed of migration through the membranes allowed separation.
   – Electromagnetic separation: Uranium was ionised and then allowed to pass through a uniform magnetic field, perpendicular to the field lines. As shown in Chapter 10, when a charged particle passes through a magnetic field perpendicularly, it describes an arc of a circle. The radius

of the circle is given by the equation r = mv/(qB). (Why?) Because uranium-238 ions have a slightly higher mass than uranium-235 ions, they describe a slightly larger radius as they pass through the magnetic field. This allowed the separation to be made.


■■ At Hanford, plutonium-239 was produced by transuranic reactions: uranium-238 was converted into plutonium-239 by neutron bombardment, as described in Chapter 16 (page 283).
■■ Between 1942 and 1945, under the leadership of Robert Julius Oppenheimer, the actual weapon design took place simultaneously at Los Alamos, New Mexico. The weapon design focused on how to assemble the fissionable materials into a bomb, as well as on the construction of a deliverable weapon.
■■ By 1945, enough concentrated uranium-235 had been produced for it to be packed into a gun-barrel-type atomic bomb, which consisted of two pieces of uranium, both sub-critical in mass, that would be brought together quickly upon detonation to achieve a super-critical mass. The plutonium-239 bomb, on the other hand, was an implosion-type bomb, which used many sub-critical pieces of plutonium that needed to be brought together even more quickly to reach a super-critical mass. With both designs, an initiating neutron was required to start the fission reaction.
■■ On 16 July 1945, a test explosion was performed near Alamogordo, New Mexico. This bomb produced energy equivalent to 15 000 to 20 000 tons of TNT.
■■ On 6 August 1945, the uranium-235 bomb was dropped on Hiroshima, and three days later the plutonium-239 bomb was dropped on Nagasaki. Japan announced its surrender six days after the Nagasaki bombing.
■■ The two bombs together killed about 100 000 people and wounded a similar number. The radioactivity produced by the bombs (mainly beta emitters) was spread by the wind and remained a threat to lives for decades after the bombs were dropped.
■■ After the war, many scientists discovered that the Germans had not been close to developing atomic bombs. Einstein particularly regretted the letter he had written to the US president advocating the development of the nuclear weapon.
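As flagged above, the physics behind the two separation methods can be checked numerically. The sketch below is illustrative only: the UF6 molar masses are standard approximate values, not figures from the text. It compares the Graham's-law diffusion rate ratio for ²³⁵UF₆ versus ²³⁸UF₆ with the radius ratio from r = mv/(qB) for ions of equal charge and speed.

```python
import math

# Gaseous diffusion (Graham's law): the lighter 235-UF6 diffuses slightly faster,
# by a factor of sqrt(m_heavy / m_light).
m_235_uf6 = 235 + 6 * 19   # approximate molar mass of 235-UF6 (349)
m_238_uf6 = 238 + 6 * 19   # approximate molar mass of 238-UF6 (352)
print(f"diffusion rate ratio: {math.sqrt(m_238_uf6 / m_235_uf6):.4f}")  # ~1.0043

# Electromagnetic separation: r = m*v/(q*B), so for equal charge and speed
# the radius scales directly with the ion mass.
print(f"radius ratio: {238 / 235:.4f}")  # ~1.0128
```

Both factors are tiny (a fraction of a per cent), which is why the plants were enormous: the diffusion process had to be repeated through thousands of membrane stages to reach 95% enrichment.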

Impacts of the Manhattan Project

The Hiroshima Peace Memorial, Genbaku Dome, believed to be the exact spot where the atomic bomb exploded during World War II: it has been preserved as a reminder of the destruction

The Manhattan Project can be seen to have both positive and negative impacts. You can choose to emphasise the positive or negative impacts, or present a balanced view.

Positive impacts
1. The project ended World War II promptly: Japan surrendered within days of the two atomic bombings. Some people argue that the use of the atomic bombs, although they killed many people, also saved many other lives by ending the war quickly, avoiding a long invasion with even more casualties.
2. The project facilitated the development of technologies to produce fissionable fuels. The methods of gaseous diffusion and electromagnetic separation, as well as the method for producing plutonium-239, are still used today to produce fissionable materials for nuclear reactors and nuclear power stations.
3. The project accelerated the development of nuclear technologies, which gave us the ability to manipulate nuclear power. The best scientists in the world gathered to work on the project, which led to rapid advancement of nuclear technologies during World War II. Without the Manhattan Project, our ability to control nuclear power would now be much less developed.
4. It has been argued that the existence of nuclear weapons prevented a war between the USSR and NATO (the North Atlantic Treaty Organization). The fact that both possessed nuclear weapons meant both were afraid to enter another war; they knew the damage that nuclear weapons could cause.


Negative impacts
1. The project was expensive: it cost US$2 billion in 1945, which would be far more in today's currency.
2. One hundred thousand people were killed when the two bombs were dropped and many more were wounded. The bombs also left behind radiation, so that even decades later Japanese people were still dying from radiation-related diseases such as leukemia.
3. Just as during the Manhattan Project, governments continue to put more and more money into nuclear research, because countries that lack nuclear weapons feel more vulnerable. This huge amount of money could be used for social welfare and medical research instead. Many countries' nuclear policies remain controversial.
4. Nuclear weapons are so powerful that humans now have the power to destroy themselves. A world war involving nuclear weapons would effectively end the human race and much of the life on Earth.

17.4 Particle accelerators

■■ Identify ways by which physicists continue to develop their understanding of matter, using accelerators as a probe to investigate the structure of matter

The term particle accelerator is a collective name for a series of devices that are designed to use electric fields and/or magnetic fields to accelerate charged particles to very high speeds before smashing the particles against a target. This can subsequently cause transmutation or breaking up of the particles as a result of the ‘smashing’. The latter forms the basis by which sub-atomic and fundamental particles were discovered. The fundamental particles are discussed in the next section. Note: Only charged particles can be influenced by electric and magnetic fields, therefore, particle accelerators work well for particles like protons, electrons and various ions. They do not work for neutrons!

There are many different types of particle accelerators. Some examples include:
■■ circular particle accelerators: cyclotrons, synchrocyclotrons, synchrotrons and betatrons
■■ linear accelerators
Most particle accelerators, except the linear accelerator, are circular. Although these particle accelerators are designed to utilise electric fields and/or magnetic fields in different ways to accelerate charged particles, they share a similar functional principle. The cyclotron will be examined in detail in order to illustrate this basic functional principle.

Cyclotron
A cyclotron consists of two D-shaped hollow metal cases, called 'Dees', which are mounted between the poles of two powerful magnets. The diameter of the Dees ranges from well under a metre in early designs to several metres in large machines. A schematic drawing of a cyclotron is shown in Figure 17.3 (a); Figure 17.3 (b) shows a simplified diagram of the same cyclotron viewed from the top. The charged particle (source) that is to be accelerated is placed in between the two Dees, slightly off the centre of the accelerator.


Figure 17.3  (a) A cyclotron, showing the Dees between the magnet poles, the high-frequency alternating voltage, the electric field between the Dees, and the spiralling particle path from the source to the target; (b) top view of a cyclotron; (c) top view, after the charged particle has exited the Dee on the left and follows a pathway of larger radius than before (magnetic field out of the page)

The electric field produced between the Dees, as a result of the alternating voltage supplied to the Dees, accelerates the particle, increasing its linear velocity, and the particle enters the Dee on the left, as shown in Figure 17.3 (a). As the particle travels inside the hollow Dee, it experiences no further electric force (the electric field inside a hollow conductor is zero) and is acted on only by the force due to the uniform magnetic field running perpendicularly through the Dees. Under the influence of this magnetic force, the particle describes an arc of a circle inside the Dee and then exits. At this point the polarity of the Dees is reversed by the AC voltage, so that the particle is again accelerated while it is between the two Dees. This is shown in Figure 17.3 (c). The particle then enters the Dee on the right with an increased linear velocity and is again bent by the magnetic field. As this process repeats, the particle continues to describe a circular pathway inside the Dees and is accelerated by the electric field every time it passes between them. Thus the particle's velocity increases continuously, accompanied by an ever-increasing radius of its pathway, as shown in Figure 17.3 (a).

Note: Since r = mv/(qB), as v increases, r increases.

307

from quanta to quarks

Once the particle has reached the desired velocity, it is allowed to exit the accelerator, resuming linear motion to collide with a target. This high-velocity collision will either:
■■ smash the particle apart for analysis, or
■■ create transmutation.
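The exit speed follows directly from r = mv/(qB): rearranging gives v = qBr/m. The sketch below (using standard values for the proton's charge and mass) evaluates this for the field and radius that appear in revision question 6(c) at the end of this chapter.

```python
# Exit speed of a cyclotron: from r = m*v/(q*B), v = q*B*r/m.
q = 1.602e-19   # proton charge (C)
m = 1.673e-27   # proton mass (kg)

B = 5.00e-3     # magnetic flux density (T)
r = 100.0       # final radius of the path (m)

v = q * B * r / m
print(f"v = {v:.2e} m/s")   # ~4.8e7 m/s, fast but still well below c
```

Notice that v = qBr/m contains no reference to how many times the particle crossed the gap: the final radius and field alone fix the exit speed.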

Other circular particle accelerators
A synchrocyclotron operates similarly to a cyclotron, the only difference being that the synchrocyclotron takes into account the effect of mass dilation as the particle speeds up, which would otherwise delay the particle's arrival at the opposite Dee. By taking mass dilation into consideration, the particle can be accelerated to a higher velocity. A synchrotron, on the other hand, utilises a variable magnetic field (compared to the constant magnetic field used by the cyclotron and synchrocyclotron), such that the increase in the velocity of the particle is coupled with a proportional increase in the applied magnetic field, so that the radius of the pathway described by the particle is kept constant. This helps to reduce the size of the device. Betatrons have specific designs that cater for accelerating electrons or positrons.

Linear accelerator

Figure 17.4  A linear accelerator


A linear accelerator consists of many hollow tubes, called drifting tubes, aligned in a line. The drifting tubes are made progressively longer, and their total length can reach a few kilometres. Every second tube is connected to one terminal of a high-frequency AC power supply, so the polarities of the tubes alternate. As shown in Figure 17.4, to accelerate a positively charged particle, the first tube is made negative. This attracts the particle and accelerates it through the tube. Upon its arrival at the second tube, the polarities of the tubes are reversed, so that the first tube (which was negative before) is now positive and the second tube is now negative. As a result, the positively charged particle is repelled by the positive tube behind it and attracted to the negative tube in front of it, and so is further accelerated. As the whole process repeats, the particle increases its velocity as it travels through all of the tubes of the accelerator. Finally, the particle exits the last tube to strike the target. Note that the tubes are made progressively longer in order to accommodate the increase in the velocity of the charged particle, so that the particle always arrives at a gap between tubes at a constant time interval, in synchrony with the timing of the polarity change.

Note: As the particle speeds up, it is able to travel a greater distance in a given time.

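The note above can be made concrete. If the particle gains the same energy at every gap, each tube must hold the particle for half an AC period, so the n-th tube has length vₙ × T/2. The sketch below is a simplified, non-relativistic illustration; the gap voltage and AC frequency are invented figures for the example.

```python
import math

# Non-relativistic drift-tube lengths: each tube must hold the particle
# for half an AC period, so length_n = v_n * T / 2.
q, m = 1.602e-19, 1.673e-27   # proton charge (C) and mass (kg)
V_gap = 50e3                  # energy gained per gap: 50 keV (invented figure)
f = 10e6                      # AC frequency: 10 MHz (invented figure)
half_period = 1 / (2 * f)

energy = 0.0
for n in range(1, 6):
    energy += q * V_gap               # kinetic energy after the n-th gap
    v = math.sqrt(2 * energy / m)     # non-relativistic speed
    print(f"tube {n}: length = {v * half_period:.3f} m")
```

Because v grows as the square root of the number of gaps crossed, the tube lengths grow as √n, which is the progressive lengthening the text describes.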


17.5 The standard model of matter

■■ Discuss the key features and components of the standard model of matter, including quarks and leptons

The standard model of matter was initially developed in an attempt to describe all matter and forces in the Universe in terms of fundamental particles. The standard model can be summarised by the flow chart shown in Figure 17.5.

(Figure 17.5 shows everything divided into matter particles and force particles (bosons); the matter particles divide into quarks, which combine to form baryons and mesons, and leptons.)

Quarks
In 1964, two physicists, Murray Gell-Mann and George Zweig, proposed the existence of particles with charges that were sub-multiples of the electron charge, termed quarks. The quarks were later recognised as fundamental particles: the smallest particles, which cannot be broken down further. In simple terms, there are six 'flavours' of quarks as well as six corresponding anti-quarks. The names, symbols and charges of the six flavours of quarks are summarised in Table 17.2. Although the six anti-quarks are not shown, their existence should be emphasised. For instance, an anti-up quark has the symbol ū and a charge of –2/3, while an anti-down quark has the symbol d̄ and a charge of +1/3. There are also other differences between quarks and anti-quarks at a quantum mechanical level, but these are beyond the scope of this course.

Figure 17.5  The standard model of matter

Table 17.2  The six 'flavours' of quarks

Generation | Quark   | Symbol | Charge
1          | Up      | u      | +2/3
1          | Down    | d      | –1/3
2          | Charm   | c      | +2/3
2          | Strange | s      | –1/3
3          | Top     | t      | +2/3
3          | Bottom  | b      | –1/3


Hadrons
Although quarks can be identified as fundamental particles through the use of particle accelerators, they usually do not exist by themselves, as they are unstable. Quarks usually exist in more stable forms by combining with one or two other quarks; these combinations of quarks are known as hadrons. All hadrons have integral charges. There are two types of hadrons: baryons (three-quark combinations) and mesons (two-quark combinations).

Baryons
Baryons make up everyday matter, as they form the nucleons. Protons consist of two up quarks and one down quark, hence a charge of 2 × (+2/3) + (–1/3) = +1. Neutrons, on the other hand, consist of one up quark and two down quarks, giving an overall charge of (+2/3) + 2 × (–1/3) = 0.
All quarks, and thus all baryons, act through the strong nuclear force. This is why the strong nuclear force acts equally between proton and proton, proton and neutron, and neutron and neutron: all are made from different numbers of the same quarks.

Mesons
A meson consists of a quark and an anti-quark. One example is the positive pion (π⁺), which is made from an up quark (+2/3) and an anti-down quark (+1/3), giving it an overall charge of +1. Mesons are generally unstable and therefore short-lived; this makes them hard to detect or identify.
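These charge sums are easy to verify with exact fractions. The following sketch (illustrative only) adds up the quark charges from Table 17.2 for the proton, the neutron and the positive pion.

```python
from fractions import Fraction

# Quark charges from Table 17.2; an anti-quark carries the opposite charge.
charge = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}
charge["anti-d"] = -charge["d"]

proton  = [charge["u"], charge["u"], charge["d"]]   # uud -> +1
neutron = [charge["u"], charge["d"], charge["d"]]   # udd ->  0
pion    = [charge["u"], charge["anti-d"]]           # u + anti-d -> +1

for name, quarks in [("proton", proton), ("neutron", neutron), ("pi+", pion)]:
    print(f"{name}: charge = {sum(quarks)}")
```

Using exact fractions rather than floating-point numbers guarantees the totals come out as the integers that all hadrons must carry.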

Leptons
Leptons are another type of fundamental particle; they have either very little or no mass. There are six 'flavours' of leptons, also grouped into three generations. Within each generation, an electrically charged lepton is coupled with its corresponding neutrino. This is summarised in Table 17.3. Note that, just as for quarks, for every lepton there is a corresponding anti-lepton. For example, there exist anti-electrons, termed positrons, which have the symbol e⁺. Positrons differ from electrons quantum mechanically but, most noticeably, they carry a charge of +1. The existence of anti-electrons (positrons) is discussed in Chapter 16 (recall that they are emitted during β⁺ decay). All leptons interact through the weak nuclear force, and the charged leptons also interact through the electromagnetic force.

Table 17.3  The six 'flavours' of leptons

Generation | Lepton            | Symbol | Charge
1          | Electron          | e⁻     | –1
1          | Electron-neutrino | νe     | 0
2          | Muon              | μ⁻     | –1
2          | Muon-neutrino     | νμ     | 0
3          | Tau               | τ⁻     | –1
3          | Tau-neutrino      | ντ     | 0


A word about 'generation'
Note that in Tables 17.2 and 17.3 the category 'generation' is used. The first-generation particles are the ones that make up ordinary matter. For instance, the generation 1 quarks make up the nucleons, whereas the generation 1 leptons include the electrons and the electron-antineutrinos that are released during ordinary β⁻ decays. The second-generation particles are less stable and quickly decay to form first-generation particles; the third-generation particles are even less stable and decay rapidly to form second-generation particles. The fact that the second- and third-generation particles are unstable and short-lived means they cannot constitute everyday matter and are harder to detect. Also, as the generation number increases, the mass of the particles increases.

Bosons: the force particles
As discussed in Chapter 16, there are four fundamental forces in the Universe. In the standard model of matter, these four forces are thought to act through the exchange of force particles, called bosons:
1. The electromagnetic force acts through photons.
2. The strong nuclear force acts through gluons.
3. The weak nuclear force acts through weakons (the W and Z bosons).
4. Gravity was thought to act through gravitons; however, unlike the other force particles, gravitons remain hypothetical and their existence has not been confirmed. They were included in the standard model for the sake of completeness.
The way that force particles are thought to convey attraction between matter particles is by having the matter pull on the force particles as they are exchanged, whereas repulsion is conveyed by having the force particles pushed away as they are exchanged.

Conclusion
The standard model of matter, with its quarks, leptons and bosons, is an area of physics that has existed for more than 40 years. We may consider this model as a further advancement in our understanding of the atom. Scientists have come a long way from the 'primitive' model of the atom suggested by Thomson, through the idea of quanta and Bohr's model, and eventually to the more sophisticated quantum mechanics with which de Broglie, Wolfgang Pauli and Heisenberg further advanced the model of the atom.

chapter revision questions

1. For a typical nuclear power station:
(a) Describe two features of the use of the primary coolant.
(b) Describe the role of the heat exchanger.
(c) Describe the role of the secondary coolant and the cooling pond.
(d) Explain what type of energy transformation is taking place in the nuclear reactor.

2. Describe the function of the nuclear reactor at Lucas Heights, Sydney.


3. Radioisotopes have extensive applications in:
(a) medicine
(b) industry
(c) agriculture
(d) engineering
For each of the above fields:
(i) Choose a radioisotope and describe how it can be used in this field.
(ii) Discuss the properties of the radioisotope that make it suitable for this particular use.
(iii) Describe the production of this radioisotope.

4. (a) Describe the use of neutrons for probing the internal structure of matter.

(b) Why are neutrons particularly useful for probing organic matter and materials with magnetic properties?

5. Regarding the Manhattan Project:

(a) What were the events that led to the initiation of the project?
(b) What were the methods developed for preparing the fuels required for the atomic bombs?
(c) Why did the bomb design have to be carried out at the same time as the nuclear fuel preparation, and what could be some of the implications of proceeding with both simultaneously?
(d) Evaluate the social impacts of the project.

6. (a) Describe the functional principle of a cyclotron that allows it to accelerate a charged particle to high speeds.
(b) Will such a particle accelerator work for a sodium ion? Explain your answer.
(c) Within this cyclotron, a proton is accelerated until the radius of its path is 100 m. If the cyclotron provides a magnetic field strength of 5.00 × 10⁻³ T, determine the speed of this proton.

SR

Answers to chapter revision questions


7. (a) What are quarks?

(b) There are six 'flavours' of quarks, plus their corresponding anti-quarks. What are they?
(c) What are leptons?
(d) There are six 'flavours' of leptons, along with their corresponding anti-leptons. What are they?
(e) What are the four classes of bosons and what are their roles?

SR

Mind map

medical physics

CHAPTER 18

Ultrasound
The properties of ultrasound waves can be used as diagnostic tools

Introduction
The option module Medical Physics focuses on the designs and physical principles behind various imaging methods used in clinical medicine. These imaging methods are: ultrasound; X-rays and computed axial tomography (CAT or CT); endoscopies; nuclear medicine and positron emission tomography (PET); and magnetic resonance imaging (MRI). In a hospital setting, an ultrasound scan can be done with a portable device or in the radiology department. X-ray and CT are the most commonly performed scans in the radiology department. Endoscopies, depending on the type, are used by different departments in the hospital: for instance, colonoscopies (see Chapter 19) are performed by gastroenterology departments, whereas arthroscopies (see Chapter 20) are performed by orthopaedic (bone) surgeons. MRI is only available in the radiology departments of large hospitals or private hospitals, as it is very expensive. The advantages and disadvantages of the clinical use of these scans, as well as their cost-effectiveness, will be evaluated throughout this module. By studying this module, you should be able to appreciate why certain scans are chosen for particular purposes, in terms of both accuracy and cost-effectiveness. This chapter discusses the physical principles behind ultrasound as well as its clinical uses.

18.1 Sound waves

SR Simulation: Sound harmonics

Sound waves are energy that propagates through a medium, which causes the particles of the medium to vibrate back and forth along the direction of the propagation. For this reason, they are classified as mechanical waves, since they require a medium to travel through. The nature of the vibration of the particles makes sound waves longitudinal waves.

Figure 18.1  A sound wave; note the compression, rarefaction and wavelength


Note: For a longitudinal wave, particles vibrate back and forth, in contrast to a transverse wave, where particles vibrate up and down perpendicularly to the direction of the wave propagation.

A representation of a sound wave is shown in Figure 18.1. Note the regions where particles are closer together, known as compressions, and the regions where particles are further apart, known as rarefactions. The distance between two consecutive compressions (or two consecutive rarefactions) is constant and is termed the wavelength.

18.2 Ultrasound

■■ Identify the differences between ultrasound and sound in normal hearing range

The human ear can hear sound with frequencies between 20 Hz and 20 000 Hz. The frequency of a sound wave corresponds to its pitch: a low frequency corresponds to a low pitch and a high frequency to a high pitch. Ultrasound consists of sound waves with frequencies higher than the upper limit of the human hearing range, that is, greater than 20 000 Hz. Ultrasound exists in nature: it is produced by animals such as dolphins and bats as part of their navigation systems. For clinical use, ultrasound waves are produced artificially via the piezoelectric effect, which is discussed in the next section. The ultrasound produced for clinical use usually has a specific frequency determined by the nature of the body part being scanned. Generally, a higher frequency produces a better resolution, but a lower frequency increases the penetration. For a superficial structure, such as the skin or a muscle, the sound waves do not need to penetrate far, so a high frequency (say 10 MHz) can be used in order to produce better quality images. Scanning a deep body organ, such as the liver, requires more penetrative ultrasound waves, and hence a lower frequency (say 1 MHz); the trade-off is a lower resolution for the images produced.

Note: The intensity or loudness of a sound wave is determined by its amplitude.
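The resolution side of this trade-off follows from the wave equation v = fλ: a higher frequency means a shorter wavelength, and finer detail can be resolved. The sketch below is an illustrative calculation comparing the two frequencies mentioned above, using a typical soft-tissue sound speed (see Table 18.1 later in this chapter).

```python
# Wavelength in tissue: v = f * lambda, so lambda = v / f.
v_tissue = 1.54e3   # typical soft-tissue sound speed (m/s), see Table 18.1

for f in (1e6, 10e6):                 # 1 MHz (deep organs) vs 10 MHz (superficial)
    wavelength_mm = v_tissue / f * 1000
    print(f"{f/1e6:.0f} MHz -> wavelength = {wavelength_mm:.2f} mm")
# 1 MHz -> 1.54 mm; 10 MHz -> 0.15 mm (finer detail, but less penetration)
```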

18.3 Piezoelectric materials and the piezoelectric effect

■■ Describe the piezoelectric effect and the effect of using an alternating potential difference with a piezoelectric crystal

The piezoelectric effect forms the basis for the production and detection of ultrasound waves.

Definition

The piezoelectric effect is the phenomenon where mechanical vibrations of a substance are converted into electric signals and vice versa.



Note: Dipole molecules are molecules that have a positive pole at one end and a negative pole at the other.

Figure 18.2  (a) A piezoelectric material in its ‘normal’ state

Figure 18.2  (b) A piezoelectric material when placed inside an electric field (when a potential difference is applied): the dipoles realign and the crystal changes dimension

Piezoelectric materials (crystals) are materials that are able to demonstrate the piezoelectric effect. Some common piezoelectric crystals include quartz and barium titanate. Piezoelectric crystals are made up of dipole molecules arranged in a lattice form.


Normally the arrangement of the dipole molecules is such that the piezoelectric crystal does not express any net polarity. This is shown in Figure 18.2 (a). When a piezoelectric crystal is subjected to a potential difference (electric field), its dipole molecules realign, and the new orientation of these molecules changes the dimensions of the crystal. In the case shown in Figure 18.2 (b), the crystal expands. If an alternating voltage (a voltage that swings from a positive maximum, through zero, to a negative maximum and then repeats, as a sine wave) is used, the piezoelectric crystal will continually alternate between expansion and contraction back to its original dimensions (when the AC voltage swings to zero). This results in vibrations, which in turn vibrate the air to produce sound. See Figure 18.2 (c). Not surprisingly, the frequency of the vibration of the piezoelectric crystal, which is determined by the frequency of the AC supply, determines the frequency of the sound waves produced. When the frequency of the AC supply is sufficiently high, ultrasound waves are produced. Also, by adjusting the frequency of the AC supply, ultrasound with a specific frequency may be produced. The same piezoelectric crystal can also be used to convert ultrasound waves into electric signals. When an ultrasound wave strikes the crystal, it causes the crystal to resonate.

Note: To resonate, in this case, is to vibrate at the same frequency as the incoming ultrasound wave.

Figure 18.2  (c) The vibration of the piezoelectric material (expansion due to the realignment of the dipole molecules, followed by contraction to the original dimension) produces ultrasound waves


The vibration of the crystal results in a dimension change, which is accompanied by a realignment of the dipole molecules. This realignment allows the crystal to express a net polarity; hence, an electric field or potential difference is created. This causes a current to flow if there is a complete circuit connected to the piezoelectric crystal. (This is essentially the reverse of the process shown in Figure 18.2b.) Because the vibrations cause a constantly changing alignment of the dipole molecules, a changing current that reflects the pattern of the vibration (and hence the sound wave) is produced.

Transducers
A transducer unit is a functional unit that produces and detects ultrasound. One or more transducer units may be contained in a single ultrasound probe, the device held by the clinician to conduct an ultrasound scan. Each transducer unit contains a piezoelectric crystal, and in most transducer units the same piezoelectric crystal is used both to produce and to receive ultrasound.

18.4 The basic principle behind ultrasound imaging

Ultrasound scans are widely used in clinical medicine to produce images of body organs. The details of the design and functional principles of an ultrasound machine can be quite complicated; however, they can be simplified by considering the schematic drawing shown in Figure 18.3. To summarise the sequence of events:
1. Ultrasound waves are produced by the piezoelectric crystal embedded inside the transducer (probe), as discussed.
2. The ultrasound waves penetrate through the body wall, with a small portion of the waves being reflected.
3. The ultrasound waves hit the near wall of the target organ. Some of the wave energy is reflected while some passes through.
4. The ultrasound waves reach the far wall of the organ and behave as in step 3.
5. The reflected waves (echoes) from steps 3 and 4 again penetrate the body wall.
6. The returning ultrasound waves are picked up by the transducer and converted into electrical signals by the embedded piezoelectric crystal. These signals are then fed through a computer to construct the images of the scanned organ.
(Note that the numbers shown in the diagram correspond to the numbers used in the text.)
The reflection and penetration of the ultrasound waves, as well as the image formation process performed by the computer, are explained in detail later in this chapter.

An ultrasound machine

Figure 18.3  Performing an ultrasound scan of a body organ


18.5 A closer look at the reflection and the penetration of ultrasound waves

■■ Define acoustic impedance Z = ρv and identify that different materials have different acoustic impedances
■■ Describe how the principles of acoustic impedance and reflection and refraction are applied to ultrasound
■■ Define the ratio of reflected to initial intensity as Ir/Io = [Z2 – Z1]² / [Z2 + Z1]²
■■ Identify that the greater the difference in acoustic impedance between two materials, the greater is the reflected proportion of the incident pulse
■■ Solve problems and analyse information to calculate the acoustic impedance of a range of materials, including bone, muscle, soft tissue, fat, blood and air, and explain the types of tissues that ultrasound can be used to examine
■■ Solve problems and analyse information using Z = ρv and Ir/Io = [Z2 – Z1]² / [Z2 + Z1]²

SR Worked example 20

It has been mentioned that every time ultrasound waves meet the boundary between two media, such as the body wall or the wall of the target organ, some of the wave energy is reflected while some penetrates. Although it is the reflected waves that are analysed to form images, the penetrated waves are also important, as they allow the ultrasound to reach deeper body organs beyond the first boundary. An obvious question, then, is: what determines the amount of reflection and penetration of the ultrasound waves as they hit a boundary? After studying this section, it will be apparent that the relative amounts of reflected and penetrated ultrasound depend on the difference between the acoustic impedances of the two media the waves are propagating through.

Acoustic impedance

Definition

Acoustic impedance of a material is a measure of how readily a sound wave can pass through it: the higher the acoustic impedance, the more difficult it is for the sound wave to pass through. Mathematically, the acoustic impedance of a material is given by the equation:


Z = ρv

Where:
Z = the acoustic impedance of the material, measured in kg m⁻² s⁻¹ or rayls
ρ = the density of the material, measured in kilograms per cubic metre (kg m⁻³)
v = the speed of the sound wave when travelling through this material, measured in m s⁻¹

Example 1

When an ultrasound wave travels through muscle tissue, it has a speed of 1.57 × 10³ m s⁻¹. Knowing that muscle tissue has a density of 1.06 × 10³ kg m⁻³, calculate the acoustic impedance of the muscle tissue.

Solution

Z = ρv = (1.06 × 10³) × (1.57 × 10³) = 1.66 × 10⁶ kg m⁻² s⁻¹ (rayls)

Table 18.1 lists the density of some common types of tissue found in a human body, as well as the speed of sound as it travels through each tissue. Try to calculate the acoustic impedance for each of the tissue types.

Table 18.1

Tissue/material | Density (kg m⁻³) | Speed of sound (m s⁻¹)
Air             | 1.25             | 330
Water           | 1.00 × 10³       | 1.54 × 10³
Bone            | 1.75 × 10³       | 3.72 × 10³
Muscle          | 1.06 × 10³       | 1.57 × 10³
Fat             | 953              | 1.48 × 10³
Blood           | 1.00 × 10³       | 1.56 × 10³
Brain           | 1.04 × 10³       | 1.52 × 10³
Liver           | 1.07 × 10³       | 1.55 × 10³
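As the text suggests, Z = ρv can be computed for every entry in Table 18.1. A short sketch that does so (the values are copied from the table):

```python
# Acoustic impedance Z = rho * v for the materials in Table 18.1.
tissues = {                 # name: (density kg/m^3, sound speed m/s)
    "air":    (1.25,    330),
    "water":  (1.00e3,  1.54e3),
    "bone":   (1.75e3,  3.72e3),
    "muscle": (1.06e3,  1.57e3),
    "fat":    (953,     1.48e3),
    "blood":  (1.00e3,  1.56e3),
    "brain":  (1.04e3,  1.52e3),
    "liver":  (1.07e3,  1.55e3),
}

for name, (rho, v) in tissues.items():
    print(f"{name:>6}: Z = {rho * v:.3g} rayls")   # e.g. muscle: 1.66e+06
```

Notice how close the impedances of the soft tissues are to one another, and how far away air and bone sit; this is the key to the reflection results in the next subsection.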

Reflection and penetration
Acoustic impedance by itself does not determine the reflection or penetration of the sound waves. Rather, it is the difference in acoustic impedance encountered as the sound waves pass from one medium into another. More precisely, the proportion of sound waves that will be reflected when they hit the boundary between any two media can be determined using the equation:


Ir / Io = [Z2 – Z1]² / [Z2 + Z1]²

Where:
Ir = the intensity of the reflected sound waves
Io = the original (incident) intensity of the sound waves
Z1 = the acoustic impedance of medium 1, and Z2 = the acoustic impedance of medium 2; both are measured in kg m⁻² s⁻¹ or rayls

The ratio Ir/Io represents the proportion of the sound waves being reflected and can be expressed as a percentage if necessary. Because Ir and Io appear as a ratio, they do not need any specific units, as long as both are measured in the same units.

Important things to note for the above equation:
■■ When applying this equation to any two media, it does not matter which medium is treated as 'medium 1' and which as 'medium 2': Z2 + Z1 is the same as Z1 + Z2, and the square of Z2 – Z1 is the same as the square of Z1 – Z2.
■■ The proportion of the sound waves that penetrates is simply 1 – Ir/Io.
■■ The bigger the difference between Z1 and Z2, the larger the proportion of the sound waves reflected and the smaller the proportion that penetrates. The reverse is also true. This is illustrated in Figures 18.4 (a) and (b).
■■ When Z1 and Z2 are equal, there is no reflection of the sound waves.

Figure 18.4  (a) When there is a big difference between Z1 and Z2, more of the incident sound waves are reflected and a smaller proportion passes through the boundary between medium 1 and medium 2; (b) when the difference between Z1 and Z2 is small, a greater proportion of the sound waves penetrates and fewer are reflected

Example 1

Determine the percentage of ultrasound waves that will be reflected when they reach the junction between air and fat tissues.


Solution

Ir/Io = [Z2 – Z1]² / [Z2 + Z1]²
Z1 = acoustic impedance of air = 1.25 × 330 ≈ 413 rayls
Z2 = acoustic impedance of fat = 953 × (1.48 × 10³) ≈ 1.41 × 10⁶ rayls
Ir/Io = [(1.41 × 10⁶) – 413]² / [(1.41 × 10⁶) + 413]² = 0.999 = 99.9%

The clinical significance of this particular example is that when performing an ultrasound scan, even if the transducer is pushed tightly against the skin, a small air gap will still exist between the transducer and the skin (for the purpose of this discussion, assume the skin tissue is made up of fat). The existence of the air–fat junction means that 99.9% of the ultrasound waves produced are reflected at the skin surface and only 0.1% are transmitted. This small amount of ultrasound is not adequate for forming images of deep organs. To eliminate the air gap, a gel with an acoustic impedance similar to that of skin (fat tissue) is used. When the transducer, with gel applied, is pushed against the skin, the absence of an air gap, together with the small difference between the acoustic impedances of the gel and the skin, results in minimal reflection at the skin surface. Consequently, most of the ultrasound wave energy penetrates to scan the deep organs.

Example 2

Calculate the percentage of the ultrasound waves that are reflected at the junction between the skull bone and the brain. Hence calculate the percentage of the ultrasound waves that will reach the brain.

Solution

Ir/Io = [Z2 – Z1]² / [Z2 + Z1]²
Z1 = acoustic impedance of bone = (1.75 × 10³) × (3.72 × 10³) = 6.51 × 10⁶ rayls
Z2 = acoustic impedance of brain = (1.04 × 10³) × (1.52 × 10³) = 1.58 × 10⁶ rayls
Ir/Io = [(1.58 × 10⁶) – (6.51 × 10⁶)]² / [(1.58 × 10⁶) + (6.51 × 10⁶)]² = 0.371 = 37.1%

Therefore the percentage of the ultrasound that penetrates through to the brain is:
1 – Ir/Io = 1 – 0.371 = 0.629 = 62.9%
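Both worked examples can be reproduced with a small function for Ir/Io. The sketch below is illustrative; the impedances are built from the Table 18.1 values used in the examples.

```python
# Fraction of incident intensity reflected at a boundary:
# Ir/Io = (Z2 - Z1)**2 / (Z2 + Z1)**2
def reflected_fraction(z1, z2):
    return (z2 - z1) ** 2 / (z2 + z1) ** 2

z_air   = 1.25 * 330         # acoustic impedances, Z = rho * v (Table 18.1)
z_fat   = 953 * 1.48e3
z_bone  = 1.75e3 * 3.72e3
z_brain = 1.04e3 * 1.52e3

print(f"air/fat:    {reflected_fraction(z_air, z_fat):.3f}")    # ~0.999 (Example 1)
print(f"bone/brain: {reflected_fraction(z_bone, z_brain):.3f}") # ~0.371 (Example 2)
```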


The clinical significance of this example is that when trying to visualise the brain with ultrasound, 37.1% (or even more, given that skull bone can have a higher density than the value in Table 18.1) of the ultrasound waves are reflected at the junction between the skull bone and the brain. On the way back towards the transducer, another 37.1% of the returning echo is reflected at the same junction. The consequence is that only a small portion of the ultrasound waves is received by the detector for image formation; hence ultrasound scans produce poor images of the brain. This concept extends to all body organs covered by bone, for instance the lungs, which are covered by the ribcage. As an ultrasonographer would say, 'The bone casts shadows over the soft tissue organs.'

18.6 Different types of ultrasound scans

■■ Describe the situations in which A-scans, B-scans and sector scans would be used and the reasons for the use of each

So far, the production of the ultrasound waves and the rules that determine their reflection and penetration have been described. This section will discuss how the reflected ultrasound waves from a target organ can be analysed by the computer system to form images. In summary, different types of images can be formed by decoding the received ultrasound waves in different ways. These include A-scans and B-scans, with the B-scans forming the basis for linear scans and sector scans.

A-scan (amplitude scan)

In an A-scan, the reflected ultrasound waves are displayed using wave amplitudes (hence amplitude scan), which are plotted as a function of time. A typical A-scan image is shown in Figure 18.5.

Figure 18.5  An A-scan image of a body organ


In an A-scan, the amplitude of the peaks provides information about the nature of the organ, whereas the position of the peaks provides information about the location and dimension of the target organ. As shown in Figure 18.5, as the ultrasound wave hits the near-side body wall, a small portion of it reflects. The detector (the piezoelectric material inside the transducer) receives it and converts it into an electrical signal, which is displayed by the computer as a small peak. The ultrasound wave then reaches the near surface of the target organ. Assuming more of the ultrasound wave is reflected at this interface, a larger peak will be displayed. A similar process takes place when the ultrasound wave reaches the far surface of the organ and the opposite body wall. Essentially, the amount of reflected ultrasound—which depends on the type of organ being scanned (some organs produce more reflection than others due to a larger acoustic impedance difference)—determines the displayed amplitudes. The distance between the peaks is related to the delay between each subsequent reflected ultrasound wave, and this depends on the distance between each of the interfaces. Thus the distances between the peaks are a good estimate of the dimension as well as the position of the organ. Because A-scans provide only one-dimensional information along the beam line and are difficult to interpret, they are not commonly used clinically. A-scans are occasionally used to measure the internal dimensions of the eye, for instance, the distance from the lens of the eye to the retina, which can be shortened when there is a tumour growing on the retina.


Note: The distance between the wave peaks relates to the dimension of the target organ.

Note: A-scan is also known as A-mode.
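Since the peak spacing encodes distance, converting an echo delay into a depth is a one-line calculation. A minimal sketch, assuming a typical soft-tissue sound speed of about 1540 m/s (the delay values are illustrative, not from the text):

```python
C_TISSUE = 1540.0  # typical speed of sound in soft tissue, m/s (assumed)

def distance_from_delay(delay_s: float) -> float:
    """Distance corresponding to a delay between two A-scan peaks.
    Halved because the pulse travels to the interface and back."""
    return C_TISSUE * delay_s / 2

# Delay between the echoes from the near and far surfaces of an organ:
dt = 65e-6 - 39e-6  # seconds (illustrative)
print(f"organ dimension ~ {distance_from_delay(dt) * 100:.1f} cm")  # ~2.0 cm
```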

B-scan (brightness scan)

In a B-scan, the reflected ultrasound waves are displayed as fluorescent dots. The more intense the reflection, the brighter the dot; a minimal reflection is displayed as a grey or black dot. The separation of the dots, just like the separation of the peaks in an A-scan, depends on the dimension and position of the target organ. This is summarised in Figure 18.6. Importantly, B-scans form the basis of linear scans and sector scans.

As shown in Figure 18.6 overleaf, the reflected ultrasound wave is displayed using fluorescent dots. The more intense the reflected wave, the brighter (whiter) the dot; less reflection corresponds to a grey dot.

Note: The scale used to measure the brightness of the dots is known as the grey scale. The brightest dot correlates to the most intense reflection, whereas non-fluorescence (a dark or black dot) correlates to no reflection. Various levels of greyness exist in between the extremes.

[Margin figure: Simple atlas of the eye, labelling the three coats of the eyeball (sclera, choroid, retina), the ciliary body, suspensory ligaments, iris, cornea, pupil, lens, fovea, blind spot, optic nerve, aqueous humour, vitreous humour, conjunctiva and the upper and lower eyelids]


Figure 18.6  A B-scan image of a body organ


Note: B-scan is also known as B-mode.

Linear scan and sector scan

As shown in Figure 18.7, if the transducer that is producing the B-scan is moved back and forth, many B-scans will be produced in line with the transducer. These B-scans will be adjacent to each other, such that a two-dimensional image of the target organ is created.

Figure 18.7  A linear scan of a body organ, formed by many B-scans


This can either be done by manually moving a single transducer unit back and forth, or by using hundreds of adjacently placed transducer units. Producing ultrasound images by manually moving the transducer unit is an old technology: steady manual movement is technically difficult and the images formed in this way are often poor in quality. In modern clinical practice, it is more common to scan with an array of transducer units (tens to hundreds of units placed side by side) so that hundreds of B-scans are produced adjacent to each other without manually moving the transducer units. This improves the quality of the images formed and lengthens the longevity of the probe.

If the transducer units are placed side by side in a straight line, the ultrasound waves will be emitted parallel to each other. This produces a rectangular image with its width equal to the width of the array of transducer units. This is known as a linear array or linear scan and is shown in Figure 18.8 (a). (The image it produces is like the one shown in Figure 18.7.)

If, however, the transducer units are placed next to each other in a convex fashion, the emitted ultrasound waves will spread out, so that a sector-shaped image will be produced. This is known as a curved array or sector scan, and is shown in Figure 18.8 (b).

Figure 18.8  (a) A linear array or linear scan: the transducer units inside the probe are placed in a straight line, emitting parallel ultrasound waves to produce a rectangular image field

Phase scans [Not in syllabus]

Phase can be added to both the linear scan and the sector scan to improve the quality of the images produced. For the sake of simplicity, adding phase to a linear scan is discussed here. In a linear phased scan, ultrasound waves are sequentially emitted by each of the transducer units with a small delay between each emission. The delays are controlled by the computer and are adjusted when necessary. Consider Figure 18.9, a probe with five transducer units (in reality, there will be over a hundred): in (a), transducer unit 1 is first to emit an ultrasound wave. After a small delay, unit 2 then emits an ultrasound wave. This is followed by the third, the fourth and the fifth transducer unit. The overall result is that the wavefront of the ultrasound produced is steered to one side of the probe. For the next round of emissions, the delays are decreased; when the delays are reduced to zero, the ultrasound wavefront returns to the neutral position. This is shown in Figure 18.9 (b). After this, the reverse happens. The delays are restored, but with transducer unit 5 emitting ultrasound first and unit 1 last. This will cause the wavefront to be steered

Figure 18.8  (b) A curved array or sector scan: the transducer units are placed in a curved line, producing a sector-shaped image field



Figure 18.9  (a) Linear phased scan: ultrasound is emitted sequentially from transducer unit 1 to 5; this causes the wavefront to be steered to the right

Figure 18.9  (b) Linear phased scan: ultrasound is emitted by the transducer units with no time delay; the wavefront returns to the neutral position

Figure 18.9  (c) Linear phased scan: the sequence is reversed, ultrasound is emitted sequentially from transducer unit 5 through to 1; this causes the wavefront to be steered to the left

to the opposite side. This is shown in Figure 18.9 (c). If the above sequences are repeated rapidly, the emitted ultrasound waves will sweep from side to side, producing a sector-shaped image field. For this reason, a linear phased scan is also known as an electronic sector scan (because the sector-shaped image field is produced electronically using phase rather than mechanically using a curved array of transducer units). Similarly, phase can also be added to sector scans to improve the image field.

Note: Wavefronts are lines that join the corresponding points on adjacent waves; usually these are either crests or troughs.
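The firing delays themselves follow from simple geometry: for the wavefront to be steered by an angle θ, each unit must fire Δt = d sin θ / c after its neighbour, where d is the spacing between units and c the speed of sound. A minimal sketch (the element spacing and steering angle are illustrative; the text does not quote values):

```python
import math

C_TISSUE = 1540.0  # speed of sound in soft tissue, m/s (assumed)

def firing_delays(n_units: int, spacing_m: float, steer_deg: float) -> list[float]:
    """Delay for each transducer unit; reverse the list to steer the other way."""
    dt = spacing_m * math.sin(math.radians(steer_deg)) / C_TISSUE
    return [i * dt for i in range(n_units)]

# Five units (as in Figure 18.9), 0.3 mm apart, steering 20 degrees:
for unit, delay in enumerate(firing_delays(5, 0.3e-3, 20), start=1):
    print(f"unit {unit} fires at {delay * 1e9:.0f} ns")
```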

Adding phase to linear scans or sector scans does more than improve the image field; it improves the quality of the images produced in other ways. For example, the use of phase enables electronic focusing, so that structures at various depths can be focused electronically and all viewed with clarity. Phase can also generate echoes that return from the target organ at different angles. The echoes are then analysed to form a compound image of the target organ, which improves the quality of the image. Electronic focusing and electronic compounding are quite difficult concepts and are not required for this course. Both the linear scans and the sector scans, with or without phase, are also collectively known as standard pulse-echo ultrasound in clinical medicine. This is because when performing these scans, a new ultrasound wave is only sent after the echo of the previous wave is received.

Real-time and three-dimensional ultrasound

Both the sector scans and the phase scans produce two-dimensional grey scale images that can be displayed using either a monitor or printed films. One example is shown in Figure 18.10. When the images produced are refreshed at a fast enough rate (say 30 images per second, displayed using a computer monitor), the temporal


sequence of the images can then be appreciated so that movements can be visualised (like motion pictures). This is known as real-time ultrasound imaging. In recent years, three-dimensional ultrasound imaging has been developed. A three-dimensional ultrasound image can be produced by computed summation of many adjacent parallel two-dimensional images. Three-dimensional ultrasound images provide more detail about the anatomy of the body parts than two-dimensional images.

Figure 18.10  An ultrasound image of a foetus in the uterus

18.7  The clinical uses of ultrasound

■ Gather secondary information to observe at least two ultrasound images of body organs

secondary source investigation: physics skills 12.3a, d; 13.1b

Ultrasound scans are used as a diagnostic tool in a variety of medical specialties. Some examples of the body organs or pathologies that can be imaged using (pulse-echo) ultrasound include:
1. Cardiology and vascular surgery—to look at the function and the anatomy of the heart or the blood vessels. This is discussed in the Doppler ultrasound section later in this chapter.
2. Gynaecology—to look for an ectopic pregnancy (pregnancy outside the uterus) or ovarian cysts.
3. Obstetrics—to look for the position and the state of the foetus and to guide amniocentesis. Amniocentesis is a technique used to take a fluid sample from the sac around the foetus so that tests can be performed to detect any abnormality of the growing foetus. The insertion of the needle is guided by ultrasound in order to visualise and avoid injuries to the foetus. See Figure 18.10.
4. Endocrinology—to scan the thyroid gland for thyroid cysts or thyroid tumours.

Note: The thyroid gland is a gland in front of the voice box, just below the skin. It produces thyroid hormone, which controls the rate of the body’s metabolism.

A thyroid gland containing multiple small cysts


5. Gastroenterology—to detect gall bladder stones.

A normal gall bladder

Ultrasound of a gall bladder showing a gallstone

6. Renal—to determine the size of the kidneys and the ureters (ureters are the tubes that connect the kidneys to the bladder) or to detect kidney stones.

A normal kidney

Ultrasound of a kidney showing a kidney stone

7. Ultrasound is very useful for diagnosing soft tissue injuries such as tears in muscles and tendons.

Ultrasound scans are particularly useful in detecting cysts, stones or tears in soft tissues. Cysts contain fluids, and fluids (water) have very different acoustic impedance compared to surrounding tissues. Thus at the cyst–tissue boundary, a large portion of the ultrasound waves will be reflected, making the cyst stand out. The same principle applies to stones, which also have very different acoustic impedance compared to the surrounding tissues. Injuries such as tears in the muscles and tendons result in fluid collections at the torn sites; just like cysts, these fluids show up well on ultrasound. Ultrasound cannot be used to visualise organs beyond any bony tissues, such as the brain, because much of the ultrasound wave is reflected at the bone–tissue interface. It follows that in order to perform an ultrasound scan of the heart, the probe must be placed between the ribs to avoid reflection of the ultrasound waves by the ribs.

Extended activity

You are required to gather and observe ultrasound images of body organs. This chapter has provided ultrasound images of a variety of body organs. You may wish to obtain more images: you could ask your GP, or search the internet, medical journals or textbooks. Local hospital ultrasound departments may also be contacted. Observing the ultrasound images, note and describe the following:
■ What features would help you to identify these images as ultrasound images? Describe the colour, resolution and contrast of these images.


■ What organs are shown and what pathologies (diseases, problems, etc., if any) are present? Why is ultrasound done for this particular pathology?
■ Are ultrasound images easy or difficult to interpret?

The advantages and disadvantages of using ultrasound as a diagnostic tool

Advantages

■ Ultrasound scans do not use ionising radiation; therefore they are safe and have no side effects. This is particularly important in obstetrics, where the foetus has to be viewed. A developing foetus is very susceptible to the ionising radiation used in X-rays and CT scans, and both of those scans are contraindicated (not recommended) during pregnancy. Ultrasound scans, on the other hand, have no harmful effect on the growing foetus and are therefore safe to use during pregnancy.
■ Ultrasound scans are one of the cheapest forms of medical imaging available.
■ Ultrasound scans show soft tissue quite well compared to X-rays and are excellent for detecting cysts and stones.
■ Ultrasound machines may be made small and portable. These days, an ultrasound machine can be fashioned with a laptop design.

Disadvantages

■ Certain body parts cannot be scanned using ultrasound. These include organs that are covered by bony tissues, such as the brain and the lungs.
■ The resolution of ultrasound scans is low compared to other scan methods. Compare the head of the foetus shown using ultrasound (Figure 18.10) to the CT image of the brain (in Chapter 19)—the ultrasound image is poor in quality.
■ Ultrasound scans are performed by ultrasonographers, who need to manipulate the probe to acquire the desired images. Therefore the quality of the ultrasound images produced, and their interpretation, are largely operator dependent.

Doppler ultrasound

■ Describe the Doppler effect in sound waves and how it is used in ultrasonics to obtain flow characteristics of blood moving through the heart

Doppler ultrasound employs the Doppler effect to study the movement of blood in the body.

Definition

The Doppler effect is defined as the apparent change in the frequency of sound when the source of the sound is moving relative to its receiver.



Note: It is the relative motion between the sound source and receiver that creates the Doppler effect. Between a moving source and a stationary receiver there will be a Doppler shift as there would be between a stationary source and a moving receiver. On the other hand, no Doppler effect will occur if the source and the receiver are both moving at the same speed and in the same direction.

When the source and the receiver are moving apart relative to each other, the frequency of the sound decreases and the wavelength increases. When the source and the receiver are moving closer to each other, the frequency of the sound increases and the wavelength decreases. This is shown in Figure 18.11.

Note: In both cases, the speed of sound remains constant.

Note: Similar behaviours are observed for electromagnetic radiation and are known as the red shift and blue shift respectively.

Figure 18.11  The Doppler effect: note that λ1 is greater than λ2


Useful website

The Doppler effect: http://cat.sckans.edu/physics/doppler_effect.htm

How the frequency changes (whether it increases or decreases) is an indication of the relative motion between the sound source and the receiver; whereas the magnitude of the change correlates to the magnitude of the relative velocity between the source and the receiver.


■ Describe the Doppler effect in sound waves and how it is used in ultrasonics to obtain flow characteristics of blood moving through the heart

What is the new technology and how does it compare with the old technology?

Doppler ultrasound is a technique that combines the established ultrasound procedures with the Doppler effect that arises when a wave is reflected off a moving target. The ability to combine the two sets of information simultaneously provides information about the speed of blood flow through critical arteries within and near the heart. Abnormal blood flow, including restrictions, blockages, dilations and other conditions, can be imaged with Doppler ultrasound. The additional information gained from an analysis of the frequency shift of the reflected sound wave is used to measure the velocity of the blood flowing through the heart. The physics involved—a combination of improved transducers (using piezoelectric technology) and application of the Doppler effect when analysing the reflected signal—has been critical in the development of this technology. Older forms of ultrasound technology provided much less detail and no information about the motion of blood within the heart. Other forms of diagnosis, possibly involving radioisotopes, were required to provide information about blood flow in the heart.

PFA H3: 'Assesses the impact of particular advances in physics on the development of technologies'

Assessment of the impact of advances in physics on the development of this technology

Knowledge of the behaviour of waves, particularly with reference to reflected waves from moving objects, has been essential to the development of Doppler ultrasound techniques. Computing speed and power to interpret the data and produce images in real time also play a critical role in this medical technology. Microprocessors capable of handling this task have become available at a suitable price and size only in recent years. Any assessment of the impact of the advances in physics on the development of Doppler ultrasound must take into account how essential these advances are to the technology.

Useful websites


About vascular ultrasound: http://www.radiologyinfo.org/en/info.cfm?pg=vascularus
Information about piezoelectric transducers and more detail about ultrasound: http://www.ndt-ed.org/EducationResources/CommunityCollege/Ultrasonics/EquipmentTrans/piezotransducers.htm


18.8  Doppler ultrasound as a diagnostic tool

■ Describe the Doppler effect in sound waves and how it is used in ultrasonics to obtain flow characteristics of blood moving through the heart
■ Outline some cardiac problems that can be detected through the use of the Doppler effect
■ Identify data sources and gather information to observe the flow of blood through the heart from a Doppler ultrasound video image

PFAs: H3; physics skills: H11.1e, H12.3a, b

Figure 18.12  (a) Doppler shift of the ultrasound wave as it is travelling towards the red blood cell

To understand how Doppler ultrasound can be used as an imaging tool to study blood flow, consider the simplified representation shown in Figure 18.12 (a), where an ultrasound wave is emitted by the transducer unit and is directed towards a blood vessel in which a red blood cell is flowing towards the transducer. At this moment, the red blood cell, which can be viewed as the 'receiver', is moving closer towards the sound source. This results in an increase in the frequency of the ultrasound wave proportional to the speed of the moving red blood cell. After the ultrasound strikes the red blood cell and is reflected off it (shown in Figure 18.12 (b)), the red blood cell acts as the 'source', which is again moving closer towards the 'receiver'—the transducer. This results in a further increase in the wave frequency. The conclusion for this scenario is that the ultrasound wave, after reaching the red blood cell and then being reflected back to the transducer, will have undergone two increases in frequency. As a consequence, the detected echo (reflected ultrasound) will have a higher frequency compared to the original wave. This difference enables the computer to calculate the speed at which the red blood cell is moving towards the transducer unit, thereby providing information about the blood flow. A similar but opposite process takes place when the red blood cell is moving away from the transducer. The frequency of the ultrasound wave will undergo two reductions, resulting in a lower frequency of the detected echo. The reduction in frequency will be interpreted by the computer as blood flowing away from the transducer unit, and the magnitude of the change will be used to calculate this receding speed.
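The double frequency shift described above is usually summarised by the standard Doppler equation Δf ≈ 2f₀v cos θ / c, where f₀ is the transmitted frequency, v the blood speed, θ the angle between the beam and the flow, and c the speed of sound in tissue; the factor 2 is exactly the 'shifted twice' argument. A minimal sketch with illustrative values (none are quoted in the text):

```python
import math

def doppler_shift(f0_hz: float, v_blood: float, angle_deg: float,
                  c_tissue: float = 1540.0) -> float:
    """Two-way Doppler shift of an echo off moving blood (v << c).
    Positive for flow towards the transducer, negative for flow away."""
    return 2 * f0_hz * v_blood * math.cos(math.radians(angle_deg)) / c_tissue

f0 = 5e6   # 5 MHz transducer (illustrative)
v = 0.5    # blood speed, m/s (illustrative)

print(f"{doppler_shift(f0, v, angle_deg=60):+.0f} Hz")  # ~ +1620 Hz: an audible shift
```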

Types of Doppler ultrasound and image display


There are two ways in which Doppler ultrasound can be emitted and received to provide information about blood flow. One is to emit and receive the ultrasound continuously, known as the continuous Doppler mode. The other is to emit and receive the ultrasound in discrete pulses, similar to the pulse-echo grey scale imaging technique described, known as the pulse Doppler mode. The continuous Doppler mode provides information about


blood flow either in an audio form or visually as a spectral display (discussed later). The pulse Doppler can provide the information in an audio form, visually as a spectral display, or as two-dimensional, colour-coded images. In modern practice, Doppler ultrasound (mainly the pulse Doppler) is combined with the standard pulse-echo grey scale ultrasound so that the anatomical information and blood flow can be studied simultaneously using real-time images.


The continuous Doppler mode

For the continuous Doppler mode, there are two separate pieces of piezoelectric material within a single probe. One piece sends out a continuous ultrasound wave at a frequency of 2–10 MHz and the other continuously receives echoes from the moving blood cells. The received echoes will show a Doppler shift, the extent of which depends on the speed of the moving blood cells. These echoes are converted into electrical signals, which are electronically subtracted from the electrical signals that generate the original wave. Such a difference is often in the audible frequency range and can be used to drive a loudspeaker. The sound produced can be listened to and the pattern of the blood flow evaluated by the clinician. The difference in frequency can also be displayed visually after being processed using a fast Fourier transform. Such a display is known as a spectral display; it plots the level of Doppler shift (frequency) as a function of time (see Fig. 18.13). A wall filter is employed in both cases to filter out the much lower frequency Doppler shift produced by the moving tissues, to minimise interference. The downside of the continuous Doppler mode is that it detects all Doppler shifts along the entire scan line (along the entire depth the sound wave can reach). Therefore, when there are many blood vessels close to each other—for instance, an artery and a vein next to each other—they cannot be analysed separately and many false signals will be produced. A similar difficulty is encountered when attempting to examine the blood flow through the heart. Clinically, continuous Doppler can be used as an audible device to listen for blood flow through an artery, which can be important when the

Figure 18.12  (b) Doppler shift of the ultrasound wave as it is being reflected off the red blood cell

Figure 18.13  A spectral display of Doppler ultrasound


arterial pulse is impalpable and clots in the artery are suspected. It may also be used to detect the blood flow through the foetal heart as a way of monitoring a foetus's well-being, in antenatal clinics as well as during labour.

Note: Foetal heart sounds, unlike those of adults, are difficult to hear using a stethoscope.

The pulse Doppler mode

For the pulse Doppler mode, ultrasound is sent out as discrete pulses. In other words, one pulse is sent and its echo is detected before the second pulse is sent. The Doppler shift can be analysed in an audio form or using a spectral display, similar to those used in the continuous Doppler mode. Pulse Doppler can also display the Doppler shift using two-dimensional, colour-coded images. Red is used to denote blood flowing towards the transducer unit, whereas blue is used to denote blood flowing away. The redder or bluer the colour, the faster the blood is flowing through the vessel. The colour Doppler is shown in Figure 18.14. The main advantage of this mode is its ability to selectively study the blood flow at a single depth (if necessary). This may be the lumen of a single blood vessel or a particular valve within the heart chamber. This is possible because, knowing the speed of the sound wave in a particular tissue, the time of arrival of an echo from a known depth can be determined. Thus any echo arriving outside this time (e.g. echoes from other blood vessels along the scan line) is filtered out and will not be displayed. This is known as range gating. In order to select a correct range gate for the returning echo, the accurate depth, and hence the anatomy, must be known. Therefore pulse Doppler is often combined with standard grey scale ultrasound in a scanning device known as the duplex scan. The sector or phase scan displays the anatomy of the blood vessel, including its depth, which provides the information needed for the correct selection of the range gate.
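Range gating is simple to express in code: knowing the speed of sound, an echo from depth d arrives after t = 2d/c, and anything outside a narrow window around that time is discarded. A minimal sketch (the depths and window width are illustrative, not from the text):

```python
C_TISSUE = 1540.0  # assumed speed of sound in soft tissue, m/s

def arrival_time(depth_m: float) -> float:
    """Round-trip travel time of an echo from a given depth."""
    return 2 * depth_m / C_TISSUE

gate_centre = arrival_time(0.05)  # gate on echoes from ~5 cm deep
gate_width = 2e-6                 # 2 microsecond acceptance window

def accept_echo(arrival_s: float) -> bool:
    return abs(arrival_s - gate_centre) <= gate_width / 2

print(f"gate centred at {gate_centre * 1e6:.1f} us")  # ~64.9 us
print(accept_echo(arrival_time(0.05)))  # True: echo from the gated depth
print(accept_echo(arrival_time(0.08)))  # False: echo from another vessel on the line
```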


Figure 18.14  Colour Doppler: red (a) is blood flowing towards the transducer unit, whereas blue (b) is blood flowing away


The use of Doppler ultrasound in cardiovascular medicine

Duplex scans are frequently performed in cardiology and are also known as echocardiograms. The standard sector or phase scan displays the anatomy of the heart, including its muscle walls and valve architecture. Such information is displayed using a grey scale. At the same time, the Doppler ultrasound provides information about the blood flow through the heart and is often displayed using colour codes. These are shown in Figure 18.15 (a) and (b) respectively. The blood flow may also be listened to or analysed quantitatively using the spectral display. The spectral display enables the analysis of blood flow at a specific location, such as the aortic valve, by selecting a specific range gate based on the anatomy seen on a grey scale display. An example is shown in Figure 18.15 (c). Furthermore, modern duplex scans are capable of refreshing many images (30 or more) per second, so that the movement of the heart as well as the dynamic nature of the blood flow can be seen. As mentioned before, these are known as real-time images (like a video). Echocardiograms are useful for diagnosing many common cardiac (heart) pathologies. The real-time grey scale images are useful for assessing heart muscle movements, which may be impaired due to scarring from a previous heart attack. The colour Doppler images can detect abnormal blood flow through the heart due to a malfunctioning valve, whether as a result of stenosis (narrowing) or regurgitation (abnormal backward flow of blood through an incompetent valve). The spectral display can measure the pressure gradient across a diseased valve, providing quantitative information about the severity of the valvular stenosis.

Figure 18.15  (a) A grey scale ultrasound image of the heart

Figure 18.15  (b) The grey scale ultrasound image combined with colour Doppler

Figure 18.15  (c) A spectral display for analysing blood flowing through a valve


Duplex ultrasound scans (Fig. 18.16) are also frequently used to assess the level of blood flow through an artery and are extremely valuable in assessing the degree of narrowing in the target artery.

Figure 18.16  A duplex scan showing an incompetent mitral valve—mitral valve regurgitation

Ultrasound as a tool for measuring bone density

secondary source investigation: PFAs H3; physics skills H11.1e, H12.3a, H12.4f, H14.1a, b, g, h

■ Identify data sources, gather, process and analyse information to describe how ultrasound is used to measure bone density

About a decade ago, it was proposed that ultrasound could assess bone density to screen for osteoporosis. Osteoporosis is a medical condition, usually in elderly people, in which there is a gradual loss of bone minerals. This results in brittle bones, which are prone to fractures. Osteoporosis can cause very serious fractures in the elderly, with a very high associated mortality rate. Although osteoporosis is not reversible once diagnosed, it is to some extent preventable, so early detection and treatment are important. The detection of osteoporosis relies on an accurate measurement of bone (mineral) density.

When using ultrasound to assess bone density, the patient is asked to place his or her foot into a warm bath. An ultrasound wave produced by a transducer unit passes through the heel bone of the foot and is detected on the opposite side. The speed as well as the attenuation of the ultrasound at various frequencies (known as broadband ultrasound attenuation) are calculated by the computer attached to the detector. These values are compared to age-specific mean values so that the quality of the bone can be analysed to detect osteoporosis. Such an ultrasound bone density scanner is small, and hence can be made mobile. The scanner is also cheap and is available at local pharmacies. The scanner does not use harmful ionising radiation, and therefore is safe. Consequently, it was thought that such a device would be an ideal choice for diagnosing osteoporosis. However, research has shown that the 'density' measured by ultrasound is not as clinically useful—for instance, in determining the likelihood of fractures—as the bone mineral density measured by traditional dual energy X-ray absorptiometry (DEXA).

Note: DEXA assesses the bone mineral density using X-rays. The device passes low energy X-rays through the vertebra and hip bones. The absorption of the X-rays is measured and from that the bone mineral density is calculated. This method has a very high sensitivity and specificity in diagnosing osteoporosis.

Because of the lack of accuracy, ultrasound bone density scans are becoming less popular in today’s medical practice. Patients who are suspected to have osteoporosis based on their risk factors are sent directly to have a DEXA scan. Most doctors see this practice as more economical overall and argue that a small amount of ionising radiation from DEXA will not do significant harm.


chapter revision questions

1. Using wavefronts, draw a sound wave that contains two wavelengths. Label on the diagram compression, rarefaction and wavelength.
2. (a) What are piezoelectric materials?
   (b) Give two examples of piezoelectric material.
   (c) What are the roles of piezoelectric materials when used in ultrasound scanners?
3. (a) Calculate the percentage of the ultrasound waves that will be reflected when they reach the junction between muscle and fat tissues. You may wish to refer to Table 18.1.
   (b) Calculate the percentage of the ultrasound waves that will pass through the junction between the liver and the fat tissues.
4. What are A-scans? What types of information do A-scans provide?
5. What are B-scans? What is the relation between B-scans and sector scans?
6. Lisa presented to her GP with a lump in the neck. After examining her, the GP decided that such a lump was likely to be related to the thyroid gland. As well as performing some blood tests, the GP ordered an ultrasound scan of the thyroid gland.
   (a) What thyroid problems are best shown using ultrasound and why?
   (b) Give one reason why ultrasound may be potentially better than other imaging methods in this situation.
7. Jane was six months into her pregnancy. Her obstetrician performed an ultrasound scan of her uterus to examine the position of the foetus. Why is ultrasound the investigation of choice for examining a foetus during pregnancy?
8. List two other clinical uses of ultrasound.
9. Define the Doppler effect.
10. What application does the Doppler effect have in clinical medicine?
11. (a) How is ultrasound used in a duplex scan?
    (b) What clinical uses does a duplex scan have?
12. A decade ago, ultrasound was thought to have the ability to measure bone density.
    (a) Describe how ultrasound can be used to measure bone density.
    (b) Why did ultrasound bone density scans lose favour in recent years?



CHAPTER 19

X-rays, computed axial tomography and endoscopy

The physical properties of electromagnetic radiation can be used as diagnostic tools

Introduction

As discussed in Chapter 18, sound waves can be used to form images for diagnostic purposes. In this chapter, the use of electromagnetic radiation (EMR) for medical imaging is studied. The chapter will discuss the use of X-rays in the form of plain X-ray films, as well as in computed axial tomography. The use of visible light in endoscopies will also be examined.

19.1  The nature of X-rays

■ Compare the differences between 'soft' and 'hard' X-rays

X-rays are high-frequency EMR, lying between gamma rays and UV rays in the EMR spectrum, which is shown in Figure 19.1. X-rays are very energetic as a result of their high frequency (E = hf) and are therefore also very penetrative. There are two types of X-rays: hard X-rays and soft X-rays. Hard X-rays are those that have higher frequencies. As a result, they are able to produce high-resolution pictures and are therefore preferred for medical imaging.

Note: A high-frequency wave, due to minimal diffraction, will enable the differentiation of two very closely placed points, allowing their visualisation and hence increasing the resolution of the image. This property is common to all EMR. For instance, a microscope using violet light can resolve smaller objects than one using ordinary white light, due to the higher frequency of the violet light.

Figure 19.1  The EMR spectrum

[Figure: the EMR spectrum, from smallest frequency/longest wavelength to highest frequency/shortest wavelength—low-frequency radio waves, high-frequency radio waves, microwaves, infrared, visible light (red, orange, yellow, green, blue, violet), ultraviolet (UV), X-rays, gamma rays]

Soft X-rays, on the other hand, are X-rays that have lower frequencies. Lower frequencies result in lower penetrating power, which makes soft X-rays unable to pass through the body tissues to reach the X-ray film to produce an image (see later in this chapter for the formation of X-ray images). Furthermore, lower frequencies also lead to lower resolution. Consequently, soft X-rays are not useful for medical imaging.


19.2  The production of X-rays for medical imaging

■ Describe how X-rays are currently produced

X-rays used for medical imaging are produced in a device called an X-ray tube. A schematic drawing of an X-ray tube is shown in Figure 19.2. In the X-ray tube, X-rays are produced in two ways.

Figure 19.2  An X-ray tube is used to produce X-rays for diagnostic purposes (labelled parts include the cathode with its filament for thermionic emission and cathode shield, the electron beam, the target anode mounted on copper with coolant circulating in and out, the lead radiation shield, the glass envelope and the thin glass window through which the X-ray beam exits)

The bremsstrahlung (braking) radiation


The majority of the X-rays produced by the X-ray tube are derived from the deceleration of the electrons as they hit the anode (more correctly, as they are bent around the nuclei of the target atoms). Electrons are first produced at the cathode via thermionic emission. The thermionic emission frees the electrons from the surface of the cathode so that they can be easily accelerated towards the anode by the high voltage applied across the two electrodes. The fast-moving electrons are then decelerated as they collide with the anode. As a consequence of the collisions, the electrons lose portions of their kinetic energy, which are converted mainly into heat energy (98%–99%) and X-rays (1%–2%)—the law of conservation of energy. The X-rays produced in this way are given the name bremsstrahlung (braking) radiation. The frequency of the X-ray produced relates directly to the energy of the X-ray. The bremsstrahlung radiation has a continuous spectrum of frequencies as a result of variations in the kinetic energy the electrons possess, as well as in the proportions that are converted into X-rays. The maximum frequency, and hence energy, of the X-rays produced depends on the size of the voltage supplied to the X-ray tube. A graph can be used to demonstrate a spectrum of X-rays produced using an X-ray tube (see Fig. 19.3).

Figure 19.3  Spectrum of X-rays produced by an X-ray tube: the continuous part of the line represents bremsstrahlung radiation; whereas the spikes represent characteristic X-rays

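The dependence of the maximum frequency on tube voltage can be made quantitative (the text states it only qualitatively): an electron that converts all of its kinetic energy eV into a single photon gives f_max = eV/h. A minimal sketch with an illustrative tube voltage:

```python
E_CHARGE = 1.602e-19  # electron charge, C
H_PLANCK = 6.626e-34  # Planck's constant, J s

def f_max(tube_voltage_v: float) -> float:
    """Maximum X-ray frequency: all of one electron's kinetic energy (eV)
    converted into a single photon, f_max = eV / h."""
    return E_CHARGE * tube_voltage_v / H_PLANCK

print(f"{f_max(100e3):.2e} Hz")  # ~2.42e19 Hz for a 100 kV tube (illustrative)
```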


The characteristic radiation

Characteristic radiation or characteristic X-rays are produced via a different mechanism. When electrons from the cathode reach the anode, they may strike the inner electrons of the target atoms and knock them away from their usual positions. This process leaves vacancies in the lower energy shells, which are in turn filled by electrons falling from outer energy shells and, in so doing, emitting energy in the form of X-rays (only for large atoms). In contrast to the braking radiation, these X-rays are characteristic of the metal used for the anode. In other words, different materials used for the anode will produce different characteristic X-rays. These are seen as spikes on the X-ray spectrum, as illustrated in Figure 19.3.

The features of an anode

The anode is the target that the electrons strike and in which they decelerate. During the deceleration process, huge amounts of heat are produced. The features adopted in the anode design are all aimed at withstanding the heat as well as dissipating it quickly and efficiently.
■ First, tungsten metal is used as the material for the anode target. Tungsten has the highest melting point of all metals, which is essential for withstanding the heat.
■ The tungsten metal is placed at an angle to the incoming electrons in order to increase the surface area that interacts with these electrons. Consequently, the heat is distributed over a larger area, which in turn decreases the amount of heat experienced per unit area of the tungsten metal. This prevents melting of the metal.
■ The tungsten metal rotates at a speed of 3600 revolutions per minute so that different parts of the tungsten metal are exposed to the incoming electrons at different times. This allows the heat to be distributed evenly throughout the entire tungsten metal.
■ Last, the tungsten metal is mounted on a piece of copper to help conduct the heat away from the tungsten. Furthermore, a coolant, often oil, is made to circulate through the anode to help carry away the excess heat.
Overall, this method of producing X-rays is extremely inefficient, as the majority of the electrons' kinetic energy is converted into heat and only a very small amount is used to produce X-rays. The huge amount of heat can cause the entire X-ray tube to melt down; therefore, the cooling system of the anode needs to be operating efficiently at all times.

19.3  Using X-rays for medical imaging: the principle

The first step in X-ray imaging is to produce the X-rays using an X-ray tube. This happens when the radiographer turns on the X-ray machine after the patient is positioned in a particular posture, depending on the organ or system being examined. In order to produce good-quality X-ray images, the X-ray beam produced needs to be narrow, and this is achieved by using a lead collimator. Furthermore, because soft X-rays only produce low-resolution images and have poor penetrating power, they are not useful for medical imaging and are filtered out after their production. This also reduces the amount of X-rays the body is exposed to. The soft X-rays can be filtered using a thin sheet of aluminium, while the hard X-rays penetrate through it to image the body.


The principle of the image formation is that as the X-rays pass through the body, they are attenuated (absorbed) by the tissues in the body differently. Some tissues—for example, the bones—attenuate X-rays more than other tissues—for example, muscle and fat—whereas air minimally attenuates X-rays. The consequence is that different amounts of X-rays pass through the body, with a pattern determined by the nature of the tissue structures encountered, to reach the X-ray film. The arriving X-rays expose the film, similar to casting a shadow. X-rays that have been attenuated the most will not expose much of the film, leaving the film whitish when developed. On the other hand, X-rays that are minimally attenuated will expose the film maximally, making the film appear dark. Of course, there will be all shades of grey in between: for example, bone will appear whitish on a plain X-ray picture, air in the lungs will appear dark, and fat and soft tissues will appear grey. These are shown in Figures 19.4 and 19.5 (overleaf). The organ or system to be examined should be placed as close as possible to the film so that the image produced will have a sharp edge and minimal magnification.
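The chapter treats attenuation qualitatively; a standard quantitative model (not introduced in the text) is exponential attenuation, I = I₀e^(−μx), with the coefficient μ much larger for bone than for soft tissue and nearly zero for air. A minimal sketch with illustrative coefficients:

```python
import math

def transmitted_fraction(mu_per_cm: float, thickness_cm: float) -> float:
    """Fraction of the X-ray beam transmitted through a uniform layer."""
    return math.exp(-mu_per_cm * thickness_cm)

# Illustrative attenuation coefficients, not measured values:
for tissue, mu in [("bone", 0.5), ("muscle", 0.2), ("air", 0.0001)]:
    print(f"{tissue}: {transmitted_fraction(mu, 5.0):.2f} of the beam transmitted")
# Bone transmits least (whitish film); air transmits almost everything (dark film)
```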

An X-ray machine

Analogy: The way X-rays are used to produce images is very similar to the way light casts shadows (except that X-rays can penetrate through flesh, whereas light cannot). If an object blocks out light completely, a definitive shadow is produced. If, on the other hand, an object only blocks out a proportion of the light, a grey shadow is created.

Analogy: The way X-ray images are displayed using X-ray films is similar to the negatives of normal photos. Maximum exposure to light turns the film dark, whereas non-exposure to light leaves the film white (clear).

In recent years, X-ray films have been less frequently used as they have gradually been replaced by computer technologies. Today, in most metropolitan hospitals, instead of displaying the image of a body part by exposing an X-ray film, the image data can now be collected using a computer system and the same image is recreated and displayed on the screen. This system makes the viewing of the images more convenient and enables the adjustment of contrast while viewing, as well as avoiding the need to store hard copies of X-rays films, which can take up a lot of space.

19.4  The clinical uses of X-ray imaging

Viewing the skeletal system

Figure 19.4  A typical chest X-ray

Because bones attenuate X-rays well, they create sharp shadows on X-rays films. Recall that with a conventional X-ray film (also known as a plain X-ray film), bones appear white. Also, the high frequency of the X-rays will result in a high-resolution image. A plain X-ray film of the bones can show a high level of structural details of the bones that no other medical imaging can achieve. It is therefore the investigation of choice for assessing bone diseases or problems. Common bone pathologies diagnosed by X-rays are fractures (see Fig. 19.5 overleaf), dislocations and tumours of the bone. Plain X-ray films are also useful for assessing joint pathologies, such as osteoarthritis, a condition that involves


wearing-out of the cartilages that line the articulating bones, leaving bare bone-to-bone contact. Cartilage does not attenuate X-rays well, and therefore is not seen on X-ray films. Thus a normal joint will appear to have a 'gap' (where the cartilage exists) between the articulating bones on X-ray films. In osteoarthritis, this 'gap' is reduced (due to the loss of the lining cartilage). The articulating bones also look frail. See Figures 19.6 (a) and (b). An extension of using X-rays for bone imaging is the imaging of teeth, which have a similar composition to bones. X-ray images of teeth can be useful for assessing the growth of wisdom teeth in adolescents, to determine whether they are growing in the correct orientation or will require surgical intervention.

Figure 19.5  An X-ray film of the ankle bones, showing a fracture in the tibia and fibula

Figure 19.6  (a) An X-ray film of a normal joint

Figure 19.6  (b) An X-ray film of a joint with osteoarthritis

Gathering X-ray images

secondary source investigation: physics skills 12.3a, b, d; 13.1c, e


■ Gather information to observe at least one image of a fracture on an X-ray film and X-ray images of other body parts

You must search for X-ray images that show fractures. After acquiring a few X-ray images that show fractures, think about the following points.
■ What are the sources of these X-ray images?
■ What purposes do these images serve? Is it possible to identify the clinical scenario that is associated with each of the X-ray images?
■ What are the features that would help to identify these images as plain X-ray films? Describe the colour, resolution and contrast of these images.
■ What bones are shown in each of these images?
■ Where is the fracture in each of the images? Try to describe it.
■ You may wish to search for the following terms to increase your knowledge of fractures: 'transverse fractures', 'oblique fractures', 'spiral fractures', 'segmental fractures', 'comminuted fractures', 'buckle fractures' and 'greenstick fractures'. Also look up 'displaced' and 'rotated' fractures. Do any of these terms apply to the fractures shown in the films?


Chest examination

The other very common use of X-ray imaging is for the examination of the lungs and heart. Many patients presenting to the emergency department have a chest X-ray. The lungs are filled with air, which minimally attenuates X-rays; consequently, they form a good contrast with the surrounding tissues. The heart, which is positioned between the lungs, will also have its shadow outlined by the lungs. A normal chest film is shown in Figure 19.7 (a). You can see a clear lung field as the X-rays pass through the air in the lungs with minimal attenuation. Small markings in the lung field are the bronchi and arteries in the lungs and are normal. The heart shadow is seen between the lungs. Many problems show up clearly on a chest film as they often disrupt the normal aeration of the lungs. These problems include infection in the lungs (pneumonia), lung cancer, lung abscess, lung collapse, and many more. An example is shown in Figure 19.7 (b). The heart shadow allows the heart's size and orientation to be assessed, which may be used as screening criteria for certain cardiac diseases.

Figure 19.7  (a) A normal chest X-ray film showing clear lung fields

Stones

Some stones are calcified, which means they contain calcium. This allows them to attenuate X-rays similar to bones so that they will appear whitish on X-ray films. These stones are termed radiopaque. Examples include calcified kidney stones or gallstones. However, some stones are not calcified, and hence may be missed if only plain films are ordered. These include a great proportion of gallstones and some kidney stones; they are termed radiolucent.

The digestive system

The digestive system, including the stomach and bowel, is not usually seen on plain X-ray films due to the poor contrast between soft tissues. However, sometimes the bowel can be visualised using X-rays—for example, when the bowel is blocked due to certain pathologies, air accumulates inside the bowel, which creates a contrast between the bowel and the rest of the abdominal content and hence enables its visualisation. This is the basis by which bowel obstruction can be diagnosed using a plain abdominal X-ray film. This is shown in Figure 19.8.

Figure 19.8  (a) A plain abdominal X-ray film showing the normal appearance of intra-abdominal organs

Figure 19.8  (b) A plain abdominal X-ray film showing bowel obstruction; note the air accumulated inside the (small) bowel

Figure 19.7  (b) A chest X-ray film that shows diffuse pulmonary pathology



The bowel can be visualised better using a contrast medium, one that has the ability to attenuate X-rays. A common example is barium sulphate. This is taken orally by the patient, and it passes through the gastrointestinal tract. As it attenuates X-rays, it makes the bowel radiopaque (which will appear whitish) and allows its visualisation. This method of imaging can be used to assess swallowing, the anatomy of the oesophagus (food pipe), stomach and small intestine (see Fig. 19.9). Nevertheless, contrast X-ray images of the gastrointestinal tract are now largely replaced by computed axial tomography (discussed later in this chapter).

Arterial blood flow

Blood flow through an artery can also be studied using X-ray imaging after introducing an iodine-based contrast medium into the artery. Such an imaging technique provides information about blockages or dissection of an artery.

Figure 19.9  An X-ray contrast material outlining the gastrointestinal tract

A further note on gathering: You must search for X-ray images of other body parts. Some examples are provided in this section. When viewing these images, try to recognise the features that identify them as X-ray films. Think about why these body parts show up on X-ray films. What pathologies are shown in each of these images?

Advantages of X-ray imaging

■ X-ray imaging is one of the cheapest imaging methods available at a hospital, and is therefore often ordered as the first line of investigation to establish an initial diagnosis.
■ X-rays are able to produce high-resolution images of bones and of tissues that contain air, due to their high frequency. X-ray films are the investigation of choice for detecting bone and lung problems. (These organs cannot be visualised using ultrasound.)

Disadvantages of X-ray imaging

■ X-rays are harmful as they cause ionisation and damage to body tissues. Nevertheless, the amount of X-rays used for a plain X-ray film is minimal, and health side effects are also minimal if X-rays are used only occasionally. However, X-rays should not be used for pregnant women, as the growing foetus is susceptible to even a trace amount of ionising radiation.
■ The main limitation of X-ray imaging is its poor ability to differentiate between different types of soft tissues. All soft tissues, including muscles, tendons and skin, attenuate X-rays to a similar extent. When viewed using X-rays, they will all appear grey, and hence cannot be differentiated.
■ X-ray imaging only produces two-dimensional images, which may make interpretation of certain pathologies difficult. For instance, when a round lesion is seen on a chest film, it is difficult to ascertain whether the lesion is in front of or inside the lung. Fortunately this problem may be solved by taking X-ray pictures from different angles, such as from the side.


19.5  Computed axial tomography

As mentioned already, the main limitation of X-ray imaging is its lack of ability to differentiate between different types of soft tissues. Clinically, it is therefore difficult or nearly impossible to distinguish between adjacent soft tissue organs—such as the muscles from the skin, the liver from the stomach, or the bowel from the bladder—as they all attenuate X-rays to a very similar extent. To help overcome this problem, an imaging technique known as computed axial tomography (CAT or CT) was developed. The basic principle of a CT scan is to use a computer to reanalyse the attenuated X-rays as they pass through the body and use this data to electronically reconstruct images of the target organ. Since CT relies on computer reconstruction, it is far more sensitive in distinguishing small differences in the attenuation of the X-rays as they pass through the body. This allows differentiation of different types of soft tissue organs. CT reproduces the images of the body organ as slices, oriented either horizontally or vertically.

19.6  The functional principle of CT scans

■ Explain how a computed axial tomography (CAT) scan is produced

A photograph of a typical CT machine is shown in Figure 19.10 (a) and a schematic diagram of the machine is shown in Figure 19.10 (b). The patient who is to have the scan lies on a table and is moved slowly into the gantry. As the section of the body to be scanned passes through the gantry, cross-sectional images (slices) of it will be produced. The gantry is the doughnut-shaped structure that mounts an X-ray tube. The X-ray tube produces a beam of X-rays that pass through the body and are attenuated as they encounter different types of tissues. Finally, the attenuated X-rays arrive at the detector on the other side of the gantry, which measures the intensity of the arriving X-rays. From this, the level of attenuation of the X-rays can be calculated, and the data is fed through to the computer for analysis. In a first generation CT machine, the X-ray tube is made to rotate one degree at a time, with a corresponding movement of the detector in the opposite direction, and the same process is repeated until the entire 180 degrees is completed. See Figure 19.10 (b). Every time the X-ray tube rotates, the X-rays pass through the body at a slightly different orientation; this is essential for generating all the necessary data for the image reconstruction of the slice of the body (section) being scanned.

Figure 19.10  (a) A CT machine

Figure 19.10  (b) A schematic drawing of a CT machine: the X-ray tube emits X-rays as the scanner rotates around the body; the scanning drum is rotated 360 degrees; the X-ray beam passes through the body to an X-ray detector that records the intensity of the transmitted X-rays; a movable bed allows any part of the body to be scanned


As the patient is moved progressively through the gantry, many other slice images of the body section are created. In order to understand how a CT scan is able to reconstruct a slice image of the body from the X-ray attenuation data collected by the detector, one may consider a simple scenario, as shown in Figure 19.11. Figure 19.11 shows a square cylinder that has a hollow cross inside. Imagine this square cylinder were placed inside the gantry: the CT would have to reconstruct the cross-sectional image of this square cylinder for the slice that is inside the gantry—a two-dimensional cross.

Figure 19.11  A square cylinder containing a hollow cross

Inside the gantry, X-rays are made to pass through the body slice being scanned from different angles (the entire 180 degrees). When these X-rays are detected and analysed, they will all be resolved into two vector components, one coming vertically down and the other horizontally across. For the cylinder, assume the X-rays will be analysed in five vertical columns and five horizontal rows (this is grossly simplified). Because the solid parts of the cylinder attenuate X-rays more than the hollow cross, and the X-rays coming down vertically through columns 1 and 5 will not encounter any hollow parts, these two columns will have the maximum attenuation and will be assigned the value '5'. On the other hand, the X-rays that pass through the middle (column 3) will encounter the greatest number of hollow parts (3 parts) and therefore will be attenuated the least. This column is assigned an attenuation value of '2' (5 minus 3). A similar principle applies to the X-rays passing through columns 2 and 4: both are assigned a value of '4'. The same process applies to the X-rays passing through the horizontal rows. These are summarised in Figure 19.12 (a). The next step is to add up the values of the X-ray attenuation so that each little square has a number that is the sum of the values assigned to the vertical column and the horizontal row that make up this square. This is shown in Figure 19.12 (b). Note that the little squares represent the pixels of CT images, which are the smallest image-forming units. A typical CT will have 512 × 512 pixels for each image. Finally, colours are assigned to these squares. In this case, if black

Figure 19.11  A square cylinder containing a hollow cross; the slice inside the gantry is the cross-sectional image that needs to be reconstructed

Figure 19.12  (a) The X-rays passing vertically down and horizontally across the square cylinder are resolved and analysed in five columns and five rows

Figure 19.12  (b) The values of the columns and rows are added to give each little square a sum value

Figure 19.12  (c) Colours are assigned (1–6 = black, 7–10 = white); the hollow cross is reconstructed


In this case, if black is to represent hollowness, then the squares with values of '1' through '6' are made black, whereas those with values of '7' through '10' are made white to represent the solid part. The result is that a two-dimensional hollow cross is reconstructed, as shown in Figure 19.12 (c). A real organ has a more complicated shape than the hollow cross, so more pixels (more columns and rows) are required. Although the above example illustrates the principle of reconstructing a cross-sectional image by computer, reconstructing images of real body organs requires far more data to be processed. Furthermore, a typical CT scan produces hundreds of slices, so the volume of data to be processed by the computer is enormous. Frequently, CT cross-sectional images are interpreted by a radiologist (a doctor who specialises in reading medical images) who has an extensive knowledge of the anatomy of the body and is able to mentally reconstruct a three-dimensional representation of the body parts. Occasionally, in order to assist the doctor in making a more accurate assessment of the anatomy, these horizontal slices may also be stacked vertically by the computer to electronically recreate three-dimensional images. One example is shown in Figure 19.13. This imaging technique is useful for assessing complex pathologies such as fractures of the facial bones, for which a good reconstruction is based on an accurate knowledge of the pattern of the damage.
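The hollow-cross example above can be verified numerically. The short Python sketch below is an illustration only (it is not how real CT software works): it encodes the 5 × 5 slice, forms the column and row attenuation values, sums them for each pixel as in Figure 19.12 (b), and applies the '1–6 = black, 7–10 = white' rule to recover the cross of Figure 19.12 (c).

```python
import numpy as np

# The 5 x 5 slice: 1 = solid material, 0 = hollow (the cross shape).
phantom = np.array([
    [1, 1, 1, 1, 1],
    [1, 1, 0, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 0, 1, 1],
    [1, 1, 1, 1, 1],
])

# Attenuation along each vertical column and horizontal row:
# the more solid cells a beam crosses, the larger the value (max 5).
col_attenuation = phantom.sum(axis=0)   # [5, 4, 2, 4, 5]
row_attenuation = phantom.sum(axis=1)   # [5, 4, 2, 4, 5]

# Each pixel is assigned the sum of its column value and its row value,
# as in Figure 19.12 (b).
sums = col_attenuation[np.newaxis, :] + row_attenuation[:, np.newaxis]

# Colour assignment from Figure 19.12 (c): 1-6 -> black (hollow), 7-10 -> white.
reconstruction = np.where(sums <= 6, "black", "white")
print(sums)
print(reconstruction)
```

Running the sketch prints 'black' exactly at the five cells that form the cross, reproducing the figure.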

Figure 19.13  A three-dimensional CT image

The use of a radio-contrast medium

Radio-contrast materials, like those used for plain X-ray films, can also be used in CT scans to enhance the contrast between tissues, enabling certain organs to be visualised more precisely than without contrast. Examples include visualising a brain tumour or lymph nodes inside the abdomen.

The modern CT machines

In first generation CT machines, as described above, the image formation process was extremely slow. This is because the X-ray tube needed to rotate one degree at a time and pause after each rotation to emit and detect the X-rays. The speed of the machine was further limited by the speed at which the computer could analyse the data and reconstruct the images. Modern CT machines are much faster. Most hospitals use third generation CT machines, which employ an X-ray tube that emits a fan-shaped X-ray beam. Such a beam passes through a wider sector of the body and hence fewer rotations of the X-ray tube are needed to complete the scan. Sixty-four detectors are placed on the opposite side of the gantry to receive this fan-shaped X-ray beam simultaneously. Furthermore, the gantry also has 64 arrays of these detectors along the longitudinal axis, so that 64 slices of the body section can be reconstructed simultaneously with one pass of the X-rays. (The X-rays therefore also fan out in the longitudinal axis, making more of a cone shape.) The development of fast computer systems enabled more voluminous data handling and more rapid data processing, so that the speed

A contrast CT image of the abdomen: note that the bowel lights up as it contains contrast material


of image reconstruction was also greatly enhanced. Modern CT scans are so fast that the patient can be moved through the gantry in a stepless, smooth motion and the images are produced at the same time without any delay.

SR  A simple atlas of the human brain

19.7  The clinical uses of CT scans

■■ Describe circumstances where a CAT scan would be a superior diagnostic tool compared to either X-rays or ultrasound

CT scans are used in a variety of clinical settings to visualise diseased body parts. Indeed, with the appropriate use of a radio-contrast medium, CT scans can help to diagnose problems in almost all body systems. CT scans are commonly used for scanning the following.

Brain

A horizontal slice CT image of a normal brain

A CT scan, unlike a plain X-ray film, is good for visualising both the skull bone and the brain itself. The brain and its structure can be visualised well by CT because its computer system is able to analyse the small differences in the attenuation of the X-rays as they pass through brain substances. These differences are otherwise too small to be differentiated on a plain X-ray film. CT scans are used for diagnosing strokes (when the brain is injured by having been deprived of blood flow or by a haemorrhage from a blood vessel that supplies the brain). It is also good for detecting brain abscesses and brain tumours. Recall that the adult brain cannot be visualised using ultrasound. This is because at the bone and tissue interface, significant amounts of ultrasound waves are reflected, leaving only a small proportion to reach the brain.

Lungs

Although plain X-ray films are adequate (in most cases) for diagnosing simple lung problems, more subtle lung problems are better assessed using CT scans. Examples include pulmonary fibrosis (scarring of the lungs) and lung malignancies. Also, CT scans are more accurate than plain X-ray films in measuring the size of any abnormal fluid collections around the lungs (pleural effusions). This is important as it guides treatment. Ultrasound cannot be used to assess lung problems. Why is that? (See Chapter 18.)

A horizontal slice CT image showing bleeding compressing the brain (arrow)

Abdominal organs


CT scans are frequently used for examining the digestive system, including the oesophagus, stomach, small bowel, large bowel, liver, gall bladder, spleen, pancreas, kidneys, bladder, and so on. The use of the contrast material is not essential as the computer can easily analyse small differences in the attenuation of the X-rays as they pass through various organs. Nevertheless, the use


A coronal section of a normal chest

A cross-section of a normal chest

A coronal section of an abnormal chest

A cross-section of an abnormal chest showing fluids collecting around both lungs

of contrast will further enhance the view of certain organs or certain diseases. Some of the common intra-abdominal illnesses that can be diagnosed quite accurately using CT are malignancies, diverticulitis (a condition that involves infection of abnormal pockets protruding from the wall of the large bowel), appendicitis, pancreatitis (inflammation of the pancreas), and many more. Because of their ability to differentiate soft tissues and the high level of detail they can provide, CT scans have replaced plain X-ray imaging for diagnosing almost all gastrointestinal pathologies.

Soft tissues

CT scans may be used to detect muscle tears as well as tendon or ligament ruptures. Recall that soft tissues cannot be visualised using plain X-ray films, which cannot resolve the small differences in the attenuation of the X-rays. Although ultrasound is able to visualise soft tissues, it produces low-resolution images that can make interpretation a challenging task. Nevertheless, ultrasound is cheap and does not expose the patient to the harm of X-rays. It is suitable for diagnosing soft tissue injuries that are superficial and do not have complex structural details. For deeper and more complicated soft tissue injuries, CT scans are superior. However, as discussed in Chapter 21, magnetic resonance imaging is the investigation of choice for soft tissue injuries.


A coronal section of a normal abdomen

A cross-section of a normal abdomen

A coronal section of the abdomen showing a tumour in the liver (arrow)

A cross-section of the abdomen, showing three tumours in the liver (arrows)

The uses of CTs are vast and CT scans are now employed in many hospitals as the preliminary imaging technique to establish a diagnosis. The only exception is when the diagnosis can be established accurately either using ultrasound scans or plain X-ray films, which are cheaper and safer to use.

Evaluation of the use of CT scans

■■ Because they can differentiate between different types of organs and tissues, CT scans are used as a standard diagnostic tool to establish an initial diagnosis for almost all body systems.
■■ CT scans are more expensive than ultrasound scans and plain X-ray films, so cost-effectiveness needs to be considered when using them. Nevertheless, CT scans are still considerably cheaper than the other imaging methods discussed later.
■■ A CT scan exposes the patient to more X-rays (approximately 40 times more) than a plain X-ray film, and is thus more likely to do harm. CT scans are not advised for pregnant women and should not be repeated too often.


Observing and comparing CT scans

secondary source investigation
PFAs: H3
physics skills: H12.3 a, b; H12.4 f; H14.1 a, c, e, g, h

■■ Gather secondary information to observe a CAT scan image and compare the information provided by CAT scans to that provided by an X-ray image for the same body part

You are asked to compare images of a body part produced by a CT scan to those produced by (plain) X-ray imaging. Besides comparing images provided in this text, you may also obtain your own images. Ask your GP whether you can see plain X-ray films and CT scans performed for the same body part (same disease). Your relatives may also have had both scans done for a particular body part (disease). After obtaining these images, compare and comment on the following:
■■ The features of each type of image: colour, resolution and contrast.
■■ X-rays produce two-dimensional images of a body part, whereas CT produces many slices (cross-sectional images) of the same body part. Which image type is easier to interpret? What pathologies (if any) are shown in these images?
■■ Summarise how you can recognise an image as a CT image or a plain X-ray image. A table may be used.

Endoscopy

Endoscopy is a medical imaging technique that involves inserting an optical fibre camera through either a natural orifice or a surgically created opening to examine the interior of a body part. The greatest advantage of endoscopy is that it offers a direct view of the interior of body parts without the need to surgically expose them (cutting the body open). Simple procedures (commonly called key-hole surgeries) can be carried out at the same time as the viewing, so endoscopy differs from other imaging methods in being both diagnostic and therapeutic. In order to understand how endoscopes work, some basic properties of light need to be discussed.

19.8  Refraction

Refraction is a wave property. Although refraction applies to all waves, only light (an EMR) will be discussed in this section.

Definition

Refraction occurs when light travels from one medium into another and the change in its speed is accompanied by a change in its direction.

Simulation: Reflection and refraction

The law of refraction: qualitative

Qualitatively, the law that governs the refraction of light may be summarised as: When light travels from a less dense medium to a denser medium or when its speed reduces as it passes from one medium to another, its pathway bends towards the normal. See Figure 19.14 (a).


Figure 19.14  (a) Refraction of light: when light travels from a less dense medium into a denser medium, its speed reduces and it bends towards the normal

When light travels from a denser medium to a less dense medium or when its speed increases as it passes from one medium to another, its pathway bends away from the normal. See Figure 19.14 (b).

Note: Both the incident angle and the angle of refraction are measured from the normal, not from the interface between the two media.

Importantly, it is not the change in density of the media that determines the direction of bending of the wave; rather, it is the change in speed of the wave. This is evident when one studies the refraction of a sound wave, which bends away from the normal when it travels from a less dense to a denser medium, because its speed increases!

Critical angle

As light travels from a denser to a less dense medium, its speed increases and it consequently bends away from the normal. Recall from the Preliminary Course that the ratio of the sine of the incident angle to the sine of the angle of refraction is a constant (termed the refractive index). Consequently, when the incident angle is increased, there is an accompanying increase in the angle of refraction. As the incident angle continues to increase, the angle of refraction eventually reaches 90°. At this point the incident angle is referred to as the critical angle. This is illustrated in Figure 19.15.

Definition

Figure 19.14  (b) Refraction of light: when light travels from a denser medium to a less dense medium, its speed increases and it bends away from the normal

The critical angle is the size of the incident angle for which, when refraction occurs, the angle of refraction is 90° to the normal.



Figure 19.15  Critical angle: the angle of refraction equals 90°
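Because the ratio of the sines is the constant refractive index, the critical angle can be computed directly: for light passing from a medium of refractive index n1 into a less dense medium of index n2, sin θc = n2/n1. The Python sketch below applies this relation; the index values used are illustrative assumptions, not values from the text.

```python
import math

def critical_angle(n_dense: float, n_rare: float) -> float:
    """Critical angle (in degrees) for light travelling from a denser
    medium (index n_dense) towards a less dense one (index n_rare).
    At the critical angle the refracted ray grazes the interface (90 deg),
    so Snell's law gives sin(theta_c) = n_rare / n_dense."""
    if n_rare >= n_dense:
        raise ValueError("Total internal reflection needs n_dense > n_rare")
    return math.degrees(math.asin(n_rare / n_dense))

# Illustrative values: a glass block against air, and a fibre core
# against a slightly less dense cladding.
print(critical_angle(1.50, 1.00))   # about 41.8 degrees (glass-air)
print(critical_angle(1.48, 1.46))   # about 80.6 degrees (core-cladding)
```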


Total internal reflection

SR

Following from the previous section, if the incident angle increases further, the angle of refraction would increase beyond 90°, that is, beyond the interface between the two media; the light instead reflects back into the first medium, and total internal reflection is said to have occurred. This is shown in Figure 19.16.

Animation: Total internal reflection

Definition

Total internal reflection occurs when the incident angle is greater than the critical angle, so that the angle of refraction would exceed 90°. As a consequence, no light enters the second medium and all of it is reflected at the boundary back into the first medium.

Although total internal reflection is unusual, in that the reflection is caused by a non-reflective surface and arises as a consequence of the initial refraction, it still obeys the law of reflection:
1. the incident angle equals the angle of reflection
2. the incident wave and the reflected wave both lie in the same plane.

Figure 19.16  Total internal reflection

Note: Incident angle = angle of reflection

19.9  Optical fibres

■■ Explain how an endoscope works in relation to total internal reflection

One of the most useful applications of total internal reflection is optical fibres. Optical fibres are used in many areas, such as telecommunications (cable Internet) and, importantly, medical imaging: they form the essential components of an endoscope. A typical optical fibre consists of a core, a cladding and a sheath arranged concentrically (with the core as the innermost structure), as shown in Figure 19.17 (a). A longitudinal view of the optical fibre is shown in Figure 19.17 (b). The optical fibre operates with the core made denser than the cladding (more correctly, with a higher refractive index) and is engineered so that when light enters the core, it always undergoes total internal reflection. (The core and cladding are engineered to have a very small critical angle.) This allows the light to be bounced along inside the core so that it propagates from one end of the fibre to the other, as shown in Figure 19.17 (b).

Figure 19.17  (a) A cross-sectional view of an optical fibre: the core (innermost), the cladding and the sheath


Figure 19.17  (b) A longitudinal section of the optical fibre showing the propagation of light by total internal reflection

The sheath shields the fibre in order to minimise the entry of light from the external environment, which may lead to interference. Importantly, the transmitted light carries information. If flashes of light are used, and a light flash is interpreted at the receiving end as a '1' and no flash as a '0', then digital data are transmitted; this forms the basis of cable Internet. For digital data transmission, infrared light is often used. Continuous (analogue) visible light may also be transmitted. This transmits visual images that can be either viewed directly or displayed on TV (like a video camera), and it forms the essential part of an endoscope.
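As a toy illustration of the flash-equals-'1', no-flash-equals-'0' scheme just described, the following Python sketch (hypothetical, not from the text) encodes two bytes as a pattern of flashes and decodes them again.

```python
def to_flashes(data: bytes) -> str:
    """Encode bytes as '*' (flash) and '.' (no flash), one symbol per bit."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join("*" if b == "1" else "." for b in bits)

def from_flashes(flashes: str) -> bytes:
    """Decode a flash pattern back into bytes."""
    bits = "".join("1" if f == "*" else "0" for f in flashes)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

signal = to_flashes(b"CT")
print(signal)                 # .*....**.*.*.*..
print(from_flashes(signal))   # b'CT'
```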

19.10  Endoscopes

■■ Explain how an endoscope works in relation to total internal reflection
■■ Discuss differences between the role of coherent and incoherent bundles of fibres in an endoscope

There are many different types of endoscopes with different uses and designs. A typical endoscope used for examining the lumen of the large bowel (a colonoscope) is described here to illustrate the basic structure and principle of an endoscope. A typical colonoscope consists of a pair of illuminating optical fibre bundles, a single objective optical fibre bundle, an air and water channel, and an instrument channel. All are enclosed in a shaft (tube) made from plastic that contains a metal frame. The shaft is strong but at the same time flexible due to the metal frame, and the plastic makes it resistant to bowel secretions. The length of the shaft may be 1–2 m, depending on the endoscope's purpose. A controller is attached to one end of the shaft (the proximal end) and is held by the doctor. The controller allows the doctor to manipulate the shaft so that it can be bent as it passes through the large bowel. The controller also controls the water and air that pass through the scope, and contains an opening for the instrument channel. All are shown in Figure 19.18. The free end (the distal end) is inserted into the target organ to conduct the examination, in this case through the anus.

Figure 19.18  A typical colonoscope, showing the eyepiece; the suction, air/water, up/down and left/right controls; the light, water and suction cable; the control wires; the air and water pipes; the illuminating optical fibre bundles; the objective optical fibre bundle; the instrument channel; and the endoscope tube (shaft)


The illuminating optical fibre bundle: Incoherent bundle

The illuminating optical fibre bundles transmit light from an exterior source into the organ being examined. These bundles are classified as incoherent bundles.

Definition

An incoherent bundle is one in which individual optical fibres are randomly placed alongside each other, so that the fibres are not in the same relative positions at the two ends.

A schematic drawing of an incoherent bundle is shown in Figure 19.19. The consequence of the fibres not being in the same relative positions at the two ends is that a light pattern entering one end of the bundle will be distorted by the time it arrives at the other end. However, this does not matter for illumination, as the bundle's sole purpose is to carry light into the interior of the organ; whether the light pattern is distorted or not will not affect the outcome. Usually an incoherent bundle contains up to hundreds of individual fibres, and the fibres are thick in order to maximise the transmission of light. Incoherent bundles are also cheaper to manufacture.

Figure 19.19  An incoherent bundle: the image of the object is distorted

The objective optical fibre bundle: Coherent bundle

The purpose of the objective optical fibre bundle is to carry the visual images of the organ's interior back to the doctor or the video monitor for viewing. Such a bundle is classified as a coherent bundle. Lenses are placed at the ends of the bundle to produce focused images. A schematic drawing of a coherent bundle is shown in Figure 19.20.

Definition

A coherent bundle is one in which the individual optical fibres are kept parallel to each other throughout their length, so that they are in the same relative positions at the two ends.

The importance of coherence for the objective bundle is that the light beams carrying back the visual images need to be kept in the same relative positions as they travel through the individual fibres, so that the images are not distorted when they reach the observer. If an incoherent bundle were used, the images would be 'jumbled' and could not be interpreted. Compare Figure 19.20 to Figure 19.19. The fibres in a coherent bundle are thin, and thousands of fibres are included in a bundle in order to improve the resolution of the images produced. A coherent bundle is more expensive than an incoherent bundle because of the greater technical difficulty of manufacturing it.

Figure 19.20  A coherent bundle: the image of the object maintains its integrity

Air and water channel

The air channel, as its name suggests, provides a passage through which air can be pumped into the organ to cause inflation so that the organ can be examined more


easily. Water can be pumped through the same or a separate channel to gently flush the organ lumen to help with the examination. It can also be used to wash the objective lens when it fogs up.

Instrument channel

The instrument channel can be used to insert various instruments to perform small operations while the endoscope is inside the organ. The instruments include forceps, biopsy scissors, diathermy, and others. Taking biopsies of tissue, resecting small polyps and using diathermy to stop bleeding can all be performed easily.

19.11  The uses of endoscopes

■■ Explain how an endoscope is used in:
– observing internal organs
– obtaining tissue samples of internal organs for further testing

Depending on its particular use, the structure of an endoscope can be quite different. For instance, an arthroscope (an endoscope that is used to view the inside of a joint) does not have an instrument channel, so instruments, if needed, have to be inserted into the joint through another surgical incision. Also, an arthroscope has a rigid shaft with a much smaller diameter. Some common endoscopes used clinically are described below.

Colonoscope images showing the lumen of a normal large bowel (left) and an abnormal large bowel with a polyp (abnormal growth) (right)

SR

Video: colonoscopy

An arthroscope

SR

Video: arthroscope


Colonoscopes

A colonoscope is used to examine the lumen of the large bowel. The patient is given light sedation so that he or she remains comfortable during the examination. The scope is then passed slowly through the anus and is advanced until the junction between the large bowel and the small bowel is reached. The scope can be used to view pathologies such as tumours and polyps of the large bowel, as well as inflammation of the bowel wall in certain diseases. Biopsies of tissues to further confirm a particular diagnosis are frequently performed.

Arthroscopes

An arthroscope is inserted via a small surgical incision into the joint after the patient has been given a general anaesthetic or sedation. The scope can be used to view joint pathologies such as arthritis and torn ligaments or tendons. If extra surgical portals are created, instruments can be inserted to perform arthroscopic surgery. A common procedure is the repair of ligaments.


Gastroscope showing normal (left) and abnormal (right) digestive tract (ulcer)

SR

Gastroscopes

A gastroscope is structurally similar to a colonoscope. After the patient is sedated, the scope is passed into the patient's mouth. It then passes down the oesophagus to reach the stomach, the duodenum and sometimes the early parts of the small bowel. The scope can diagnose tumours and polyps, ulcers and bleeding spots within the lumen of the oesophagus, the stomach and the early parts of the small bowel. Biopsies are often taken to confirm the diagnosis. Other endoscopic procedures include stopping acute bleeding from a peptic ulcer by injecting chemicals down the scope, and resection of polyps.

Video: gastroscope

SR

Laparoscopes

A laparoscope is a rigid, short endoscope that is inserted into the patient's abdomen via a small incision through the umbilicus (the belly button) under general anaesthetic. Unlike a gastroscope or a colonoscope, which only examine the luminal aspect of the bowel, a laparoscope visualises the exterior aspect of the bowel as well as other abdominal organs such as the liver, the gall bladder, the spleen and any other organs outside the actual gastrointestinal tract. Instrument channels are created separately via further incisions through the abdominal wall. Elaborate equipment is available to carry out many different types of operations, ranging from small and common operations, such as the removal of the gall bladder (for gallstones) or the appendix (for appendicitis), to large operations such as the removal of the stomach or the large bowel (when they are affected by cancer). These operations are commonly known as key-hole surgeries.

Video: laparoscopic surgery

Cystoscopes

A cystoscope is much smaller than other types of endoscopes. The scope is passed through the urethra to reach the bladder and the ureters to diagnose and treat diseases of the urinary tract. Examples include tumours, polyps and stones.

Bronchoscopes

A bronchoscope is passed through the trachea to reach the large and then the small bronchi. It is the investigation of choice for diagnosing lung cancer as well as obscure infections of the lungs. Again, biopsies and interventional procedures can be done.


Observing endoscope images

secondary source investigation
PFAs: H3
physics skills: H12.3 a, b

■■ Gather secondary information to observe internal organs from images produced by an endoscope

The materials included in this section give you an opportunity to appreciate the images of the internal body parts provided by various endoscopes. You may wish to collect more images. Also note that endoscopic examinations can be recorded as videos. How do the endoscopic images differ from images produced by other imaging methods, such as ultrasound, X-rays or CT? Are the endoscopic images easier or more difficult to interpret?

Evaluation of the uses of endoscopes

The invention of endoscopes has revolutionised medical practice. The fact that endoscopes allow a direct view of internal body organs as well as enabling tissue biopsies makes them accurate and reliable imaging methods. Now, although a provisional diagnosis may be made based on CT scans, the final diagnosis or the extent of the disease is often determined after carrying out an endoscopic examination. For instance, bowel cancer may be diagnosed by CT imaging, but in order to determine the pathological nature of the cancer and its spread, a colonoscopy is required. Endoscopes are considered only minimally invasive, as they are inserted either through natural orifices or through small surgical incisions. Before endoscopes were invented, a laparotomy (a vertical cut through the midline of the abdomen to open up the abdominal cavity) may have been needed to establish a diagnosis.

Even more important is the development of endoscopic surgery. More and more operations are now done endoscopically. A common example is the removal of a gall bladder (for gallstones) using a laparoscope and associated instruments. The major advantages of laparoscopic surgery are the lower level of post-operative pain and the quicker recovery time. For instance, a patient undergoing a laparoscopic removal of the gall bladder can usually be discharged from hospital in two to three days with minimal pain and may resume normal daily activities within weeks. For a patient who undergoes a conventional open gall bladder removal (with a large cut under the right ribcage), the hospital stay will be about a week and it takes a few months before the patient can return to his or her full functional capacity. Shorter hospital stays and returning to work sooner help to reduce the cost of the disease to the individual and to society.

However, the invasiveness (even if minimal) of endoscopes is a downside. An endoscopic examination is classified as a medical procedure rather than imaging, as it carries specific risks and the patient needs to give consent. For instance, when performing a colonoscopy, there is a risk that the bowel may be perforated by the scope, in which case the patient may require urgent open surgery to repair the perforation. Lastly, because endoscopes have to be operated by doctors, the cost of endoscopy largely depends on the fees charged by the doctor and can be variable. The results of endoscopic examinations are also largely operator dependent.


Transfer of light by optical fibres

■■ Perform a first-hand investigation to demonstrate the transfer of light by optical fibres

The theory of the transmission of light by optical fibres has been described. You are required to perform an experiment to demonstrate the transfer of light by optical fibres. The procedure, safety precautions and some points for discussion are described here.

Aim

To demonstrate the transfer of light by optical fibres.

Equipment and materials

first-hand investigation PFAs H3, H5 physics skills 12.1a, b, d 12.2b TR

A light source, a ray box, an optical fibre bundle, some paper.

Procedure 1. Obtain a bundle of optical fibres; this may be provided by the teacher (school), or found in

old toys or optical fibre cables that are no longer in use. 2. Observe and describe the optical fibre bundle; are the fibres made from plastics or glass? Measure the length of bundle. 3. Obtain a light source; this can be a light bulb placed inside a ray box. 4. Secure the optical fibre bundle onto the light source; this can be done with tapes. 5. Close the room curtains, turn off the room lights and switch on the light source. Ensure no light escapes where the bundle joins the light source. 6. Demonstrate the transfer of light by viewing the light coming through the other end of the bundle. This can be assisted by placing a piece of paper (preferably black) at the end of the bundle. 7. Bend the bundle to various angles to assess whether bending affects the transmission of the light. 8. Coherence of the bundle may be checked. If a particular light pattern can be created at the ray box (for example, a circle or a square shape), see if the same pattern is recreated at the receiving end—on the piece of paper.

Risk assessment matrix

Safety precautions

1. Optical fibre bundles made from glass may be sharp and may break. Take care when handling them.
2. Do not look straight into the light source, whether at the ray box or at the receiving end. Intense light may damage the eyes.
3. Handle the electrical appliances carefully. Do not handle them with wet hands.
4. If a laser is to be used as a light source, careful supervision is required; students are not allowed to handle the laser source.

Points for discussion

1. If both plastic and glass optical fibres are used in the class, describe their differences in appearance and in their ability to transfer light.
2. Does the bending of the bundle affect the transmission of light? If so, at what angle?
3. Are the fibres coherent or incoherent?
4. Describe qualitatively the resolution of the image produced on the receiving paper.


chapter revision questions

1. (a) What is the EMR spectrum?
    (b) What is the position of X-rays in this spectrum?
2. (a) Define 'hard' and 'soft' X-rays.
    (b) Which type of X-rays is best used for medical imaging and why?
3. Describe how X-rays are produced for medical imaging.
4. What is the principle behind using X-rays to produce images of body parts?
5. Bob was a high school student who had a fall when he was playing basketball. He went to his GP pointing to a painful elbow. His GP performed a quick examination and sent him to have a plain X-ray of the elbow.
    (a) What was the purpose of ordering the X-ray?
    (b) Why didn't the GP order an ultrasound of the elbow?
6. A 40-year-old woman presents to the emergency department with high fever, severe shortness of breath and chest pain. A plain X-ray film of the chest is ordered. What information may be obtained from such an investigation?
7. Describe two advantages and two disadvantages of X-ray imaging.
8. CT scans also use X-rays to produce images of body parts; however, they are superior to plain X-ray films. Why?
9. Describe in detail how CT scans are able to use X-rays to produce slice images of body parts.
10. David presented to his GP with intermittent headaches that had lasted for over a year. His GP ordered a CT scan of the brain (head).
    (a) Name one problem of the brain that can be revealed by a CT scan.
    (b) Why would a CT scan be better than X-ray imaging in detecting brain diseases?
    (c) Could ultrasound be used to try to reveal the brain problem in this case?
11. CT scans have almost replaced X-ray imaging for diagnosing abdominal illnesses.
    (a) What makes CT scans superior to X-ray imaging in detecting abdominal illnesses?
    (b) Can ultrasound be used for diagnosing abdominal illnesses?
12. (a) Describe briefly two other clinical uses of CT scans.
    (b) Describe one disadvantage of CT scans.
13. (a) Describe the law of refraction of light.
    (b) Define the term 'critical angle'.
    (c) Define 'total internal reflection'.
14. Describe in detail how the property of total internal reflection helps the transmission of light through optical fibres.
15. Compare the differences between a coherent and an incoherent optical fibre bundle.
16. John was a 60-year-old man who was provisionally diagnosed with colon cancer based on the images provided by an abdominal CT scan. The doctor advised him to have a colonoscopy.
    (a) What was the purpose of performing a colonoscopy?
    (b) What is the role of the water and air channel in a colonoscope?
    (c) What is the role of the instrument channel in a colonoscope?
17. Name two types of endoscope (other than the colonoscope). For each, describe:
    (a) how the scope is inserted into the body
    (b) what organs or body systems the scope is able to view
    (c) what pathologies can be detected using the scope
18. (a) When doctors propose a 'key-hole surgery to remove the gall bladder', what do they actually mean?
    (b) What are the advantages of performing key-hole surgeries for certain illnesses compared to conventional open surgeries?
19. Describe one disadvantage of using endoscopes.

SR  Answers to chapter revision questions

363

medical physics

CHAPTER 20

Radioactivity as a diagnostic tool

Radioactivity can be used as a diagnostic tool

20.1  Radioactivity

■■ Outline properties of radioactive isotopes and their half lives that are used to obtain scans of organs
■■ Identify that during decay of specific radioactive nuclei positrons are given off

In order to understand how radioactivity can be used as a diagnostic tool, this chapter first briefly examines the nature and behaviour of radioactivity.

Definition

Radioactivity is the spontaneous release of energy or energetic particles from unstable nuclei.

Naturally, there are three types of radioactivity (radioactive decay): alpha (α), beta (β) and gamma (γ). Alpha and beta radiations are particles, while gamma is an electromagnetic radiation (EMR).

Definition

Transmutation is the phenomenon in which one element changes its identity to become another element.

Note: Transmutation can be either natural, through α or β decay, or artificial, for instance through neutron or proton bombardment.

Definition

Examples of radioisotopes

Isotopes are atoms of the same element with different numbers of neutrons; isotopes that may undergo radioactive decay are referred to as radioisotopes.

Alpha () radiation or decay Alpha decay refers to an unstable nucleus breaking down to emit alpha radiation (α). Alpha radiation (particles) are energetic helium nuclei, in other words, helium atoms without their two electrons, and are written as 42 He. An alpha particle has two protons and two neutrons and therefore is doubly positively charged. What happens when alpha decay occurs?

For each alpha particle emitted, two neutrons and two protons (hence four nucleons) are lost. This reduces the mass number of


the radioisotope by four and the atomic number by two. This results in transmutation. A general equation for alpha decay can be written as:

$$^{A}_{Z}X \rightarrow \ ^{A-4}_{Z-2}Y + \ ^{4}_{2}\mathrm{He}\ (\alpha)$$

(The decay results in transmutation.)

Why does alpha decay occur?

As a general rule, unstable elements become more stable through the process of radioactive decay. When alpha decay occurs, the nucleus becomes smaller and more stable. Hence one can conclude that alpha decay occurs for elements that are too 'big'; elements are considered too 'big' if their atomic number is equal to or greater than 83. Some examples include:

$$^{238}_{92}\mathrm{U} \rightarrow \ ^{234}_{90}\mathrm{Th} + \ ^{4}_{2}\mathrm{He}\ (\alpha)$$
$$^{241}_{95}\mathrm{Am} \rightarrow \ ^{237}_{93}\mathrm{Np} + \ ^{4}_{2}\mathrm{He}\ (\alpha)$$

Note: Both $^{238}_{92}\mathrm{U}$ and $^{241}_{95}\mathrm{Am}$ are elements beyond element 83, and are therefore too large to be stable.

Beta () radiation or decay There are two types of β decay: − decay and + decay. β− decay occurs when an unstable nucleus breaks down to emit β− radiations (particles). β− particles are fast-moving electrons and have the symbol of –10e. The β− particles are derived from the conversion of neutrons into protons inside the nucleus; electrons are the other product and are ejected from the nucleus whereas the protons stay within the nucleus: 10 n → 11 p + –10e What happens when − decay occurs?

For each β– particle emitted, a neutron is converted into a proton, so the total number of nucleons, and hence the mass number of the radioisotope, does not change. However, because there is now an added proton, the atomic number increases by one. This again results in natural transmutation. A general equation for β– decay is:

$$^{A}_{Z}X \rightarrow \ ^{A}_{Z+1}Y + \ ^{0}_{-1}e\ (\beta^{-})$$

(The decay results in transmutation.)

Why does β– decay occur?

Through β− decay, a neutron is converted into a proton and the element becomes more stable. Hence one can conclude that β− decay occurs when the atoms have too many neutrons compared to protons, or too few protons compared to neutrons. Generally, for small elements, the neutron–proton ratio should be about 1:1, whereas for larger elements such as uranium, the ratio can be as high as 1.5:1. It is wise to consult the periodic table to determine whether the number of neutrons in a particular atom is too many.


Some examples include:

$$^{14}_{6}\mathrm{C} \rightarrow \ ^{14}_{7}\mathrm{N} + \ ^{0}_{-1}e\ (\beta^{-})$$
$$^{90}_{38}\mathrm{Sr} \rightarrow \ ^{90}_{39}\mathrm{Y} + \ ^{0}_{-1}e\ (\beta^{-})$$
$$^{60}_{27}\mathrm{Co} \rightarrow \ ^{60}_{28}\mathrm{Ni} + \ ^{0}_{-1}e\ (\beta^{-})$$

Note: $^{14}_{6}\mathrm{C}$, $^{90}_{38}\mathrm{Sr}$ and $^{60}_{27}\mathrm{Co}$ all have more neutrons than their stable isotopes listed in the periodic table.

On the other hand, β+ decay occurs when an unstable nucleus breaks down to emit β+ radiation (particles). β+ particles are anti-electrons or positrons and have the symbol $^{0}_{+1}e$. Positrons are the anti-matter partners of electrons. Although they are fundamentally different from electrons, for the purpose of this module they can be seen as electrons that carry a positive charge. The positrons in β+ decay are derived from the conversion of protons into neutrons inside the nucleus:

$$^{1}_{1}p \rightarrow \ ^{1}_{0}n + \ ^{0}_{+1}e$$

What happens when β+ decay occurs?

During β+ decay, the total number of nucleons, and hence the mass number of the radioisotope, remains unchanged. However, since a proton is converted into a neutron, the atomic number decreases by one. A general equation for β+ decay is:

$$^{A}_{Z}X \rightarrow \ ^{A}_{Z-1}Y + \ ^{0}_{+1}e\ (\beta^{+})$$

(The decay results in transmutation.)

Why does β+ decay occur?

In contrast to β– decay, β+ decay occurs when the atoms have too few neutrons compared to protons, or too many protons compared to neutrons. Again, the periodic table should be consulted. Some examples include:

$$^{15}_{8}\mathrm{O} \rightarrow \ ^{15}_{7}\mathrm{N} + \ ^{0}_{+1}e\ (\beta^{+})$$
$$^{18}_{9}\mathrm{F} \rightarrow \ ^{18}_{8}\mathrm{O} + \ ^{0}_{+1}e\ (\beta^{+})$$

Note: $^{15}_{8}\mathrm{O}$ and $^{18}_{9}\mathrm{F}$ both have fewer neutrons than their stable isotopes in the periodic table.
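The bookkeeping rules for the three particle decays above (alpha, β– and β+) can be collected into a small table. The Python sketch below is an illustration only; it simply restates the changes in mass number A and atomic number Z described in the text.

```python
# Changes in (mass number A, atomic number Z) for each decay mode.
DECAY_RULES = {
    "alpha": (-4, -2),   # lose 2 protons + 2 neutrons
    "beta-": (0, +1),    # a neutron becomes a proton
    "beta+": (0, -1),    # a proton becomes a neutron
}

def daughter(mass_number, atomic_number, mode):
    """Return (A, Z) of the daughter nucleus after one decay."""
    dA, dZ = DECAY_RULES[mode]
    return mass_number + dA, atomic_number + dZ

print(daughter(238, 92, "alpha"))   # (234, 90): U-238 -> Th-234
print(daughter(14, 6, "beta-"))     # (14, 7):   C-14  -> N-14
print(daughter(18, 9, "beta+"))     # (18, 8):   F-18  -> O-18
```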

Gamma () radiation or decay Gamma radiation (ray) is the highest frequency EMR in the EMR spectrum. Gamma decay occurs when elements try to discharge excessive amounts of energy from the nucleus. The nucleus would have the excessive amounts of energy usually as a result of some kind of prior disturbance, such as having been bombarded by neutrons from an external source or having previously undergone alpha or beta decay. Gamma radiation is pure energy. By itself it does not cause transmutation. Nevertheless, through gamma decay, the element becomes more stable.


Some examples include:

(a) $$^{60}_{27}\mathrm{Co} \rightarrow \ ^{60m}_{28}\mathrm{Ni} + \ ^{0}_{-1}e$$
and immediately: $$^{60m}_{28}\mathrm{Ni} \rightarrow \ ^{60}_{28}\mathrm{Ni} + \gamma$$ (no transmutation)
Overall equation: $$^{60}_{27}\mathrm{Co} \rightarrow \ ^{60}_{28}\mathrm{Ni} + \ ^{0}_{-1}e + \gamma$$
(m = metastable/excited, indicating that the nucleus has an excessive amount of energy.)

(b) $$^{99}_{42}\mathrm{Mo} \rightarrow \ ^{99m}_{43}\mathrm{Tc} + \ ^{0}_{-1}e$$
and after a while: $$^{99m}_{43}\mathrm{Tc} \rightarrow \ ^{99}_{43}\mathrm{Tc} + \gamma$$ (no transmutation)

Note that for cobalt-60, because the gamma radiation occurs immediately after the beta decay, the two forms of radiation are sometimes said to occur together, and cobalt-60 is described as a co-emitter of beta and gamma radiation. In the second example, however, the gamma decay of technetium-99m is a delayed process. Consequently, technetium-99m is described as a pure gamma emitter and its parent isotope molybdenum-99 as a beta emitter. Nevertheless, the principle of gamma decay is the same in both cases; in particular, gamma decay by itself does not cause transmutation. The physical properties of alpha, beta and gamma radiation are summarised in Table 20.1.

Table 20.1  Properties of the alpha, beta– and gamma radiation

Name | Identity | Charge | Mass (u) | Energy | Ionisation power | Penetration power
Alpha | Helium nucleus ($^{4}_{2}\mathrm{He}$) | +2 | 4.03 | Low | High | Low: travels about 7 cm in air; blocked by a layer of skin or a thin piece of paper
Beta– | Fast-moving electron ($^{0}_{-1}e$) | –1 | 5.48 × 10⁻⁴ (approx. 1/1825) | Medium | Medium | Medium: travels about 1 m in air; blocked by a thin metal sheet
Gamma | Highest-frequency EMR (γ) | 0 | 0 | High | Low | High: penetrates thin metal sheets; blocked by a thick lead sheet or a concrete wall

The radioactivity used in clinical medicine

The radiation used for medical imaging purposes is gamma, whether emitted directly through pure gamma decay (e.g. technetium-99m) or via the annihilation of a matter and anti-matter pair in positron emission tomography (see later). Technetium-99m is the most commonly used radioisotope in nuclear medicine. Gamma rays are penetrative enough to pass through the body to reach the detector. They cause the least ionisation inside the body compared with alpha and beta radiation, and are therefore relatively safer for injection into the


patient. Furthermore, most gamma emitters, in particular, technetium-99m, have very short half-lives (see next section). They will not stay in the body for long after the injection, which further increases their safety for clinical uses.

20.2  Half-life

■■ Outline properties of radioactive isotopes and their half lives that are used to obtain scans of organs

Definition

Half-life is defined as the time needed for half the amount of a given radioisotope to decay, or the time for the intensity of its radiation to decrease by half.

This can be represented using the graph shown in Figure 20.1. As shown in the graph, after the first half-life has elapsed, the amount of the radioisotope has dropped to 50% of the original amount. The amount then drops to 25% and 12.5% after two and three half-lives respectively, and so on. Also from this graph, any halving of values on the y-axis corresponds to a time elapse on the x-axis equal to one half-life, and any time interval one half-life long on the x-axis corresponds to a reduction of the amount of the radioisotope by half.

Figure 20.1  Half-life of a radioisotope: the amount of the radioisotope decreases exponentially (percentage of nuclei remaining versus time in half-lives)
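The halving rule can be written compactly as N = N₀ × (1/2)^(t/t½). The Python sketch below (an illustration, not from the text) applies this rule and the inverse logarithm step; its printed values can be checked against the worked examples that follow.

```python
import math

def remaining(initial: float, half_life: float, elapsed: float) -> float:
    """Amount of a radioisotope left after `elapsed` time,
    using N = N0 * (1/2) ** (t / half_life)."""
    return initial * 0.5 ** (elapsed / half_life)

def half_lives_needed(initial: float, final: float) -> float:
    """Number of half-lives n for the amount to fall from `initial`
    to `final`, from (1/2) ** n = final / initial."""
    return math.log(initial / final, 2)

# Checks against the worked examples below:
print(remaining(6.4, 8.0, 40))          # 0.20 g of iodine-131 after 40 days
print(half_lives_needed(6.4, 2.5e-2))   # 8.0 half-lives, i.e. 64 days
```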

Further example

Example 1

Iodine-131 is a co-emitter of beta and gamma radiation and has a half-life of 8.0 days. A sample of iodine-131 has a mass of 6.4 g.
(a) Write a nuclear equation to describe the decay of iodine-131.
(b) Calculate the mass of iodine-131 remaining after 40 days.
(c) How long does it take for the mass of the iodine-131 to reach 2.5 × 10⁻² g?

Solution

(a) $$^{131}_{53}\mathrm{I} \rightarrow \ ^{131}_{54}\mathrm{Xe} + \ ^{0}_{-1}e + \gamma$$

(b) Forty days is equivalent to five half-lives. Therefore the amount of iodine-131 will be halved five times, hence:
$$\text{Remaining mass} = 6.4 \times \left(\tfrac{1}{2}\right)^{5} = 0.20\ \text{g}$$

(c) Let the number of half-lives needed be n:
$$6.4 \times \left(\tfrac{1}{2}\right)^{n} = 2.5 \times 10^{-2}$$
$$\left(\tfrac{1}{2}\right)^{n} = \frac{2.5 \times 10^{-2}}{6.4} = \frac{1}{256} = \frac{1}{2^{8}}$$
$$n = 8.0$$
Since each half-life is eight days, the time required is eight lots of eight days, that is, 64 days.

Example 2

A sample of Ag-108 has an activity level of 6.4 × 10⁴ Bq. This activity level drops to 2.0 × 10³ Bq after 12 minutes. Calculate the half-life of Ag-108.

Note: Bq stands for becquerel, the SI unit used to measure the activity of a radioisotope.

Solution

Let the number of half-lives needed be n:
$$6.4 \times 10^{4} \times \left(\tfrac{1}{2}\right)^{n} = 2.0 \times 10^{3}$$
$$\left(\tfrac{1}{2}\right)^{n} = \frac{2.0 \times 10^{3}}{6.4 \times 10^{4}} = \frac{1}{32} = \frac{1}{2^{5}}$$
$$n = 5.0$$
This means five half-lives have elapsed in 12 minutes, hence one half-life equals 12 divided by 5, that is, 2.4 minutes.

The significance of half-lives

The length of the half-life of a radioisotope is significant for its use in clinical medicine, and radioisotopes with shorter half-lives are preferred clinically. Any radioisotope can potentially harm the body because of its energy and ability to ionise, which can destroy cellular structures, in particular DNA. Radioisotopes have the potential to induce secondary cancers and may affect pregnancies. A shorter half-life means that the injected radioisotope will disappear from the body soon after the scan is completed, hence minimising harm. The dose of the radioisotope used should also be kept to a minimum. Importantly, radioisotope scans are contraindicated in pregnancy.


20.3  Radioactivity as a diagnostic tool: nuclear medicine

■■ Describe how radioactive isotopes may be metabolised by the body to bind or accumulate in the target organ

The general principle for using radioisotopes to acquire images of a body organ is similar across all sub-types of radioisotope scans. These scanning methods belong to a branch of medicine called nuclear medicine and are performed in the hospital within the department of nuclear medicine by doctors who specialise in this area. The operating principle may be simplified and summarised as follows.

Introducing a radioisotope into the body

Introducing a radioisotope into the body is usually done by injecting the radioisotope into the bloodstream through a vein. The radioisotope can be injected as a free element, but is more commonly injected after being attached to natural biological molecules that the body recognises. These biological molecules attached to a radioisotope are termed radiopharmaceuticals. Radiopharmaceuticals are used to avoid the radioisotope being recognised by the body as a foreign substance, as well as to aid its transport in the bloodstream and its accumulation in the target organ. Different types of biological molecules are used to target different organs, since the body will naturally distribute and accumulate organ-specific molecules in the target organs. Iodine-123 is one of the few radioisotopes that can be injected as a free element, because the body is able to transport free iodine and selectively accumulate it in the thyroid gland. Technetium-99m, the most commonly used radioisotope, cannot be injected into the bloodstream as free atoms because the body would recognise them as foreign and would not transport them. Technetium-99m must therefore be attached to natural biological molecules, the nature of which depends on the body organ targeted.

Circulation and accumulation of the radioisotope

The radioisotope circulates through the body, whether as a free element or bound to other molecules, and, given adequate time, will eventually accumulate in the target organ. The amount of accumulation of the radioisotope is determined by the metabolic activity of the target organ, in other words, how quickly the target organ is processing the nutrients supplied in the blood.

Detection of the radiation

While the radioisotope is in the organ, it continues to emit radiation (gamma rays), which can be detected using a detector outside the body. The detected radiation generates electrical signals, and the signals are fed into a computer to form images of the organ. Such a detector is known as a gamma camera (also known as an Anger camera), which is a modified version of a scintillation counter. A photograph of a gamma camera used in a hospital is shown in Figure 20.2 and a schematic drawing of a gamma camera is shown in Figure 20.3. The side of the gamma camera facing the patient is made of a layer of scintillation crystal, such as sodium iodide, which gives a flash of light every time a beam of gamma rays strikes it. This light then reaches the photomultiplier tubes directly


Figure 20.2  A gamma camera used in a hospital

Figure 20.3  A schematic drawing of a gamma camera, showing the photomultiplier tubes, the scintillation crystal, the lead collimating plate, and gamma rays arriving from the tumour site

behind the crystal. The photomultiplier tubes convert the light into electrical signals via the photoelectric effect. The electrical signals are then fed into the computer system to produce the images of the scanned organ. When a lot of the radioisotope has accumulated in the organ (or in one part of the organ), the organ emits more gamma rays, which results in stronger electrical signals, and consequently 'hot' spots are displayed (brighter or darker depending on the type of scan). When less of the radioisotope accumulates, the organ or part of the organ displays 'cold' spots. Based on this information, a diagnosis of a particular disease can be made. Some examples are given in the next section.
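As a toy illustration of this last step, the Python sketch below (with made-up numbers, not from the text) maps the gamma counts recorded over two regions to 'hot' or 'cold' spots by comparing them with an assumed baseline for normally active tissue.

```python
# Hypothetical counts detected over a fixed scan time for two regions.
counts = {
    "left lobe": 950,
    "right lobe": 180,
}
baseline = 400   # assumed counts for normally active tissue

for region, n in counts.items():
    label = "hot" if n > baseline else "cold"
    print(f"{region}: {n} counts -> {label} spot")
```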

The uses of radioisotope scans

■■ Perform an investigation to compare an image of a bone scan with an X-ray image
■■ Gather and process secondary information to compare a scanned image of at least one healthy body part or organ with a scanned image of its diseased counterpart

Radioisotope scans are extensively used in clinical medicine for diagnostic purposes. Because they focus on the metabolic activity or functional level of an organ, they may detect or diagnose problems that other imaging methods are unable to reveal. However, because the gamma rays may be diffracted and absorbed by the body tissues as they travel towards the camera, the resolution of the images formed is usually poor. Furthermore, because the radioisotope does not get distributed entirely throughout the organ, some detail of the organ may not be shown. As a consequence, radioisotope scans do not provide detailed information about the structure of the organ. There are vast numbers of radioisotope scans and some are more complicated than others. A few common ones are discussed here.

first-hand and secondary source investigation PFAs H3 physics skills H12.3a, b, c, d H12.4c H14.1a, b, e, g, h

Thyroid scan

To perform a thyroid scan, iodine-123 is administered orally. Iodine-123 circulates through the bloodstream to accumulate only in the thyroid gland.


Figure 20.4  (a) A normal thyroid scan

This radioisotope, when accumulated in the gland, continues to emit gamma rays, which are detected by a gamma camera placed just in front of the neck (where the gland is situated). A normal thyroid scan should look like the one in Figure 20.4 (a). Whenever the thyroid is overly active, it will take up more iodine than normal; this is known medically as hyperthyroidism. Accumulation of iodine-123 in the gland leads to more gamma rays being detected by the gamma camera, so the thyroid gland will appear 'hot' on the scan images. Depending on the pattern of the distribution of the 'hot' spots, different diseases may be diagnosed. For example, a uniform increase in uptake may indicate Graves' disease, an autoimmune disease where the thyroid gland is stimulated inappropriately. A patchy increase in uptake may indicate multinodular goitre, an excessive growth of the thyroid gland. When the thyroid gland is underactive (known as hypothyroidism), it will take up less iodine-123, and therefore less radiation will be emitted. Consequently, the images of the gland will show 'cold' spots. Again this can indicate diseases such as thyroiditis (inflammation of the thyroid gland) or even thyroid cancer, where cancerous tissues have replaced normal thyroid tissues and the gland no longer takes up iodine to make thyroid hormone. The thyroid scan is the investigation of choice when a patient is suspected clinically to have either hyper- or hypothyroidism; the scan and some simple blood tests are usually adequate to establish a diagnosis. A thyroid scan is also useful for determining the functional state of a thyroid lump: a lump that is 'hot' is much less likely to be malignant than a lump that is 'cold'.

Figure 20.4  (b) Hyperthyroidism: the thyroid gland is illuminated more than in (a)

Bone scan

To perform a bone scan, technetium-99m labelled polyphosphate molecules (e.g. oxidronate) are injected intravenously. These molecules circulate in the bloodstream and finally accumulate in the skeletal system after two to four hours.


Note: Bones take up phosphate ions as a part of their mineralisation process.

Normal bones have a very low level of metabolism, therefore only a small amount of the technetium-99m labelled polyphosphate molecules should accumulate in them. However, when parts of a bone increase their metabolic rate, an increased uptake (accumulation) is expected; more radiation is emitted and these parts are seen as 'hot' spots. The bone (or parts of it) can become more metabolically active for many reasons: these include fractures, infections (osteomyelitis) or tumour deposits (both primary and metastatic, where cancer has spread from a distant organ). A bone scan is very sensitive in picking up occult fractures (fractures that are not seen on a plain X-ray film), as shown in Figure 20.5. Bone scans are also good for diagnosing osteomyelitis and for assessing the degree of spread of a primary cancer, which provides important information about a patient's prognosis.


Bone scans, just like many other radioisotope scans, are extremely sensitive and may sometimes give false positive findings, for example, a hot spot may simply be due to a bruise and hence not be significant. Also, like many other radioisotope scans, bone scans only demonstrate whether the organ (bone) is overly, normally or underactive. The interpretation of the images and hence the final diagnosis can only be made after collating the clinical history as well as findings from other investigations.

A normal whole body scan

Figure 20.5  (a) A plain X-ray film of a hand, where a fracture cannot be seen

Figure 20.5  (b) A bone scan of the same hand, showing a ‘hot’ spot in one of the hand bones indicating a fracture in that bone

A whole body bone scan showing metastatic cancer in the right hip (circle)

Ventilation and perfusion scan
A ventilation and perfusion scan is used to evaluate lung function. To perform the scan, albumin molecules (a protein found in the blood), labelled with technetium-99m, are injected intravenously so that they circulate through the lungs, hence perfusion. At the same time, aerosols of a radioactive gas such as krypton-81m are inhaled by the patient into the lungs, hence ventilation. The perfusion component of the scan studies the blood circulation through the lungs, whereas the ventilation component studies the air movement in and out of the lungs.

A normal ventilation and perfusion scan

An abnormal ventilation and perfusion scan—note the mismatch between the anterior ventilation and anterior perfusion


For normal lungs, areas that are ventilated are matched with areas that are perfused. If an area of the lung has normal perfusion but poor ventilation, this may indicate an obstruction of the airway, such as by a tumour or, more commonly, by a foreign body such as an inhaled peanut in young children. On the other hand, images showing an area that has normal ventilation but impaired perfusion may indicate a blood clot in the arteries that supply that section of the lung. This is known as pulmonary embolism, which may present clinically with chest pain or breathlessness, and can even cause death. Once again, the results of the ventilation and perfusion scan need to be interpreted in the light of clinical suspicion and the results of other investigations.

Myocardial perfusion scan
A myocardial perfusion scan is performed by injecting a compound known as sestamibi, which forms a complex with technetium-99m. The compound circulates and accumulates in the heart muscle; the amount of accumulation depends on both the volume of the heart muscle and the level of its blood supply. Images of the heart muscle are taken while the patient is exercising and while at rest. A myocardial perfusion scan is valuable in diagnosing heart problems as the cause of chest pain. The scan is able to differentiate between angina (when the heart muscles are temporarily deprived of blood supply) and myocardial infarction (when the heart muscles have been deprived of blood supply for a prolonged period of time and are now dead).

Comparing medical images
When comparing medical images, a list of the properties of the images to be compared should be made first. These include the overall appearance, resolution, colour, contrast, and the range of tissues and pathologies shown. Comparing multiple images obtained from various sources will increase the reliability of the comparison. Medical images may be obtained from the Internet, textbooks, journals and medical imaging facilities (such as hospitals). All information obtained may be recorded in a table format and a summary should be made.

20.4

Positron emission tomography
Positron emission tomography (PET) is a special type of radioisotope scan that uses the positrons emitted from radioisotopes to form images of the target organ.

The positron emitters
As described already, positrons are emitted from radioisotopes that have too many protons compared to neutrons, or too few neutrons compared to protons. Oxygen-15 and fluorine-18 are common examples. In general, positron emitters have very short half-lives, of the order of minutes (see Table 20.2), therefore most of the positron emitters used for PET need to be manufactured on site where the PET scan is performed. Since all positron emitters have fewer neutrons than their stable isotopes, they cannot be produced by neutron bombardment, which would only add more neutrons to the stable isotope. Consequently, positron emitting radioisotopes must be produced using a particle accelerator.

Table 20.2  Half-lives of positron emitters

Radioisotope    Half-life (minutes)
Carbon-11       20.4
Oxygen-15       2.04
Fluorine-18     110
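The practical effect of these half-lives can be checked with the decay relation N = N₀ × (1/2)^(t/t½). Below is a minimal Python sketch; the one-hour delay between production and injection is an illustrative assumption, not a figure from this chapter.

```python
# Fraction of a radioisotope remaining after t minutes,
# from N = N0 * (1/2) ** (t / half_life).
def fraction_remaining(t_minutes, half_life_minutes):
    return 0.5 ** (t_minutes / half_life_minutes)

# Assumed one-hour delay between production and injection:
for name, half_life in [("Oxygen-15", 2.04), ("Fluorine-18", 110)]:
    print(name, fraction_remaining(60, half_life))
# Oxygen-15: ~1.4e-9 remains, so it must be made where it is used;
# Fluorine-18: ~0.69 remains, so a short transport is feasible.
```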


Note: Particle accelerators are devices designed to use electric fields and/or magnetic fields to accelerate charged particles to very high speeds before smashing the particles into a target. Such collisions lead to transmutation. For more detail, see Chapter 17.

The particle accelerator accelerates protons to a high speed to enable them to overcome the electrostatic repulsion force when they are smashed into the (positive) nucleus of the target atoms to cause transmutation. Fluorine-18 is the most commonly used positron emitting radioisotope and is produced via proton bombardment (accelerated using a particle accelerator, e.g. a cyclotron) of oxygen-18 enriched water:

$^{18}_{8}\mathrm{O} + {}^{1}_{1}\mathrm{p} \rightarrow {}^{18}_{9}\mathrm{F} + {}^{1}_{0}\mathrm{n}$

Fluorine-18 is recovered as an aqueous solution of fluoride-18 (H2O/18F–) and can be extracted by ion-exchange chromatography. Oxygen-15 can be produced by bombarding nitrogen-14 with accelerated protons:

$^{14}_{7}\mathrm{N} + {}^{1}_{1}\mathrm{p} \rightarrow {}^{15}_{8}\mathrm{O}$
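Transmutation equations like these must conserve both the mass number (top) and the atomic number (bottom). A minimal Python check, where each species is written as a hypothetical (mass number, atomic number) pair:

```python
# Each species is represented as (mass_number, atomic_number).
def balanced(reactants, products):
    """True if both mass number and atomic number are conserved."""
    return (sum(a for a, _ in reactants) == sum(a for a, _ in products)
            and sum(z for _, z in reactants) == sum(z for _, z in products))

O18, proton, F18, neutron = (18, 8), (1, 1), (18, 9), (1, 0)
N14, O15 = (14, 7), (15, 8)

print(balanced([O18, proton], [F18, neutron]))  # True: 19 and 9 on each side
print(balanced([N14, proton], [O15]))           # True: 15 and 8 on each side
```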

The operating principle of a PET scan
n Discuss the interaction of electrons and positrons resulting in the production of gamma rays
n Describe how the positron emission tomography (PET) technique is used for diagnosis

20.5

To perform a PET scan, the positron emitting radioisotope is first attached to a biological molecule, similar to the radioisotope scans described before. Again, this compound is now called a radiopharmaceutical. Fluorine-18 is the most commonly used positron emitter and glucose molecules are used for attachment. Fluorine-18 attaches to the glucose molecules to form 2-fluoro-2-deoxy-D-glucose (FDG) molecules, which are then injected into the patient intravenously. The molecules circulate in the bloodstream and are concentrated in the target organ, for instance, the brain. The patient lies on a table and is positioned so that the target organ is inside a gantry, which contains many sets of modified gamma cameras. This is shown in Figure 20.6.

While inside the organ, FDG molecules continue to emit positrons. The positrons will only travel a few millimetres before they encounter the very abundant electrons. This encounter between a matter–antimatter pair results in the total annihilation of the mass of the positron and the electron. The annihilated mass is converted into energy in the form of a pair of gamma rays, governed by the equation E = mc². The two gamma rays travel away from each other in opposite directions, perpendicular to the initial direction of motion of the positron and electron, as shown in Figure 20.7 (overleaf).

The gamma cameras mounted on the gantry detect the pairs of gamma rays and the signals are fed to a computer to reconstruct images of the section of the target organ that is placed inside the gantry, hence the term tomography. The patient is then moved slowly through the gantry so that images of the other sections of the organ can be produced.
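The energy carried by each annihilation gamma ray follows from applying E = mc² to one electron (or positron) rest mass. A quick check in Python using standard constants:

```python
# E = mc^2 for one electron (or positron) rest mass.
m_e = 9.109e-31   # electron rest mass, kg
c = 2.998e8       # speed of light, m/s

E_joules = m_e * c**2
E_MeV = E_joules / 1.602e-13   # 1 MeV = 1.602e-13 J
print(round(E_MeV, 3))         # ~0.511 MeV carried by each gamma ray
```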


Figure 20.6  (a) A patient undergoing a PET scan of the brain (b) The resulting image

Figure 20.7  The annihilation of the masses when a positron encounters an electron

In order to reconstruct the images, the position of each of the sources of gamma rays must first be located. This can be done by determining the difference in the time each gamma ray of a pair takes to arrive at the opposite gamma cameras, shown in Figure 20.6. From an off-centre source, the gamma ray travelling towards the closer camera will take less time than the gamma ray travelling towards the more distant camera. Because FDG molecules are distributed through the entire organ, analysing the location of each of the FDG molecules will outline the structure of the organ.

Note: The accuracy of locating the radioisotopes is about 3 to 5 mm.
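The timing idea reduces to a one-line calculation: if one gamma ray arrives Δt earlier than its partner, the annihilation occurred cΔt/2 closer to that camera. A minimal sketch (the 33 ps figure is an illustrative assumption chosen to match the stated 3 to 5 mm accuracy):

```python
C = 2.998e8  # speed of light, m/s

def offset_from_midpoint(delta_t_seconds):
    """Distance of the annihilation event from the midpoint of two
    opposing gamma cameras, given the arrival-time difference.
    The earlier-arriving gamma ray marks the nearer camera."""
    return C * delta_t_seconds / 2

print(offset_from_midpoint(33e-12))  # ~0.005 m, i.e. about 5 mm
```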

The second part of the image formation process is to determine the absolute intensity of the gamma rays produced. This can be done by knowing the intensity of the arriving gamma rays and the attenuation coefficients for gamma rays passing through tissues. The areas that are emitting more intense gamma rays are displayed as ‘hot’ spots, which reflect more accumulation of the radioisotope and hence more metabolic activity. Areas that are emitting less intense gamma rays are displayed as ‘cold’ spots, corresponding to areas that are metabolically inactive. A PET image might, for example, show a ‘hot’ spot in the brain. Based on this information, combined with the clinical information and the results of other investigations, a diagnosis of certain diseases can be made.
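Correcting for attenuation relies on the exponential law I = I₀e^(−μx). The sketch below shows the correction step; the attenuation coefficient used is an assumed, illustrative value, not one quoted in this chapter.

```python
import math

# Invert I = I0 * exp(-mu * x) to recover the emitted intensity I0
# from the detected intensity I after a path length x through tissue.
def emitted_intensity(detected, mu_per_mm, x_mm):
    return detected * math.exp(mu_per_mm * x_mm)

MU = 0.0096  # assumed attenuation coefficient per mm for 511 keV gammas
print(emitted_intensity(100.0, MU, 50))  # a source 50 mm deep emitted ~162
```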


Applications of PET scans

20.6

Using PET scans to detect diseased organs
n Gather and process secondary information to compare a scanned image of at least one healthy body part or organ with a scanned image of its diseased counterpart

PET scans are expensive, mainly due to the cost of producing the positron emitting radioisotopes using a particle accelerator. Also, because the positron emitters have very short half-lives, they must be produced on site and this limits the availability of the PET scans. Just like the simple radioisotope scans described before, PET scans focus on the functional status of the target organ but do not show the structure of the organ well.

secondary source investigation
PFAs: H3
physics skills: H12.3a, b, c, d; H12.4c

Using PET for brain pathologies
One of the common uses of PET scans is to diagnose brain diseases. FDG molecules are taken up well by the brain, and the pattern of the uptake depends on the state of the brain or the presence of a particular disease. A normal brain should show a uniform uptake of the molecules.

For a diseased brain, say in a patient with epilepsy, during seizure activity the diseased part of the brain will be overly active (the seizure focus), hence this part will take up more FDG molecules and will be shown as a ‘hot’ spot. After the seizure has ceased, this part of the brain is exhausted and shuts down, therefore its uptake of nutrients, in this case FDG molecules, will be reduced. A ‘cold’ spot will now be seen at the same part of the brain. Based on this, the seizure focus can be located accurately. Although a much cheaper test such as the electroencephalogram (EEG) may provide similar information, PET provides more precise and reliable information about the seizure focus and the pattern of the seizure activity. Such information is essential if surgical resection of the affected part of the brain is to be considered for epilepsy that is resistant to drug treatment. PET scans can also be used to diagnose multiple sclerosis, schizophrenia and Alzheimer’s disease (a form of dementia).

It is important to remember that PET scans are used to diagnose brain diseases where the brain structures may be normal and the disease arises from an abnormal functional state of the brain. Indeed, whether it is epilepsy, multiple sclerosis or Alzheimer’s disease, there might be no change, or just minimal change, in the structure of the brain when evaluated using CT or even MRI. For this reason, PET scans are extremely useful as they can diagnose certain diseases that no other imaging method is able to.

A common query needs to be addressed: ‘Why can’t the glucose molecules be labelled with technetium-99m when scanning the brain, which would make the scanning process much simpler and cheaper?’ The answer lies in the fact that glucose is a rather small molecule, too small to have technetium-99m (or any other pure gamma emitter) added without distorting its structure and shape. This distortion means the molecule will no longer be taken up by the brain. Fluorine-18 is a small atom, therefore when attached it does not affect the structure of the glucose molecule. Consequently, in cases where glucose molecules have to be used as the carrier molecules, fluorine-18 is attached and PET is the only available option.


Using PET for detecting metastatic cancer
The other common use of PET scans is to detect the spread of cancer (metastasis). Metastases can sometimes be difficult to detect using other imaging methods, such as a CT scan, because they may be small and embedded in healthy tissues. However, cancerous tissues often up-regulate their glucose transporters as a mechanism to increase their nutritional uptake to sustain their rapid growth. Consequently, their uptake of FDG is increased, which allows the cancerous tissues to be displayed as ‘hot’ spots on PET images. Because the treatment of cancer can be very different depending on the presence or absence of metastasis, the information provided by PET will guide the doctor in planning the best treatment for the patient.

High-grade (more advanced) cancers tend to take up more glucose molecules than low-grade cancers, and therefore will ‘light up’ more on PET images. This allows doctors to assess the nature of a particular cancer without taking tissue samples. Once again, this information cannot be provided by scans that only reveal anatomical structures.

Using PET for research
PET can be used to study brain activity during certain types of physical task, such as speaking or undertaking a fine motor activity. Based on the area of the brain that increases its uptake of FDG molecules when a particular task is performed, the researcher is able to determine the part of the brain that controls a specific activity or task. For example, when speaking, a part of the frontal lobe will ‘light up’, suggesting it is responsible for controlling speech. This increases our knowledge of the regional function of the brain. It also provides valuable information that guides doctors in predicting the likely deficit and prognosis after a brain injury (such as a stroke). Students are encouraged to source their own images and refer to page 376 for examples.

20.7


Evaluation of the uses of radioisotope scans
Radioisotope scans, including PET scans, are not useful for providing information about the anatomical structures of the body. They have very low resolution and do not show fine detail of the target organ. On the other hand, radioisotope scans are the only type of scan that enables assessment of the functional status of the target organ. Based on this information, many diseases can be diagnosed, especially ones that do not show any structural abnormalities. Also, because radioisotope scans do not show structural details, they are often used in conjunction with other imaging methods that do show anatomical structures well, such as CT or MRI.

Furthermore, radioisotopes produce harmful ionising radiation, which can damage the body even when administered at low doses. Although in most cases the benefit gained from the results of the scans outweighs the associated risks, they should be used with care. The risk of developing a secondary cancer is small due to the low dose and the short half-life of the radioisotopes used; however, the scans are generally contraindicated in pregnancy.

Lastly, radioisotope scans, especially PET scans, are expensive. PET scans also depend on particle accelerators for their function, therefore their availability is limited and the waiting list for PET scans is often long.


chapter revision questions
1. Define ‘radioactivity’ and ‘transmutation’.
2. (a) Determine whether the following elements are alpha emitters or beta (plus or minus) emitters.
   (i) Potassium-40 (ii) Thorium-232 (iii) Radium-226 (iv) Iodine-131 (v) Nitrogen-13
   (b) Write a nuclear equation to describe the decay of each of these elements.
3. Iron-59 undergoes gamma decay; write an equation to describe this reaction.
4. (a) When aluminium is bombarded with alpha particles, a highly unstable isotope of phosphorus is formed. Write a nuclear equation to describe this reaction.
   (b) This radioisotope of phosphorus then undergoes a decay to form phosphorus-30 and another product. Identify this product and write a nuclear equation to describe this reaction.
5. Nitrogen-14, when bombarded with an alpha particle, will give rise to oxygen-17 and one other element. Write a nuclear equation to describe this reaction.
6. A sample of radioactive $^{214}_{83}\mathrm{Bi}$, which has a half-life of 19.9 minutes, has an activity of $5.84 \times 10^{4}$ Bq.
   (a) What will its activity be after 39.8 minutes?
   (b) What will its activity be after two hours?
7. Give two reasons why technetium-99m is the most commonly used radioisotope in nuclear medicine for acquiring images of a diseased organ.
8. The following questions are about bone scans.
   (a) Why does technetium-99m have to be attached to polyphosphate molecules (oxidronate) when performing a bone scan?
   (b) Under what circumstances would bones increase their uptake of technetium-99m?
   (c) How is the radiation emitted by technetium-99m detected outside the body?
   (d) Give two examples of medical problems that can be shown by using a bone scan.
   (e) Give one example where bone scans can be superior to plain X-ray films.
9. In general terms, what classes of thyroid pathologies are best diagnosed using a thyroid scan?
10. What happens when a positron is allowed to collide with an electron?
11. Explain in detail how PET can use fluorine-18 labelled glucose molecules to form images of the target organ.
12. Compared to CT scans, PET scans are superior in diagnosing brain pathologies such as epilepsy or Alzheimer’s disease (a form of dementia). Why?
13. Why can PET scans detect metastatic cancer more accurately than other scanning methods, such as CTs?
14. What are some of the limitations of using radioisotope scans, in particular PET scans?


CHAPTER 21

Magnetic resonance imaging

The magnetic field produced by nuclear particles can be used as a diagnostic tool

Introduction
Magnetic resonance imaging (MRI) is a relatively new diagnostic tool that can provide doctors with high-quality images of many body systems. It can provide high-resolution details of anatomical structures, and images of certain tissues or diseases may be enhanced. MRI is structurally and functionally complicated; this chapter discusses its basic functional principles.

First, the patient who is to have the scan is placed inside a big open tube. Inside the tube a very strong magnetic field is produced. The magnetic field aligns the nuclei of the atoms of interest in the body. Once these nuclei are aligned, pulses of radio waves are applied to the body. The aligned nuclei are able to absorb these radio waves. Once the external radio waves are switched off, the absorbed radio energy is re-emitted from the nuclei. The returning radio waves are detected and analysed to produce images of the body parts. The result is that the body is turned into a radio wave emitting source. With different body parts, and different diseased or healthy tissues, emitting radio waves differently, the pattern of the radio wave emission provides information about the structure of the body parts.

Note: Unlike CT, where the image formation process relies on the attenuation of external energy (X-rays), the signals for MRI images are generated within the body (although induced first by the external radio waves). This can be compared to radioisotope scans, where the image formation also relies on rays emitted from within the body, although the difference is the use of gamma rays emitted by the injected radioisotopes in nuclear scans.

An MRI machine

The rest of this chapter elaborates on the points mentioned above, with emphasis on:
1. Why nuclei align when they are subjected to a strong magnetic field.
2. Why aligned nuclei absorb radio waves, and the rules governing the re-emission of the radio waves.
3. How the re-emitted radio waves are analysed to form images of body parts.


Nuclear spin
n Identify that the nuclei of certain atoms and molecules behave as small magnets
n Identify that protons and neutrons in the nucleus have properties of spin and describe how net spin is obtained
n Explain that the behaviour of nuclei with a net spin, particularly hydrogen, is related to the magnetic field they produce

21.1

To fully explain the spin of the nucleus, quantum physics is required, which is beyond the scope of this course. To simplify the concept of nuclear spin, the nucleus can be visualised as spinning on its axis. A spinning nucleus carries an angular momentum that is related to the spin number (I). The spin number is a quantum mechanical number and is essentially determined by the number of protons and neutrons inside the nucleus. There are three groups of values for I: 0, integral values and half-integral values (e.g. 1⁄2, 3⁄2, 5⁄2 and so on). The spin number of a nucleus will be 0 if both its atomic number and atomic mass number are even. When the atomic number is odd and the atomic mass number is even, the spin number will be an integer. When the atomic mass number is odd, the spin number is always an odd multiple of 1⁄2 (a half-integral value). The spin numbers for some common elements are listed in Table 21.1.

Table 21.1

Element         Proton number   Neutron number   Spin number (I)
Hydrogen-1      1               0                1⁄2
Hydrogen-2      1               1                1
Helium-4        2               2                0
Oxygen-16       8               8                0
Sodium-23       11              12               3⁄2
Phosphorus-31   15              16               1⁄2

Note: Students are not required to know the method for calculating the spin number.

Magnetic field associated with the nuclear spin
Recall that moving charged particles create a magnetic field. Since a nucleus contains positive charge (due to its protons), there will be an associated magnetic field if it spins. Of the elements listed in Table 21.1, for example, all except helium-4 and oxygen-16 produce their own magnetic field, because of their non-zero spin number and hence angular momentum.

The spinning nucleus produces a magnetic field with its net axis parallel to the spinning axis and its direction determined by the right-hand grip rule; a small bar magnet provides a useful analogy. As shown in Figure 21.1 (a), when the nucleus is spinning in a clockwise direction, the fingers curl in the clockwise direction and the thumb points upwards. This is to say that the magnetic field produced by this spinning nucleus has a net axis that points upwards, similar to a bar magnet with its north pole pointing upwards. The opposite is true if the nucleus is spinning in the anti-clockwise direction, as shown in Figure 21.1 (b).


Figure 21.1  (a) Spinning of a nucleus in a clockwise direction produces a magnetic field that resembles a bar magnet that has its north pole pointing upwards (b) Spinning of a nucleus in an anti-clockwise direction produces a magnetic field that resembles a bar magnet that has its north pole pointing downwards

21.2

The nucleus in a magnetic field
n Explain that the behaviour of nuclei with a net spin, particularly hydrogen, is related to the magnetic field they produce
n Describe the changes that occur in the orientation of the magnetic axis of nuclei before and after the application of a strong magnetic field

Consider what would happen when bar magnets with random orientations are placed inside a strong uniform magnetic field. As shown in Figure 21.2, these magnets will align with the magnetic field such that their north poles point in the direction of the field.

Figure 21.2  Aligning of the bar magnets inside an external magnetic field: the presence of the external magnetic field causes the magnets to give up their random orientations and align their north poles parallel to the field lines

Randomly orientated nuclei that have associated magnetic fields due to their spin will also align when they are subjected to a strong external magnetic field; however, there is a slight difference. As shown in Figure 21.3, nuclei can align with either the net magnetic field axis (‘the north pole’) pointing in the direction of the external magnetic field, known as parallel alignment, or in the opposite direction to the magnetic field, known as anti-parallel alignment.

Figure 21.3  Alignment of nuclei inside an external magnetic field: nuclei precess either parallel or anti-parallel to the field, absorbing energy to move from parallel to anti-parallel alignment and re-emitting energy on the return

Parallel alignments have a slightly lower energy level than anti-parallel alignments, and the energy difference is proportional to the strength of the applied magnetic field. Also, under normal circumstances, there are always more parallel alignments than anti-parallel alignments, and once again the difference is proportional to the strength of the magnetic field. When applying a magnetic field strength of 1.5 T, there is approximately one extra parallel alignment for every 100 million nuclei aligned.

This has two consequences. First is the formation of the net magnetic field. Before the alignment of the nuclei, the magnetic fields they produce are all randomly orientated, therefore no net magnetic field is expressed. However, the extra parallel alignments when a strong magnetic field is applied result in a net magnetic field in the parallel direction. The result of this is known as net magnetisation (M). Second, parallel alignments can absorb energy (externally applied in the form of pulses of radio waves) and move into anti-parallel alignments. This energy can be re-emitted once the anti-parallel alignments return to the original parallel alignments. This conversion between parallel and anti-parallel alignments forms the basis of MRI.

All nuclei that have a net spin will align within an external magnetic field in the ways described above. Nevertheless, hydrogen atoms are the chosen targets in MRI. Hydrogen atoms respond well to the external magnetic field and are abundant in the body, as they are found in water and fat molecules.

Note: Nuclei that have zero net spin will therefore not respond to the external magnetic field.


21.3

Precession
n Define precessing and relate the frequency of the precessing to the composition of the nuclei and the strength of the applied external magnetic field

Representing the aligning of the nuclei by using bar magnets as an analogy is adequate but incomplete. This is because even though the net magnetic field axis of the spinning nuclei aligns with the external magnetic field in either a parallel or anti-parallel fashion, the spinning motion of the nuclei means that their rotational axis will not be parallel to the magnetic field, but rather revolves around it. This is known as precession.

Definition
Precession is the movement where the rotational axis of a spinning object revolves around another central axis.

Hence the nuclei, when trying to align within the external magnetic field, will do so by either precessing parallel to the magnetic field lines, shown in Figure 21.4 (a), or precessing in an anti-parallel fashion, as shown in Figure 21.4 (b).

Figure 21.4  (a) Precession of the nuclei parallel to the magnetic field (b) Precession of the nuclei anti-parallel to the magnetic field: the rotational axis of the nucleus revolves around the external magnetic field line—precession

21.4

Larmor frequency
n Define precessing and relate the frequency of the precessing to the composition of the nuclei and the strength of the applied external magnetic field
n Discuss the effect of subjecting precessing nuclei to pulses of radio waves


An important property associated with the precession of the nuclei is the Larmor frequency.

Definition
The Larmor frequency is the frequency at which the nucleus precesses around the external magnetic field lines. In other words, it is the number of revolutions the rotational axis of the spinning nucleus completes in one second.

The Larmor frequency is not a constant but depends on two factors: the nature of the element, and the strength of the external magnetic field. This can be described by the equation $\omega = \frac{\gamma}{2\pi}B$, where $\omega$ is the Larmor frequency, $\gamma$ is the gyromagnetic ratio (a constant for each element) and $B$ is the strength of the magnetic field. Hence the Larmor frequency is different for different elements and can be changed by changing the external magnetic field strength. Table 21.2 lists a few Larmor frequencies at a magnetic field strength of 1.5 T.

The importance of the Larmor frequency for MRI is that the nuclei of an element will only absorb the pulses of radio waves, and change their alignment from parallel to anti-parallel, if the radio wave frequency corresponds to the Larmor frequency of the element. If the two frequencies do not match, the nuclei will not resonate and will not absorb the pulses of radio waves. The fact that hydrogen atoms have their own unique Larmor frequency means that they can be selectively targeted in MRI by choosing radio waves of the specific frequency. The Larmor frequency of the hydrogen atoms may also be altered slightly by changing the strength of the magnetic field (recall $\omega = \frac{\gamma}{2\pi}B$), and this forms an important part of image formation, as described below.

Table 21.2  The Larmor frequency of various elements

Element         Larmor frequency at 1.5 tesla (MHz)
Hydrogen-1      63.86
Hydrogen-2      9.803
Sodium-23       16.90
Phosphorus-31   25.88
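The table values can be reproduced from ω = (γ/2π)B. A minimal Python sketch, using the well-known γ/2π value for hydrogen-1 of about 42.58 MHz per tesla:

```python
GAMMA_OVER_2PI_H1 = 42.58e6  # Hz per tesla for hydrogen-1

def larmor_frequency(B_tesla, gamma_over_2pi=GAMMA_OVER_2PI_H1):
    """Precession frequency in Hz for a field of B_tesla."""
    return gamma_over_2pi * B_tesla

print(larmor_frequency(1.5) / 1e6)  # ~63.9 MHz, matching Table 21.2
print(larmor_frequency(3.0) / 1e6)  # doubling B doubles the frequency
```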

Image formation: locating the signals

21.5

After the body has been subjected to a strong magnetic field (the main magnetic field) and pulses of radio waves, all hydrogen nuclei that have absorbed the radio waves will start to re-emit them, and the re-emitted waves carry the anatomical information for image reconstruction. One crucial step in image reconstruction is to distinguish the radio waves coming from different locations in the body: for instance, distinguishing those emitted by the brain from those emitted by the liver, those from the left-hand side of an organ from those from the right, and those from the front of the body from those from the back. To achieve this, three gradient magnetic fields are used. One in the longitudinal axis of the body, the z-axis, enables slice selection; one in the x-axis and the other in the y-axis together describe the planes that are perpendicular to the z-axis (see Figure 21.5). These gradient magnetic fields are produced by gradient coils, as described below, and are added to the strong main magnetic field produced by the main solenoid, which is used to align the nuclei of the atoms.

The basic principle of using gradient magnetic fields is that the magnetic field strength at any unique x, y and z coordinate is slightly different from every other, such that the tissues at that location resonate to radio waves of a specific frequency and hence later re-emit radio waves with a unique frequency, which can then be analysed to provide information about the tissues’ location.

Figure 21.5  MRI apparatus: the main solenoid and the gradient coils. The main solenoid creates the strong, uniform magnetic field used to align the nuclei of the atoms; the x coil creates a varying magnetic field from left to right, the y coil from top to bottom and the z coil from head to toe, augmenting the main field, which is essential for locating the returning radio signals; a radio-frequency coil generates and receives the radio waves.

Slice selection
By convention, MRI images are represented as slices, commonly horizontal slices. To select slices along the z-axis, a longitudinal gradient coil is used, as shown in Figure 21.5. This gradient coil produces a gradient magnetic field along the z-axis, which is added to the main magnetic field used to align the nuclei of the atoms. The gradient field is set up to augment the main magnetic field by about 1% in magnitude, so that the field strength increases uniformly towards the head, as shown in the graph in Figure 21.6. This results in distinct Larmor frequencies for the hydrogen atoms within different slices along the z-axis of the body. (Recall that the Larmor frequency is proportional to the strength of the magnetic field.) A radio oscillator is employed to selectively produce radio waves at a specific frequency to target the hydrogen nuclei within a particular slice along the z-axis. Later, these radio waves are re-emitted and received by the receiving coil, and the anatomical information carried by these radio waves is thus made slice-specific.

Analogy: This slice selection method is similar to tagging with labels. The anatomical information within different slices is tagged with different labels (the specific Larmor frequencies) so that the slices can be differentiated when the signals are received.

Figure 21.6  Magnetic field strength (T) plotted against distance along the z-axis (m): the field strength increases uniformly from toe to head

Locating signals within a slice
A similar idea is used to locate the signals coming from within one slice. The x coil produces a gradient magnetic field along the x-axis that differentiates the left from the right, whereas the y coil produces a gradient field along the y-axis that differentiates the top from the bottom. However, rather than being used to select which nuclei respond, the x gradient field encodes position by causing the nuclei of the hydrogen atoms to precess at slightly different frequencies depending on their positions within the slice. This is known as frequency encoding. The y gradient field, on the other hand, modulates the phase of the precession of the nuclei depending on position; phase difference in this context means that nuclei precess at the same frequency but one nucleus may be precessing ahead of another depending on its position. This is known as phase encoding. A typical slice has 256 frequency encoding values and 256 phase encoding values, which together divide the slice into 256 × 256 voxels. Tissues within each voxel have different frequency and phase encoding values, allowing their position within the slice to be located.
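The slice-selection idea can be illustrated numerically: the z gradient adds a position-dependent term to the main field, so each slice acquires its own Larmor frequency. In the sketch below the gradient strength is an assumed, illustrative value:

```python
GAMMA_OVER_2PI = 42.58e6  # Hz per tesla for hydrogen-1
B0 = 1.5                  # main field in tesla
G_Z = 0.01                # assumed z-gradient in tesla per metre

def slice_frequency(z_metres):
    """Larmor frequency of hydrogen nuclei in the slice at position z:
    the gradient adds G_Z * z to the main field."""
    return GAMMA_OVER_2PI * (B0 + G_Z * z_metres)

# Slices 1 cm apart resonate about 4.3 kHz apart, so a radio
# oscillator tuned to one frequency excites only one slice:
print(slice_frequency(0.00), slice_frequency(0.01))
```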

Image formation: tissue differentiation and contrasts
n Explain that the amplitude of the signal given out when precessing nuclei relax is related to the number of nuclei present
n Explain that large differences would occur in the relaxation time between tissue containing hydrogen bound water molecules and tissues containing other molecules

21.6

The second step in image formation is to reconstruct the actual outline of the organs. Since only hydrogen nuclei are targeted, the image reconstruction is essentially a mapping out of the hydrogen nuclei. Consequently, the shape of the organ or tissue depends on the distribution of the hydrogen nuclei. The brightness of the image, on the other hand, is related to the intensity of the radio waves received and is determined by:
1. The number of hydrogen atoms in a particular organ.
2. The way the pulses of radio waves are sent and detected. This is related to the relaxation of the hydrogen nuclei within the organ and is discussed in detail later.
3. The level of pre-saturation.
4. The use of a contrast agent.

Note: Tissues that emit more radio waves will appear bright on the screen whereas tissues that emit less will appear dark.

The re-emitted radio waves are weak. For the reconstruction of MRI images, the process of sending and receiving the pulses of radio waves has to be repeated many times so that the returning radio wave signals can be superimposed on each other to make the overall signal sufficiently strong. Although the repetition of the signal transmission provides a means of creating image contrast (see the next section), there are a few downsides. First, MRI scans generally take more time to complete than other imaging methods, such as CT scans. Second, the patient undergoing an MRI scan needs to hold still, and repeatedly hold their breath, to ensure that the hydrogen nuclei are not moved between successive re-emissions of the radio waves, which would result in blurry images. Thus MRI scans are not suitable for uncooperative patients (e.g. patients with dementia) and young children. Nor is MRI the best choice for scanning organs that are constantly moving, for instance, the chest wall and the heart.

Image intensity and the hydrogen density
Since hydrogen nuclei are the source of the radio wave signals, organs or tissues that have more hydrogen atoms per unit volume will appear brighter. Water-containing tissues, such as cysts, and fatty tissues are all rich in hydrogen and therefore show up well on MRI images. Bones, on the other hand, are not rich in hydrogen atoms and therefore do not show up well using MRI (unless contrast is used).

Image contrast and relaxation
To summarise: when the nuclei are subjected to a strong magnetic field, some precess parallel to the field whereas others precess anti-parallel. Normally, there are more parallel than anti-parallel alignments or precessions, such that a net magnetic vector aligned parallel to the external magnetic field results. This is known as net magnetisation (M). When the nuclei are subjected to pulses of radio waves, the parallel alignments will absorb the energy and move into the higher energy anti-parallel alignments. At the same time, the anti-parallel alignments will re-emit their energy to return to the parallel alignments. Nevertheless, overall, in responding to the pulses of radio waves, more nuclei change from parallel to anti-parallel than the reverse. The overall result is that the M vector gradually rotates away from its position parallel to the external field lines and eventually reaches the position that is 90° to the field lines. This rotation of the M vector into the horizontal plane (the plane described by the x and y axes) is known as transverse magnetisation; this is shown in Figure 21.7. While the M vector is in the transverse plane, the precessions of the nuclei are all in phase due to the action of the external magnetic field. This induces an EMF in the receiving coil to form the MR signals.

When the radio waves are switched off, all nuclei return to their original alignments and re-emit the radio waves as they do so. This is known as relaxation, which has two aspects: T1 and T2 relaxation. The new concept introduced in this section is that T1 and T2 relaxation is based on the changes in the M vector.

Figure 21.7  The rotation of net magnetisation (M) from parallel (0°) to the external magnetic field to 90° to the field lines when the precessing nuclei are subjected to pulses of radio waves at their Larmor frequency

T1 relaxation

T1 relaxation is related to the return of the M vector to its starting parallel position. The T1 relaxation time (or just T1) is defined as the time taken for the M vector to return to 63% of its original value, and the recovery is an exponential function, as shown in Figure 21.8. At a microscopic level, the energy dissipation occurs via the excited (spinning) hydrogen nuclei transferring their energy to the surrounding lattice, therefore T1 relaxation is also known as spin-lattice relaxation. Different tissues have different T1 relaxation profiles or times. Large molecules and bound water molecules, such as in fat, the liver and the spleen, have a short T1, while free water has a long T1 (see Figure 21.8).


T2 relaxation
T2 relaxation is related to the fact that while the M vector is in the transverse plane, the hydrogen nuclei all precess in a coherent way (in phase) as a result of the external magnetic field. As the M vector returns to the original position, the precessing nuclei lose coherence, which is accompanied by a reduction in the induced MR signals. This is known as T2 relaxation. The T2 relaxation time (or just T2) is defined as the time for the nuclei to decay to 37% of their initial precession coherence. Once again, T2 relaxation follows an exponential function, as shown in Figure 21.9. Microscopically, during T2 relaxation the precessing nuclei transfer their energy to other precessing nuclei rather than to the lattice, hence T2 relaxation is also known as spin-spin relaxation. There are also different T2 profiles: large molecules found in tendons and muscles have a short T2, while free water has a long T2 (see Figure 21.9). The two forms of relaxation occur simultaneously, with an increase in T1 relaxation being accompanied by an increase in T2 relaxation. This is because as more nuclei return to their original alignments, the loss of coherence of the nuclear precession also increases.

Figure 21.8  The profile for T1 relaxation of two different tissue types: the return of M to the parallel position reaches 63% of its original value at the T1 relaxation time, which differs between the tissues

Figure 21.9  The profile for T2 relaxation of two different tissue types: the coherence of the precessions decays to 37% of its initial value at the T2 relaxation time, which differs between the tissues

T1 and T2 weighted images
By sending and receiving pulses of radio waves at various rates, MRI can enhance tissue contrast to produce either T1 weighted images (enhancing tissues with a short T1) or T2 weighted images (enhancing tissues with a long T2). Two parameters need to be introduced. Repetition time (TR) is the elapsed time between successive pulses of input radio waves. Echo delay time (TE) is the time delay between the sending of the radio waves and the measurement of the first returning radio signal.

To produce a T1 weighted image, that is, to emphasise tissues that have a short T1, a short TR is used. A short TR enables tissues with a short T1 to maximise their absorption of radio waves, because as soon as the M vector returns to the original position (very quickly), there are new radio wave pulses available for absorption. Tissues that have a long T1 take longer to recover before they can absorb radio waves again, and therefore ‘miss out’ on the opportunity to absorb the many pulses of radio waves. Consequently, when the tissues re-emit the radio waves, the tissues with a short T1 emit more than those with a long T1 (because they have absorbed more), and hence appear brighter.
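The 63% and 37% definitions above are simply the points t = T1 and t = T2 on exponential curves. A minimal sketch of the two relaxation functions (the tissue time constants used are illustrative assumptions):

```python
import math

def longitudinal_recovery(t, T1):
    """Fraction of M recovered along the field: 1 - e^(-t/T1).
    At t = T1 this equals 1 - 1/e, or about 63%."""
    return 1 - math.exp(-t / T1)

def transverse_decay(t, T2):
    """Remaining precession coherence: e^(-t/T2).
    At t = T2 this equals 1/e, or about 37%."""
    return math.exp(-t / T2)

print(longitudinal_recovery(0.25, 0.25))  # ~0.63 for an assumed T1 of 0.25 s
print(transverse_decay(0.07, 0.07))       # ~0.37 for an assumed T2 of 0.07 s
```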


In T1 weighted images, a short repetition time is used. Tissues with a short T1 will appear bright.

On the other hand, in order to produce a T2 weighted image, that is, to enhance tissues that have a long T2, a long TE is needed. A long TE means the signals from T2 relaxation are measured long after the sending of the initial radio wave pulses. This results in only tissues with a long T2 contributing to the returning signals, because the precession of their nuclei is still in phase, whereas tissues with a short T2 will have had their T2 signal diminish to a very low level by this time. This allows the tissues with a long T2 to appear brighter and tissues with a short T2 to be suppressed.

A cross-sectional image of the brain—T1 weighted

In T2 weighted images, a long echo delay time is used. Tissues with a long T2 will appear bright.

Also, in T1 weighted images, the T2 effect needs to be suppressed; hence a short TE is used in order to suppress the T2 weighting. Similarly, a long TR is chosen to decrease the T1 weighting in T2 weighted images. By choosing a balanced TR and TE, images that are neither T1 nor T2 weighted can also be produced, and these may occasionally be clinically useful.

Note: It is futile to have both T1 and T2 weighted effects simultaneously, because this means the contrast is lost; that is, all tissues will light up at the same time.

A cross-sectional image of the brain—T2 weighted. Note that the cerebrospinal fluid (water) appears bright (see arrow).

Coloured MRI scans of an axial (horizontal) section through the brain of a 38-year-old patient with cerebral abscesses (dark circles surrounded by white rings): the abscesses have been highlighted by the injection of a gadolinium contrast agent


Image contrast by pre-saturation and contrast agents
These two concepts are complex and will only be discussed briefly. Pre-saturation refers to continuously sending pulses of radio waves targeting the tissue (say tissue A) that needs to be suppressed. These radio waves rotate the M vector into the transverse plane and maintain it there. At the same time, the induced voltages or MR signals due to the coherent precession of the nuclei of tissue A are destroyed by the use of a spoiler coil, so that they do not contribute to the image formation. The radio frequency oscillator then produces another set of radio waves targeting the tissues of interest (say tissues B and C). These tissues will absorb the energy, but the tissue that is to be suppressed (tissue A) will not, because its M vector is maintained in the transverse plane and therefore fails to relax. Consequently, upon relaxation, tissues B and C will re-emit the radio waves they have absorbed and light up on the scan images, whereas tissue A will be suppressed (appear dark) because it has not absorbed any radio wave energy. Examples include fat saturation, where fat tissues are selectively suppressed, and magnetisation transfer suppression, where free water is suppressed.

Contrast agents work by affecting the relaxation time of the target tissues, most commonly their T1 relaxation time. Gadolinium, a common contrast agent, works by shortening the T1 relaxation time of the target tissues, such as cancerous tissues. This results in enhancement of such tissues on the scan images if T1 weighting is employed.

Hardware used in magnetic resonance imaging

secondary source investigation
PFAs: H3
physics skills: H12.3a, b, d; H12.4f

n Gather and process secondary information to identify the function of the electromagnet, radio frequency oscillator, radio receiver and computer in the MRI equipment

Magnets
In first generation MRIs, magnetic fields were provided either by large permanent magnets or by electromagnets. These magnets were bulky and could only produce magnetic fields up to 0.5 tesla. These ‘low’ magnetic field strengths meant that the image quality was poor. A further problem was that electromagnets lost large amounts of heat energy during their operation. Modern MRIs use very strong magnetic fields, ranging from 1 tesla to 2.5 teslas. These strong magnetic fields may be provided by a superconducting coil magnet. Such a magnet has zero resistance when cooled below its critical temperature, so that only a very small amount of energy is lost as heat.

Note: Recall that heat loss = I²R.
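The note above can be made concrete with a quick comparison of heating power; the current and resistance values below are illustrative assumptions only:

```python
# Resistive heating P = I^2 * R for a conventional coil versus a
# superconducting coil (values are illustrative assumptions).
def heating_power(current_amps, resistance_ohms):
    return current_amps**2 * resistance_ohms

I = 400.0        # assumed coil current, amperes
R_copper = 0.5   # assumed resistance of a conventional winding, ohms
R_super = 0.0    # a superconductor below its critical temperature

print(heating_power(I, R_copper))  # 80000 W wasted as heat
print(heating_power(I, R_super))   # 0 W: all input sustains the field
```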

Furthermore, when using superconducting coil magnets, once the voltage is switched on, the current will continue to flow and produce the required magnetic field even after the voltage is turned off. This ‘perpetuating’ current makes such magnets very efficient to use. (Refer to Chapter 13 for further information on superconductors.) The main advantage superconducting magnets have over normal solenoid magnets is their ability to produce very powerful magnetic fields while minimising the amount of energy lost as heat. This is because superconducting magnets have no resistance to the flow of electricity, hence all the electrical energy input can be converted into the magnetic field. If resistance is present, an attempt to increase the magnetic field strength by increasing the current will see the electrical energy diverted into heat rather than converted into the magnetic field. Also, directly related to their high energy efficiency, superconducting coil magnets are smaller. One disadvantage of superconducting magnets is that they need to be cooled by, for instance, liquid helium. This makes them more expensive and more technically demanding.

The main magnetic field used to align the nuclei must be very strong in order to be effective. This strong magnetic field means that the main solenoid (the superconducting magnet) can potentially attract metal objects placed inside the scanning room or from within the patient’s body! It is therefore absolutely critical to have no unsecured metal, such as scissors, inside the scan room, as such objects may fly towards the main solenoid and cause damage. Patients who have metal implants inside their body, for instance, metal clips on a cerebral aneurysm (an abnormally dilated brain vessel), cannot have MRI scans, as these clips may be moved by the strong magnetic field, causing an intracranial haemorrhage (bleeding inside the brain). This can be fatal!

Radio frequency oscillator
The radio frequency oscillator produces radio waves with the required frequencies. As mentioned, only radio waves that have frequencies equal to the Larmor or resonant frequency of the target nuclei will be absorbed. Since there is a range of Larmor frequencies for the hydrogen nuclei, due to the way they are bonded in compounds as well as the influence of the gradient magnetic fields used for locating signals, the radio frequency oscillator needs to be able to produce a range of frequencies precisely. In summary, the role of the radio frequency oscillator is to produce radio waves of the right frequency to match the required Larmor or resonant frequencies.

Radio receiver
The radio receiver is a set of coils that detect the returning radio waves and digitise them for later processing. The coil may be the same coil that is used to produce the radio waves initially. The size of the receiving coil (radio receiver) may vary, with smaller coils more sensitive than larger ones. An RF (radio frequency) shield is used to shield the returning radio waves from the background radio waves from local radio and TV stations.

Computer
A powerful computer system is required to analyse the returning radio waves because of their complex nature. The returning radio waves are analysed for intensity as well as their x, y and z coordinates in order to determine their origins, as well as their tissue specificity. They are then processed to form the actual images. The computer also controls the repetition time and echo delay time in order to produce T1 or T2 weighted images. Furthermore, the computer controls pre-saturation and many other forms of radio wave transmission and manipulation in order to improve tissue contrast.

Medical uses of MRI

first-hand and secondary source investigation
PFAs: H3
physics skills: H12.3a, b, d; H12.4c; H13.1a, b, c, e

n Perform an investigation to observe images from magnetic resonance image (MRI) scans, including a comparison of healthy and damaged tissue
n Identify data sources, gather, process and present information using available evidence to explain why MRI scans can be used to:
– detect cancerous tissues
– identify areas of high blood flow
– distinguish between grey and white matter in the brain

MRI has extensive uses in clinical medicine. MRI has a higher resolution than ultrasound and CT scans and avoids the use of harmful ionising radiation. MRI can be used to resolve and visualise most tissues in the body, whether healthy or diseased. Some examples are described below.


Brain and spinal cord
MRI is the imaging method of choice for studying central nervous system (brain and spinal cord) anatomy and diseases. In addition to the high resolution MRI provides, which enables small anatomical structures such as the pituitary gland and the pineal gland to be resolved, MRI is also able to differentiate the grey matter from the white matter in the brain. The brain consists of a cortex, also known as the grey matter, on the outside and the white matter deeper inside; the spinal cord consists of central grey matter and surrounding white matter. The grey matter is composed of neuron (nerve) cell bodies while the white matter is composed of neuron axons. Importantly, neuron cell bodies have a different hydrogen atom density compared to neuron axons, and their hydrogen atoms are bonded differently, and thus have different T1 and T2 relaxation times. Consequently, the grey matter and white matter show distinctive contrast on MRI images and can be easily distinguished. This can be contrasted with CT images of the brain and spinal cord, where the grey matter and the white matter cannot be clearly differentiated due to their similar attenuation (absorption) of X-rays.

The high resolution and the ability to clearly distinguish the grey matter from the white matter are clinically important for diagnosing diseases like brain or spinal abscess, brain or spinal tumour and brain infarction (stroke). Multiple sclerosis, a demyelination disease that affects the white matter of the brain, changes the hydrogen composition of the white matter, so that the plaques formed in the white matter as a result of the disease can be readily visualised. These plaques do not show up well on CT images due to their similar X-ray attenuation compared to the rest of the brain tissue. MRI is also excellent at detecting vertebral disc herniation—a cause of acute back pain.

Figure 21.10  (a) MRI of the brain: healthy


Figure 21.10  (b) MRI of the brain showing a loss of differentiation between the grey matter and the white matter; this may be due to a lack of blood supply to the brain—a stroke


Detecting cancerous tissues

Figure 21.11  (a) A brain tumour

MRI is also used to detect cancerous tissues, for example lung, brain, thyroid and kidney cancers. Cancer cells may contain hydrogen atoms that have different T1 and T2 relaxation times, and hence may form a contrast with the surrounding tissues; see Figure 21.11 (a) and (b). Furthermore, cancerous tissues usually contain cells that are either damaged or malfunctioning, and therefore leakier; they often have a higher water or hydrogen content, which makes the cancerous tissues show up well on MRI images. Cancerous tissues, due to their high metabolic rate, also take up more contrast material when it is administered, and this further enhances their visualisation. For instance, a lymph node that takes up gadolinium will appear bright on a T1 weighted image due to the shortening of its T1 by gadolinium. Such a lymph node is usually malignant, as a normal lymph node takes up gadolinium only minimally. Nevertheless, for economic and practical reasons, most malignancies are detected by CT scans rather than MRI (MRIs are expensive and not always available). Occasionally, supplementary PET scans may help to assess the spread of the cancer.

Blood flow: magnetic resonance angiogram
MRI can be used to selectively reconstruct blood vessels and is therefore excellent for studying the vascular (blood vessel) anatomy, particularly in the brain. This is also known as a magnetic resonance angiogram (MRA). There are two ways blood vessels can be selectively studied using magnetic resonance. One method is known as time-of-flight MRA. In this modality, both the stationary tissue and the blood are pre-saturated with radio waves.

Figure 21.11  (b) A tumour that has spread to the spine (sacrum) (see arrow)

Figure 21.12  MRA of the circle of Willis showing a berry aneurysm

Note: Recall that to achieve saturation is to continuously bombard the tissue with radio waves so that its M vector cannot relax and the nuclei can no longer absorb more energy.

However, because the blood flows away, blood that was pre-saturated is now elsewhere. The fresh blood that flows in is unsaturated and is therefore able to absorb radio waves and later re-emit them for the reconstruction of the blood vessel anatomy.

The second method is known as phase contrast MRA. In this, the background tissue signals are subtracted from the flow-enhanced signals (due to the movement induced phase shift in the precession of the hydrogen nuclei), and the difference is used for the blood vessel reconstruction. A minimum of two image signals is required. Phase contrast MRA produces better quality images compared to time-of-flight MRA due to a better background tissue suppression; however, it has a more prolonged scanning time. MRA is used to study the blood vessels that supply the brain: the carotid arteries and circle of Willis. The blood vessels can be easily


visualised, and problems such as thrombus (clots), leakage, aneurysm (abnormal dilation of the blood vessel which can break and bleed, see Figure 21.12), stenosis (narrowing) and many more can be diagnosed. MRA is the key investigation for stroke patients. MRA is better than CT angiogram because a contrast agent (which can damage the kidneys) is not needed. In addition, it is able to provide a better resolution and contrast so that smaller vessels and smaller problems can be identified.

Soft tissues

MRI is excellent for the study of soft tissues. Tissues like muscles, tendons, ligaments and cartilage all have excellent MRI signals that can be analysed to give high-resolution images. MRI is commonly used to study the musculoskeletal system to diagnose degenerative joint diseases, such as osteoarthritis, torn tendons, torn muscles or other mechanical injuries. Injured tissues enhance particularly well with T2 weighting. The principle behind this is that injured soft tissues accumulate water, and free water has a long T2 relaxation time and shows up brightly on T2 weighted images.

Functional MRI

By manipulating the ways radio waves are transmitted and received, MRI can be used to study the perfusion (degree of blood flow) of various organs, and hence the functional aspects of the organs.

Note: Remember MRI is unsuitable for scanning any body parts that are constantly moving.

Observing MRI images

You are required to observe MRI images and make comparisons between the healthy and the diseased organs or tissues. This section provides a selection of MRI images, including a comparison between the healthy and diseased appearances of the same organs. You may wish to obtain more images of interest through further research. Some suggested sources include the internet, medical textbooks and journals. The book MRI: Basic Principles and Applications by Mark A. Brown and Richard C. Semelka, published by John Wiley & Sons in 2003, contains a variety of MRI images. It may be used as a handy reference should a more in-depth understanding of MRI be sought.

After observing the MRI images:
■ Which features of these images can help to identify them as MRI images? Make a list describing how MRI images are different from CT images.
■ One thing MRI images have in common with CT images is the ability to represent organs using slices oriented in different planes. How many different types of slices are there and what are they?
■ Observe the T1 and T2 weighted images of the same organ and list the differences. Remember, water appears bright on T2 weighted images and dark on T1 weighted images.

Figure 21.13  MRI of the knee, with two different views


A comparison between the imaging techniques

secondary source investigation
PFAs: H3
Physics skills: H11.3a, c; H12.3a, b, c, d; H12.4a, c, f; H13.1a, b; H14.1a, b, e, g, h

■ Identify data sources, gather and process information to compare the advantages and disadvantages of X-rays, CAT scans, PET scans and MRI scans

The advantages and disadvantages of ultrasound, plain X-rays, CT scans, endoscopies and nuclear scans have been discussed individually in the previous chapters. The main advantages and disadvantages of MRI are summarised below.

Advantages
■ MRI provides high-resolution images of body organs. MRI can visualise almost all body organs well, except those which are constantly in motion, such as the heart and chest wall. With the application of various forms of signal sequencing (e.g. T1 and T2 weighting, pre-saturation and so on), excellent contrast between tissues, both healthy and diseased, can be achieved. Under many circumstances, MRI is the best imaging modality for achieving a diagnosis.
■ MRI avoids the use of harmful ionising radiation.
■ Compared to endoscopies, MRI is non-invasive.

Disadvantages
■ MRI is very costly. MRI machines are expensive and the cost of maintenance (including the cost of the coolants for the superconducting magnets) is also high.
■ The main solenoid used by MRI is shaped like a tunnel. Some patients, when lying inside this tunnel, can develop severe anxiety attacks and claustrophobia. These patients are unable to undergo MRI scans.
■ MRI examinations are lengthy. They require patients to be still, which may pose difficulties when scanning children or uncooperative patients.

Table 21.3 compares all of the imaging methods in terms of their cost, resolution, length of the examination, comfort and safety, as well as their uses for each of the body systems.

Table 21.3  A summary of all scanning methods studied in this module

Cost(a)
- Ultrasound: one of the cheapest imaging methods, about $50 to $100
- Plain X-ray: one of the cheaper imaging methods, about $20 to $50
- CT scan: moderately expensive, around $200
- Endoscope: the doctor's fee is about $200(b)
- Nuclear medicine: a standard radioisotope scan is around $100; a PET scan is over $500
- MRI: expensive, ranging from $400 to $600

Resolution
- Ultrasound: low, 1–5 mm
- Plain X-ray: high, 0.5 mm
- CT scan: moderate, 1 mm
- Endoscope: very high, 0.1 mm
- Nuclear medicine: low, 5–10 mm
- MRI: high, 0.5 mm

Length of the examination(c)
- Ultrasound: varies, usually about 30 minutes
- Plain X-ray: about 1 minute
- CT scan: about 5 minutes
- Endoscope: varies, usually about 30 minutes
- Nuclear medicine: radioisotope scans usually take about 5 to 10 minutes; PETs can take up to 30 minutes(d)
- MRI: 15 to 30 minutes

Comfort and safety
- Ultrasound: non-invasive and the patient is usually quite comfortable; because no ionising radiation is used, it is 100% safe
- Plain X-ray: comfortable; minimal harm to the patient due to the low dose of X-rays used; not recommended in pregnancy
- CT scan: comfortable; more harm than plain X-ray films due to the higher dose of X-rays used; not recommended in pregnancy
- Endoscope: the procedure is invasive and has associated risks; examination is generally uncomfortable and needs to be performed under anaesthetic
- Nuclear medicine: comfortable; gamma radiation may cause harm; not recommended in pregnancy
- MRI: most patients find it comfortable apart from the noises of the machine; some patients can be claustrophobic and can develop panic attacks while having the scan

Central nervous system
- Ultrasound: the brain and the spinal cord cannot be visualised well because of their bony coverage
- Plain X-ray: the brain and the spinal cord cannot be visualised
- CT scan: can be used to diagnose most pathologies, including stroke, tumour, infection, abscess and bleeding
- Endoscope: not available
- Nuclear medicine: for studying the function of the brain and diagnosing functional brain diseases, for example, epilepsy
- MRI: **can diagnose most pathologies and is more accurate than CT; MRA can be used to study the blood vessels supplying the brain and the spinal cord

Cardiovascular system
- Ultrasound: **duplex ultrasound is the diagnostic tool of choice for studying cardiac and vascular conditions; a duplex scan can examine both the anatomy and the blood flow
- Plain X-ray: contrast can be injected to outline blood vessels; used for studying coronary vessels (blood vessels that supply the heart) in coronary angiography
- CT scan: good for studying the anatomy of the heart and the course of a blood vessel; contrast may improve the image quality
- Endoscope: not available
- Nuclear medicine: Tc-99m labelled albumin can be used to trace the blood flow; Tc-99m labelled MIBI can be used to assess the cardiac function
- MRI: MRA is good for studying the anatomy of blood vessels; functional MRI is occasionally used to study the perfusion of organs

Respiratory system
- Ultrasound: airway and lungs are difficult to visualise due to their rib coverage
- Plain X-ray: **screening investigation for lung conditions; although not all that accurate, it provides clues based on which other scans may be ordered
- CT scan: **pathologies detected on a plain X-ray film can be further evaluated using a CT scan; reliable and accurate
- Endoscope: a bronchoscope can be used to detect airway lesions, such as a tumour
- Nuclear medicine: a ventilation and perfusion scan is useful for diagnosing pulmonary embolism
- MRI: rarely used (due to movement)

Digestive system
- Ultrasound: good for detecting gallstones; can visualise liver, pancreas and spleen
- Plain X-ray: the gastrointestinal tract may be visualised only with the use of contrast; limited value
- CT scan: **the scanning modality of choice for detecting intra-abdominal pathologies; can visualise well all intra-abdominal organs and the gastrointestinal tract (although not too accurate for small luminal conditions); image quality can be further improved by the use of contrast agents
- Endoscope: the gastroscope and colonoscope are useful for detecting luminal pathologies of the gastrointestinal tract; the laparoscope is the most accurate diagnostic tool for intra-abdominal conditions, although it is invasive, and is an essential component of key-hole surgery
- Nuclear medicine: occasionally used for functional bowel diseases, such as gastroparesis (ineffective contraction of the gastrointestinal tract leading to a delayed transit time); PET can be used to detect the intra-abdominal spread of a cancer
- MRI: occasionally used for detecting intra-abdominal malignancies

Musculoskeletal system
- Ultrasound: useful for assessing soft tissue injuries; low resolution and operator dependency limit its use
- Plain X-ray: **the investigation of choice for bone conditions, mainly fractures; often the first line investigation for musculoskeletal pain
- CT scan: can visualise bones and soft tissues well; useful for assessing complex injuries such as very comminuted fractures (small fragments); 3D CT can further aid the diagnosis
- Endoscope: an arthroscope can be used to diagnose and treat joint conditions, for example, diagnosing osteoarthritis or repairing torn ligaments
- Nuclear medicine: a bone scan can detect conditions that are unable to be seen on a plain X-ray film or a CT image; occult fracture and osteomyelitis are some examples
- MRI: **MRI has a similar role to CT but has a higher resolution and is therefore more accurate and reliable; MRI can visualise tendons, ligaments and cartilage better than CT

a. Cost can vary quite substantially. Different doctors or radiographers may charge different rates. Also, examination of certain body systems may be more expensive than others. Hence only a rough estimate is provided here.
b. Most endoscopic examinations need to be performed in the operating theatre with an anaesthetist present, so these costs need to be added. In addition, if endoscopic procedures are performed, further costs will be incurred.
c. Not including set-up time and positioning of the patient.
d. The patient usually needs to be injected first and then return a few hours later for the actual scan. This waiting is not included as part of the examination time.
** Most useful or most commonly used imaging method for this body system.

The impact of medical applications of physics on society

secondary source investigation
PFAs: H4, H5
Physics skills: H12.3a, b, c, d, e; H12.4a, b, c, f; H14.1a, b, d, e, g, h; H14.3c, d


■ Gather, analyse information and use available evidence to assess the impact of medical applications of physics on society

There is no doubt that an increase in the knowledge of and the ability to manipulate physics has led to the development of many advanced imaging methods. Ultrasound, plain X-rays, CTs, endoscopies, nuclear medicine scans and MRIs all have important roles in studying the anatomy of the body and establishing diagnosis of conditions. The impacts of these imaging methods on society are multi-dimensional, both positive and negative.

Healthier society

The advance in imaging techniques allows earlier diagnosis and hence better management of certain diseases, leading to a healthier society. For instance, ultrasound has been widely used in antenatal clinics for detecting foetal abnormalities before birth. This allows minor abnormalities or deformities to be corrected via intra-uterine interventions. In the case of major abnormalities or deformities, other options may be discussed with the parents. Chest X-rays have been widely adopted to screen for lung diseases, such as tuberculosis (TB), in immigrants arriving in Australia. This helps to reduce the incidence of TB in our country. CT scans and PET scans have been used to detect and evaluate the spread of cancer, helping doctors to work out the best treatment for the patient, whether surgical resection, radiotherapy or chemotherapy. The invention of laparoscopes has led to the development of key-hole surgeries. Key-hole removal of the gall bladder leaves behind smaller wounds (key-hole size), allowing a quicker recovery time as well as a lower infection and complication rate. Arthroscopic surgeries of the joints, whether to repair a torn ligament or tendon, can minimise post-operative pain and rehabilitation and therefore enable patients to return to their full function earlier.

chapter 21  magnetic resonance imaging

Increase in medical knowledge

As described in Chapter 20, PET scans allow the study of regional functions of the brain. This allows doctors to predict the outcomes of certain debilitating diseases of the brain, such as multiple sclerosis (where there is a scarring of the brain with unknown aetiology) or stroke.

Economics

Associated with the advance in medical imaging methods is the increase in cost. Some of the medical imaging devices are very expensive. An MRI or a PET machine costs millions of dollars to install and maintain. It poses a heavy burden on the government and healthcare funding to provide these scans in all hospitals.

Ethics

With the widespread use of imaging methods come some unresolved ethical issues. For example, if a foetus is diagnosed on ultrasound with a major genetic defect or a major abnormality, what should be done next? Is it ethical to perform an abortion, and what is the latest time to perform it? Is it ethical to perform intra-uterine operations? Is it ethical to proceed with the birth? Certain imaging devices are expensive and may only be available to those who are rich (in private hospitals). By using better scanning techniques, the rich can have diseases detected, and thus treated, early. Not all human beings are being treated equally. The use of ionising radiation, such as in nuclear medicine, X-rays or CT, in women who are pregnant but do not know this at the time of the test, causing later foetal deformity, is both an ethical and a legal concern.

chapter revision questions

1. Although carbon-12 forms the basic building block for most body tissues, it does not contribute to the image formation in MRI. Explain the reason for this.
2. Nuclei with a net spin, when subjected to a strong external magnetic field, will align either parallel or anti-parallel to the magnetic field lines.
   (a) Are there more parallel or anti-parallel alignments?
   (b) Identify which type of alignment has a higher energy level.
   (c) Describe what can be done to change the low energy alignments to the high energy alignments.
   (d) Explain the significance of the change in alignments of the nuclei in the context of MRI.
3. (a) Define precession.
   (b) Explain why precession is an important part of MR image formation.
4. Describe how MRI is able to locate signals coming from different positions along the longitudinal axis of the body. In your answer, discuss the significance of the Larmor frequency for signal location along the longitudinal axis of the body.
5. The returning radio waves from the nuclei as they return to their original alignments are used for MR image reconstruction. Describe the factors that determine the brightness and contrast of the images formed.
6. (a) Define T1 relaxation and T1 relaxation time.
   (b) Define T2 relaxation and T2 relaxation time.
7. Describe how MRI produces T1 weighted images. What tissues show up brightly on T1 weighted images?
8. Describe how MRI produces T2 weighted images. What tissues show up brightly on T2 weighted images?
9. (a) Identify the main hardware components of an MRI machine.
   (b) Describe the advantages of superconducting magnets over normal solenoid magnets.
   (c) Describe one safety precaution needed when operating an MRI machine.
10. June is a 57-year-old woman who presented to her GP with neurological symptoms. Her GP referred her to a neurologist, who then ordered an MRI scan of her brain and spinal cord.
   (a) Explain why the neurologist chose MRI over CT.
   (b) Identify one disadvantage of MRI, assuming June is not claustrophobic.
   (c) If June has a metal pacemaker for her heart condition, can she still have the scan? Why?
   (d) If the MRI is negative for any medical conditions, describe what other options there are for the neurologist.
11. Ken is a 19-year-old boy who twisted his right knee during a soccer match. He heard a pop and this was followed by knee swelling and excruciating pain. He limped into an orthopaedic surgeon's consulting room. The surgeon suspected that Ken might have torn one of the ligaments in the knee and ordered an MRI scan of the knee.
   (a) Would plain X-ray films have any value in this clinical scenario?
   (b) Could ultrasound be used for diagnosing ligament tears in the knee joint? Explain.
12. Define MRA and name two clinical scenarios for which MRA may be useful.
13. Assess the impact of medical physics on society.


astrophysics

CHAPTER 22

Observing our Universe
Our understanding of celestial objects depends upon observations made from Earth or from space near the Earth

Figure 22.1  Galileo explaining Moon topography to sceptics
Figure 22.2  A telescope like the one used by Galileo

22.1  Galileo's observations of the heavens
■ Discuss Galileo's use of the telescope to identify features of the Moon

Galileo did not invent the telescope, but he was the first person to construct one that could produce a sufficiently clear image to observe features on the Moon, features which today can be seen with everyday binoculars. Crude telescopes, known as perspicillums, made from roughly ground glass lenses placed at either end of a hollow tube, were sold as children's toys and novelty items. Believed to have been 'invented' by at least two different Dutch spectacle makers, they could make distant objects such as church steeples appear closer. Galileo improved the perspicillum. He used better quality glass from the glassblowers around Venice in Italy, which he then ground himself to produce better lenses. He called the improved device a telescope. The result was much improved, so much so that the mountains and craters of the Moon were clearly visible to Galileo. However, in Galileo's time, all heavenly bodies were considered 'perfect'. This view had been held since Aristotle, and was incorporated into the teaching of the Catholic Church (along with the belief that the Earth was the centre of the Universe). Clearly, to Galileo, this was not the case. Using the angle of the Sun, Galileo made estimates of the heights of the lunar mountains and showed that the craters were deep with high sides around them. Vast regions of plains, the 'maria' (meaning seas), were mapped by Galileo with the aid of his refined telescope.

Figure 22.3  (a) The Moon, first observed through a telescope by Galileo

Figure 22.3  (b) Jupiter and one of its four Galilean moons, Io, as observed through a small telescope

chapter 22  observing our universe

Galileo’s observations of our nearest neighbour in space, showing its imperfections for the first time, were followed by observations of Jupiter’s moons orbiting their parent planet. This in turn led to Galileo’s model of the solar system with the Sun, not the Earth, at its centre.

22.2  The atmosphere is a shield
■ Discuss why some wavebands can be more easily detected from space

Earth's atmosphere extends above us for several hundred kilometres. Air pressure is a result of the weight of the vertical column of air above us. Composed mostly of the gases nitrogen and oxygen, the atmosphere is transparent to visible light and most radio waves. For these wavebands, there is little interaction with the molecules in the atmosphere. However, blue light, having a shorter wavelength in the visible spectrum, is scattered by fine particles and larger molecules in the atmosphere. The scattering is evident by the way in which the sky appears blue in all directions.

The electromagnetic spectrum is shown in Figure 22.4. The wavebands are regions within the spectrum that have common characteristics and uses. For wavebands other than visible light and radio, the atmosphere acts as a shield, absorbing nearly all gamma and X-rays and the majority of ultraviolet (UV) rays. Gamma rays are absorbed by the atoms making up the atmosphere. UV wavelengths are absorbed by ozone gas molecules as well as by other molecules in the atmosphere. Getting a sun tan is evidence that some UV radiation penetrates the atmosphere. Infrared radiation (heat) is partially absorbed by carbon dioxide and water vapour molecules. The blue appearance of the sky is due to the scattering of short-wavelength visible light by fine particles and larger molecules, evidence that even visible light is somewhat affected by the atmosphere.

Figure 22.4  The electromagnetic spectrum with wavebands shown (wavelengths run from 10⁻⁶ nm gamma rays through X-rays, ultraviolet radiation, visible light (violet 400 nm to red 700 nm), infrared radiation, microwaves and radio waves out to 100 km)

405

astrophysics

Figure 22.5  Diagram showing absorbance of wavebands by the atmosphere (per cent total absorption and scattering, from 0 to 100, plotted against wavelength from 0.2 to 70 µm; the major components shown are water vapour, carbon dioxide, oxygen and ozone, methane, nitrous oxide and Rayleigh scattering)

Figure 22.6  Mount Stromlo Observatory

At sunset and sunrise, light from the Sun has a longer path to travel through the atmosphere, so that the shorter wavelengths are scattered, leaving a dominance of longer wavelengths and causing the Sun to appear redder. Further to this scattering effect is the distortion, or 'seeing', caused by the refraction of light as it passes through air of slightly different densities due to temperature variations. The twinkling of stars is probably the most commonly recognised effect of seeing. The stars are so distant that they approximate mathematical points to our eyes. Refraction (seeing) causes minute shifts in position, resulting in apparently larger and flickering dots. A mirage seen on a hot road in summer is an extreme case of the refraction of light due to the difference in the density of hot air and cooler air. Very hot air near the road's surface is less dense and allows light to travel through it faster than the cooler air above. When viewing at very shallow angles, the road appears to shimmer as the observer is actually viewing a distorted image of the sky just above the road. See Figures 22.7 (a) and 22.7 (b). The same refractive effect, although less pronounced, causes the twinkling of stars: an astronomer's worst nightmare after clouds. Wavebands absorbed by the atmosphere are more easily detected from space, above any atmospheric effects. This is why many hundreds of millions of dollars have been spent in recent years in placing instruments on satellites to detect all but radio waves for use in astronomical observations. The Compton Gamma Ray Observatory, the Chandra X-ray Observatory, the Hubble Space Telescope and the Spitzer

Figure 22.7  (a) How a mirage forms over a hot road (light rays follow a curved path between the cooler air above and the hot air just over the road)
Figure 22.7  (b) A mirage


(infrared) Space Telescope all orbit the Earth and are controlled remotely from ground-based centres. For ground-based instruments, astronomers have been limited to gathering information primarily in the visible light and radio wavebands. The reception of radio waves from deep space using instruments on satellites is not feasible due to the very large dishes used in radio astronomy. The main dish on the Parkes radio telescope has a diameter of 64 m.

Figure 22.8  The Compton Gamma Ray Observatory

22.3  Resolution and sensitivity
■ Define the terms 'resolution' and 'sensitivity' of telescopes

Any device designed to capture an image must resolve the information, in a similar way to how a camera focuses. The resolving power of a telescope is its ability to make distinct images of objects which are close to each other in angular separation. Our eyesight cannot resolve the distant headlights of a car into two separate sources of light until the approaching car is sufficiently close for the angular separation of the headlights to be greater than the resolving power of our eyes. Figure 22.10 shows how angular separation is defined by the angle two objects make with the observer. In the case of an astronomical telescope, two stars with a small angular separation may appear as one. A larger telescope with better resolving power may make the two stars become distinctly separate. This is because the larger the diameter of the primary light-gathering lens or mirror, the better the resolving power of the telescope. The resolving power, wavelength of light and diameter of the primary lens or mirror are related by the equation:

R = (2.1 × 10⁵ λ) / D

Where: R = the minimum angle of resolution (seconds of arc)
λ = the wavelength (m)
D = diameter of the telescope's primary mirror or lens (m)

Figure 22.9  The Parkes radio telescope
Figure 22.10  Angular separation diagram (the angle θ subtended at the observer by the two objects)


Figure 22.11  Sensitivity comparison photograph: (a) less sensitivity, (b) more sensitivity

As the wavelength of the observed light increases, the minimum angle of resolution for that instrument increases. As a consequence, blue light has a smaller angle of resolution than red light using the same telescope. Radio telescopes have a relatively poor resolving power compared to light telescopes. The introduction of interferometry, which links radio telescopes hundreds of kilometres apart, has increased the resolution of radio astronomy. Visible light wavelengths to which the human eye responds range from 380 nm to about 750 nm (i.e. 3.8 × 10⁻⁷ m to 7.5 × 10⁻⁷ m). Radio telescopes typically observe electromagnetic radiation with wavelengths of the order of millimetres to centimetres. The sensitivity of a telescope is a measurement of its light-gathering ability. This is directly proportional to the surface area used to collect the incoming light, either the objective lens for a refracting telescope or the primary mirror for a reflecting telescope. For radio telescopes, the sensitivity is proportional to the area of the dish, which is used to reflect the radio signals onto the receiver. As most mirrors and lenses are circular, a doubling of their diameter results in a four-fold increase in their surface area, and therefore in the sensitivity of the telescope. For any circle, A = πr². It follows that the sensitivity of a telescope is proportional to the square of the diameter of its primary lens or mirror. Figure 22.11 shows images taken through telescopes with different sensitivities. A more sensitive telescope will reveal stars with less brightness.
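These two relationships are easy to explore numerically. The short Python sketch below (the function names are our own, not from the syllabus) evaluates R = 2.1 × 10⁵ λ/D for blue and red light through the same telescope, and shows the four-fold sensitivity gain from doubling a mirror's diameter:

```python
def min_resolvable_angle(wavelength_m, diameter_m):
    # R = 2.1 x 10^5 * lambda / D, giving R in seconds of arc
    return 2.1e5 * wavelength_m / diameter_m

def relative_sensitivity(new_diameter_m, old_diameter_m):
    # Sensitivity is proportional to collecting area, i.e. to D squared
    return (new_diameter_m / old_diameter_m) ** 2

# Blue (400 nm) versus red (700 nm) light through a 0.2 m telescope:
print(min_resolvable_angle(400e-9, 0.2))   # ~0.42 arcsec
print(min_resolvable_angle(700e-9, 0.2))   # ~0.74 arcsec

# Doubling the diameter of the primary mirror quadruples the sensitivity:
print(relative_sensitivity(0.4, 0.2))      # 4.0
```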

Earth’s atmosphere limits ground-based astronomy n

Discuss the problems associated with ground-based astronomy in terms of resolution and absorption of radiation and atmospheric distortion

Ground-based astronomy has a number of limitations imposed by the effects of the atmosphere on observations, as discussed in previous sections. Radio and visible light, the wavebands that can be detected from the ground, defined the traditional forms of astronomy until the advent of satellite-based instruments. The remaining wavebands, gamma rays, X-rays, UV and infrared, are all extremely useful to astronomers, as they can reveal information and detail hidden from the radio and visible light wavebands. Seeing, the term for atmospheric distortion, limits the practical resolution possible for any telescope, no matter how large. A number of techniques are used to reduce this limitation (see the next section) but they are expensive and available only by utilising considerable computing power. Placing the telescope as high as possible, on a mountain, reduces atmospheric distortion in a number of ways. Having less atmosphere to penetrate by being placed at altitude results in less haze, pollution and water vapour (most of which is found in the lower few kilometres), which absorb light and infrared wavelengths. Often the site is above the clouds and it is affected by fog far less frequently than at lower altitudes. Also, the temperature tends to be more uniform in the atmosphere at higher altitudes, resulting in less seeing. Seeing is the result of warmer air pockets with a lower refractive index in surrounding


cooler air (or vice versa). Another effect of the atmosphere for ground-based astronomy is distortion of the colour of the light passing through it. At sunrise and sunset, sunlight must follow a longer path through the atmosphere (see Fig. 22.12). This effect is seen in an exaggerated form as the blue light (shorter wavelengths) is scattered by the atmosphere so that the longer wavelengths (i.e. red) dominate. Ground-based astronomers must take this into account when making observations, even when the subject is at a relatively high elevation.

22.5  Improving resolution
■ Outline methods by which the resolution and/or sensitivity of ground-based systems can be improved, including adaptive optics, interferometry and active optics

Figure 22.12  How light from a low elevation source passes through more of the atmosphere

The limit of the resolving power of a standard ground-based telescope is about one arc second. This can be achieved with a relatively small mirror. Simply using larger mirrors improves sensitivity but not resolution. The problems associated with ground-based systems, referred to previously, can be reduced using methods that utilise computerised control of mirror shape, computerised wavefront correction or an extended baseline utilising separate telescopes, or by simply placing the telescope at high-altitude locations. The details of these methods are outlined below.

Adaptive optics uses a bright light source, either a nearby star or a laser beam. The way in which the atmosphere affects this bright light source is analysed in real time by a wavefront sensor. Corrections to the distortions are fed into the telescope's rapidly adaptable mirror so that the observed image has the distortion caused by the atmosphere largely eliminated. As the corrections are made at around 1000 times per second, considerable computing power is required. In addition, expensive technology is needed to make the changes in the adaptable mirror.

Figure 22.13  Adaptive optics applied to an optical system (light from the target and from a reference beacon passes via a collimator to a deformable mirror and a dichroic beam splitter; a wavefront sensor feeds wavefront analysis and actuator control, while the camera, science instrument and image post-processing receive the corrected beam)
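The feedback idea can be illustrated with a toy simulation. The sketch below is purely illustrative (the 37-zone mirror, random-walk atmosphere and loop gain are assumptions, not real instrument parameters): on each cycle the wavefront sensor measures the residual distortion and the actuators remove a fraction of it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_zones = 37                    # assumed number of deformable-mirror zones
atmosphere = np.zeros(n_zones)  # wavefront distortion (arbitrary units)
mirror = np.zeros(n_zones)      # correction currently applied by the mirror
gain = 0.5                      # fraction of the residual removed per cycle

for cycle in range(1000):       # roughly one second at 1000 corrections/s
    # The atmosphere drifts slightly between cycles (random-walk model).
    atmosphere += rng.normal(0.0, 0.02, n_zones)
    # The wavefront sensor sees what is left after the current correction.
    residual = atmosphere - mirror
    # Actuator control nudges the mirror towards cancelling the residual.
    mirror += gain * residual

print("rms residual distortion:", np.std(atmosphere - mirror))
```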


Figure 22.14  The enhanced effect of adaptive optics (simulated): (a) view of Uranus through a conventional Earth-based telescope; (b) the Hubble Space Telescope; (c) the Hawaiian Keck Telescope fitted with adaptive optics

Figure 22.15  An example of an image of a double star taken without adaptive optics (left image) and through the same telescope using adaptive optics (right image)

Figure 22.16  The twin Keck telescopes in Hawaii

Active optics has some similarities with adaptive optics in that it too employs a wavefront sensor to detect distortion in the collected light. However, unlike adaptive optics, active optics uses a slower feedback system that corrects deformities in the primary mirror of the telescope. These deformities can be caused by sagging under its own weight as it is moved to different positions, and by differences in temperature causing expansion and contraction in the mirror itself. Prior to active optics, mirrors were constructed using thick glass to avoid deforming, but this added to their weight and to the cost of the mounting and the mirror itself. Such thick mirrors were limited to about 5 to 6 m in diameter. With active optics, thinner, cheaper and lighter mirrors can be used, typically 8 m or more in diameter. Active optics is now used in most new telescope mirrors, such as the 10 m Keck telescopes in Hawaii.

Interferometry is a technique applied to radio astronomy. The wavelength of radio waves (typically from 1 mm to several hundred kilometres) is many times greater than visible light (400 to 800 nm). This means that radio telescopes would need to have dishes hundreds of kilometres in diameter to match the resolution of large optical telescopes. This is clearly not practical. In order to overcome this limitation, two or more radio telescopes are linked by computers, which combine the incoming signals from the separate telescopes to produce an interference pattern. This is then analysed further and converted into an image with a resolution approaching those of the largest optical telescopes. The Square Kilometre Array (SKA) is a system of about 80–100 separate radio telescopes, costing approximately AU$1.8 billion. It is planned to be built either in Australia or South Africa. Interferometry will enable the SKA to have the equivalent resolution of a single dish hundreds of kilometres in diameter by placing several receivers hundreds of kilometres from the main group. Its sensitivity will be 100 times greater than any other radio telescope due to the total receiving area being about 1 million square metres (1 square kilometre).
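The resolution formula from section 22.3 shows why single radio dishes cannot compete. Rearranged for D, it gives the aperture, or equivalently the interferometer baseline, needed for a chosen resolution. A rough, illustrative calculation:

```python
def required_diameter(wavelength_m, resolution_arcsec):
    # Rearranging R = 2.1e5 * lambda / D gives D = 2.1e5 * lambda / R
    return 2.1e5 * wavelength_m / resolution_arcsec

# A 21 cm radio wavelength observed at 1 arcsec resolution would need a
# dish (or baseline) of about 44 km -- hence the need to link telescopes:
print(required_diameter(0.21, 1.0))   # ~44 100 m
```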


The relationship between the size of the instrument and sensitivity

first-hand investigation
PFAs: H1
Skills outcomes: H11.1b, e; H11.2b, c, d, e; H11.3a, b, c; H12.1a, b, d; H12.2a, b; H12.4a; H14.1a, c

■ Identify data sources, plan, choose equipment or resources for, and perform, an investigation to demonstrate why it is desirable for telescopes to have a large diameter objective lens or mirror in terms of both sensitivity and resolution

This investigation can be performed in several ways. If two telescopes with considerably different objective lens or mirror diameters are available, viewing the same region of the night sky (if possible) will reveal far more stars through the larger telescope due to its greater sensitivity. Viewing a planet that has observable detail (Jupiter with its red spot or Saturn with its rings) will reveal greater detail through the larger telescope due to its better resolution. A second way avoids the need for a night excursion. It requires an observer to compare the clarity of detail of objects in the far distance viewed through two pairs of binoculars of different sizes. This method only provides a qualitative comparison of resolution. In a third method, the resolving powers of a small telescope, binoculars and the unaided eye are compared. This is done using a chart with two black rectangles close together. The distances at which the gap between the rectangles on the chart can be discerned when viewed through each device, or with unaided eyes, represent the differences in the resolving powers (estimated in the sketch below). Yet another way in which this investigation can be approached is by using internet search engines to obtain images of the same region of the sky taken through telescopes with different diameters. Search terms such as 'resolution', 'objective lens size' and 'telescope mirror diameter' are starting points. These would become the resources used in the investigation.
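For the third method, the viewing distances can be predicted before the experiment. Assuming a minimum resolvable angle of about 60 arcsec for the unaided eye and about 2 arcsec for a small telescope (typical textbook figures, not measured values), the small-angle formula gives the distance at which a given gap should just be discernible:

```python
import math

ARCSEC_IN_RADIANS = math.pi / (180 * 3600)

def discernible_distance(gap_m, resolution_arcsec):
    # Distance at which a gap subtends the minimum resolvable angle
    return gap_m / (resolution_arcsec * ARCSEC_IN_RADIANS)

# A 5 mm gap between the two black rectangles on the chart:
print(discernible_distance(0.005, 60))   # unaided eye: ~17 m
print(discernible_distance(0.005, 2))    # small telescope: ~520 m
```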

Useful websites


NASA’s deep space tracking facility in Canberra with links to many space observatories and probes: http://www.cdscc.nasa.gov/Pages/pg03_trackingtoday.html Use of radio frequencies in radio astronomy: http://www.nfra.nl/craf/freq.htm The Narrabri radio telescope facility: http://www.narrabri.atnf.csiro.au/public/atca_live/atca_live.html

chapter revision questions

1. (a) Outline the adverse effects that the Earth's atmosphere has on astronomical observations made in wavebands other than light.
   (b) Outline how these adverse effects are overcome or avoided in modern astronomy.
2. Describe quantitatively the changes in a telescope's resolution and in its sensitivity when its 20 cm diameter objective lens is upgraded to a 40 cm diameter lens.
3. Explain the purpose of adaptive optics and how it overcomes some of the problems caused by the atmosphere.
4. Compare the techniques of adaptive optics and active optics in improving astronomical observations.
5. Give reasons why interferometry is widely used in the field of radio astronomy.
6. Undertake further research into future developments regarding interferometry and its application in astronomy.
7. Research several orbiting satellites and for each, describe the waveband being observed, its purpose, current status and achievements.
8. Discuss the costs involved in making modern astronomical observations against the benefits to society (knowledge, technological gains, etc.).


CHAPTER 23

Astrometry: finding the distance to stars
Careful measurement of a celestial object's position in the sky (astrometry) may be used to determine its distance

23.1  Measuring distances in space
■ Define the terms parallax, parsec and light-year
■ Explain how trigonometric parallax can be used to determine the distance to stars

Due to the vast distances in space, the conventional unit of length, the metre, requires the use of powers of 10 that make the numbers too large to readily comprehend. Consequently, alternative length units that measure much larger distances are used by astronomers and in general language. The light-year is the distance light travels in one Earth year. A brief calculation yields this distance in metres:

1 l.y. = speed of light × 1 Earth year
= 3.0 × 10⁸ m s⁻¹ × 365 × 24 × 60 × 60 s
= 9.5 × 10¹⁵ m

The light-year is used for popular astronomy articles and discussions; however, another unit, the parsec, is also used by amateur and professional astronomers. One parsec is 3.26 light-years. The word parsec is derived from its origins: 'par' from 'parallax'; 'sec' from 'arcseconds', or one second of a degree of arc.

Parallax is the way in which a closer object seems to move against a distant background when the observing position moves. Holding one's finger out at arm's length when looking at a more distant background and then closing one eye at a time causes an apparent movement of the finger against the background. Relatively close stars exhibit a similar (but much smaller) apparent movement against the background of distant stars and galaxies as the Earth orbits the Sun. This movement is too small to be noticed except by careful comparison of photographic or photometric records taken at different times of the year. A simple formula is applied to calculate the distance to the star once its parallax angle is measured:

Figure 23.1  The geometry of trigonometric parallax measurement (the parallax angle p″ subtended at a nearby star by the radius of Earth's orbit around the Sun, measured against the background stars)


d = 1/p

Where: d = the distance in parsecs
p = the star's parallax angle in arcseconds

The parallax angle used to determine distance in parsecs is half the maximum parallax angle that a star may exhibit over a six-month period.
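A short sketch of the conversion (the variable names are our own, and the parsec-to-light-year factor is taken from the definition above):

```python
PARSEC_IN_LIGHT_YEARS = 3.26   # one parsec is 3.26 light-years

def distance_parsecs(parallax_arcsec):
    # d = 1/p, with p in arcseconds and d in parsecs
    return 1.0 / parallax_arcsec

# A star with a measured parallax angle of 0.10 arcsec:
d = distance_parsecs(0.10)
print(d)                            # 10 parsecs
print(d * PARSEC_IN_LIGHT_YEARS)    # about 32.6 light-years
```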

Figure 23.2  The Hipparcos Observatory

23.2  The limitations of trigonometric parallax
■ Discuss the limitations of trigonometric parallax measurements

Limits in the resolution of telescopes due to seeing (the blurring effect of the Earth's atmosphere) mean that parallax angles of less than 0.01″ (arcsec) cannot be measured with errors of less than 10%. This places an effective limit of 100 parsecs (as d = 1/p) on the distance measuring capability of trigonometric parallax from ground-based observers. Other factors, such as the refraction of starlight by the atmosphere, may cause errors in the position of the star that must be corrected. As trigonometric parallax is used by astronomers to calibrate other distance measuring techniques, this is quite a severe limitation. As our Milky Way galaxy has a diameter of approximately 45,000 parsecs, either better resolution or a different but equally reliable distance measuring technique is required. In the late 1980s and early 1990s, the Hipparcos Observatory, unaffected by seeing, was able to measure the parallax angles of around 2.5 million stars with a precision of less than 0.001″, increasing the distance measured to the furthest stars to about 1000 parsecs.

The European Space Agency is planning to launch the Gaia Observatory in late 2010 or 2011. The twin 1.45 m diameter telescopes on board are expected to be able to measure the distance to over one billion stars to within an accuracy of 20% at a distance of about 10,000 parsecs. Its observations are expected to be 100 times more accurate than those of Hipparcos. The Gaia Observatory will be placed in an orbit around the Sun at L2, a point in space directly on the opposite side of the Earth from the Sun, where it will be protected from the Sun's glare and will follow the Earth in its orbit. L2 is a Lagrange point, one of several where a spacecraft will remain stationary with respect to the Earth due to the balancing of gravity. In the case of L2, the gravity of the Earth and the Sun combine so that the Gaia Observatory will orbit the Sun with the Earth, but with a slightly larger orbital radius.


Figure 23.3  The Gaia Observatory, planned for launch in 2010 or 2011

Useful websites
The European Space Agency website for the Gaia mission: http://www.esa.int/esaSC/120377_index_0_m.html
NASA's website for the SIM (Space Interferometry Mission), proposed for launch in 2009: http://planetquest.jpl.nasa.gov/SIM/sim_facts.cfm

Solving problems: Using trigonometric parallax to find the distance to a star
■ Solve problems and analyse information to calculate the distance to a star given its trigonometric parallax using: d = 1/p

Sample question 1
What is the calculated distance to the sixth star in the Hipparcos catalogue, which has a parallax angle of 18.80 milliarcseconds?

Sample answer
Using d = 1/p:
d = 1 / (18.80 × 10⁻³)
= 53.19 parsecs

Sample question 2
The red giant star Betelgeuse is 130 parsecs away. Could the distance to Betelgeuse be calculated using ground-based parallax methods?

Sample answer
The parallax angle of Betelgeuse is found using d = 1/p, so that:
p = 1/d
= 1/130
= 0.0077 arcsec

As this parallax angle is less than 0.01″, it is too small to be measured with acceptable accuracy from ground-based instruments.


The relative limits of ground-based and space-based trigonometric parallax

secondary source investigation
PFAs: H1, H5
Physics skills: H12.3a, b, c, d; H12.4b

■ Gather and process information to determine the relative limits to trigonometric parallax distance determinations using recent ground-based and space-based telescopes

Useful website
This website from the Australia Telescope Outreach and Education facility gives useful information. In summary, it states that ground-based observations are limited to distances of about 40 parsecs due to atmospheric distortion, while the Hipparcos satellite was capable of determining distances to about 1000 parsecs: http://outreach.atnf.csiro.au/education/senior/astrophysics/parallaxlimits.html
Other useful information can be found by using search terms such as 'parallax limitations' in your favourite search engine.

chapter revision questions

1. Outline the process of trigonometric parallax.
2. Describe the effect on the measurements made using trigonometric parallax if the radius of Earth's orbit were doubled.
3. Explain why the Hipparcos Observatory has been so successful in measuring the distance to stars.
4. Compare the distances between one a.u. (astronomical unit), one light-year and one parsec by using a scale diagram.
5. Describe the benefits of launching observatories that will have improved resolution when making trigonometric parallax measurements.
6. 'A light-year and a parsec will be different for a civilisation on a planet in another solar system.' Why would this be so?
7. Find the distance in parsecs to a star with a parallax angle of:
   (a) 0.05 arcsec
   (b) 0.230″
   (c) 0.008 ± 0.002″


CHAPTER 24

Spectroscopy: analysing the spectra of stars
Spectroscopy is a vital tool for astronomers and provides a wealth of information

24.1  Producing spectra
■ Account for the production of emission and absorption spectra and compare these with a continuous black body spectrum

A spectrum is observed by allowing the light from a source to pass through a device that spreads the wavelengths apart. A triangular prism used for the dispersion of white light into the colours of the rainbow, and indeed the raindrops that produce rainbows, are examples of the production of a spectrum. The human eye perceives a combination of colours and wavelengths as one resultant colour, or white if the right combination of colours is present. This is why spectra cannot be observed with the human eye alone. Emission spectra are produced when a body of low pressure gas atoms is heated or energised ('excited') by other means such as a strong electric field. Electrons in the atoms absorb the energy and 'jump' to a higher energy level.

Figure 24.1  White light is dispersed into its component colours by a triangular prism

Niels Bohr, in his model of the atom, described the allowable orbits of electrons in terms of energy levels that electrons could ‘jump’ between. When moving up to higher energy levels, the electron would absorb an amount of energy equal to the energy difference between the two levels. The ‘excited’ atom will return to its normal ‘ground’ state when the electron loses energy by emitting a photon of light and ‘falling’ to a lower allowed energy level. The frequency of the emitted photon is determined by the equation E = hf, where E is the energy difference between the two allowed energy levels the electron moves between, and h is Planck’s constant. As the allowed energy levels are fixed for a particular element, only certain frequencies, characteristic of that element, can be emitted. The release of the absorbed energy by an electron occurs only at certain frequencies so that the observed spectrum has bright lines against a dark background (see Fig. 24.4a). Useful emission spectra sources include gas discharge tubes, fluorescent light tubes, and sodium or mercury vapour street lights. Absorption spectra are produced when electrons in atoms, ions or molecules in the atmosphere of a star absorb radiation at set wavelengths. The absorbed


wavelengths are determined by the differences in the energy levels that the electrons jump between. The absorbed wavelengths, originally emitted from the core of the star, are re-emitted very soon after they are absorbed. Only a fraction of the re-emitted radiation is in the original direction (from the core). As the core of a star produces a continuous spectrum due to the black body radiation from such a high-temperature source, the wavelengths which have been absorbed and then re-emitted in all directions appear as dark lines against the bright continuous spectrum, as shown in Figure 24.4 (b). Figure 24.5 shows the mechanism occurring in a star that produces an absorption line in an absorption spectrum.

Figure 24.2  A low-pressure sodium vapour street light
Figure 24.3  The source of emission spectra (an electron jumps up an energy level (1) when it absorbs energy; it releases the energy as a photon of light (2) with a set frequency when it returns to its original energy level (3))
Figure 24.4  (a) Emission spectra for hydrogen, mercury and neon; (b) an absorption spectrum for hydrogen (wavelengths from 400 to 700 nm shown)
Figure 24.5  The production of an absorption line within a star (an absorbing electron in the star's atmosphere re-emits radiation in all directions against the continuous spectrum from the core)

Continuous spectra are produced from hot bodies, like the tungsten filament in an incandescent light globe. (For more about black body radiation, see Chapter 11.) The core of a star, a region of dense nuclei heated to many million kelvin, is also a source of a continuous spectrum. As the temperature of the body increases, the peak wavelength of the radiation becomes shorter, and the amount of energy emitted increases in proportion to the temperature in kelvin to the power of 4 (T⁴). This causes the colour of the object to change from red through to orange, yellow and then white.

Figure 24.6  A continuous spectrum (400–700 nm)

When the peak wavelength of a continuous spectrum corresponds to green, the object appears white, not green. This is because at such a temperature, there is a significant amount of blue being emitted. Our eyes perceive this colour mixture as white.
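The fixed frequencies of an emission spectrum follow directly from E = hf. As an illustration, using the Bohr energy levels of hydrogen, Eₙ = −13.6/n² eV (a result assumed here rather than derived in this chapter), the jump from the third to the second level reproduces hydrogen's red emission line seen in Figure 24.4 (a):

```python
h = 6.626e-34    # Planck's constant (J s)
c = 3.0e8        # speed of light (m s^-1)
eV = 1.602e-19   # joules per electronvolt

def emitted_wavelength(n_upper, n_lower):
    # The photon energy is the difference between the two allowed levels;
    # E = hf and c = f * lambda together give lambda = hc/E.
    energy = 13.6 * eV * (1 / n_lower**2 - 1 / n_upper**2)
    return h * c / energy

print(emitted_wavelength(3, 2))   # ~6.6e-7 m: hydrogen's red line
```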

24.2  Measuring spectra
■ Describe the technology needed to measure astronomical spectra

There are three basic components required to measure astronomical spectra. First, the light must be gathered, so a telescope is required. Second, the collected light must be dispersed using a spectroscope. A diffraction grating is used to do this (acting in a similar way to a glass prism). The light will reflect off the diffraction grating at different angles, according to its different wavelengths. With the light separated into its range of wavelengths, the third requirement is a detecting or recording device. The first spectra were simply observed with the naked eye and recorded by hand. Photographic plates were able to make a permanent record of stellar spectra; however, for every 100 photons incident on photographic film, only one is captured and converted into the image. Charge-coupled devices (CCDs) are far more efficient, converting 80%–90% of photons into the recorded image. This means that less exposure time is required to obtain the spectra, and smaller telescopes can be used. This information is shown in a flow chart in Figure 24.7.

Figure 24.7  The requirements to obtain stellar spectra (collect the light using a telescope; disperse the light using a diffraction grating to form a spectrum; record and analyse the spectrum)

24.3  Stellar objects and types of spectra
■ Identify the general types of spectra produced by stars, emission nebulae, galaxies and quasars

Emission spectra are observed from many nebulae. These are interstellar gas clouds at very low pressures heated by a nearby star. The heated gas molecules and atoms emit light at certain frequencies in a similar fashion to the gas in a fluorescent light. Quasars are also sources of emission spectra. Quasars are very distant, very luminous objects, thought to be huge black holes at the centres of galaxies consuming the surrounding material. They are as luminous as hundreds of normal galaxies. The material being accelerated by the black hole's extreme gravity produces the emission spectra.


Quasars (quasi-stellar objects) were first believed to be stars; however, the red-shift evident in their spectra showed that they are receding from us at such fast speeds that some of them must be around 13 billion light-years away, near the edge of the observable Universe. Quasars may emit energy equal to that of many thousands of galaxies, making them extraordinarily luminous objects. The true nature of the approximately 100,000 known quasars is still uncertain. Absorption spectra are produced by stars. The relatively cooler outer layers of gases in a star's atmosphere are responsible for absorbing particular frequencies of the light coming from the star's core. The gaseous atoms that have absorbed a photon of light then re-emit the light at the same frequencies, but in all directions. To an observer, these re-emitted frequencies appear dark against a bright continuous background. Some stars, known as Wolf-Rayet stars, exhibit emission spectra. It is thought that these stars do not have a cooler outer atmosphere to absorb frequencies, so the spectrum observed comes directly from the radiative, inner layers. These stars are quite rare. Galaxies appear to emit a continuous spectrum, a result of the combination of the emission spectra from interstellar nebulae and the absorption spectra produced by the hundreds of billions of stars in the galaxy. However, closer analysis reveals that galaxies not actively producing new stars show absorption lines (especially calcium and magnesium) but have little or none of the emission lines produced by nebulae. Younger galaxies still producing new stars also show emission lines.

Table 24.1  A summary of objects and the spectra types they produce

- Stars: absorption spectrum. Absorption occurs in the atmosphere of the star.
- Emission nebula: emission spectrum. Produced by the heating of low-density gases by nearby stars.
- Quasars: emission spectrum. Possibly from matter being accelerated into a massive black hole.
- Galaxies: continuous spectrum. May be absorption or emission depending on the abundance of nebulae in the galaxy.

24.4  The key features of stellar spectra
PFA H2: 'Analyses the ways in which models, theories and laws in physics have been tested and validated'

■ Describe the key features of stellar spectra and describe how these are used to classify stars

Background

The classification of stars by their spectra began before astronomers fully understood the link between the patterns within the spectra and the surface temperature of the star.

How has this model evolved over time?

In 1814 Joseph Fraunhofer carefully studied the hundreds of absorption lines present in the Sun's spectrum. The original classification system used the letters A through to O based on the strength of the hydrogen absorption lines present (the Balmer series). It happens


that a surface temperature of about 10 000 K produces the strongest Balmer series absorption lines. These stars were assigned the spectral type A on this basis. Cooler red stars exhibit very weak Balmer series lines in their spectra, and were assigned the letter M for their spectral type. However, very hot stars have no discernible Balmer series lines. These stars were assigned the letter O. The reason for the lack of hydrogen lines in such stars is that at such temperatures (above 20 000 K), hydrogen is completely ionised. That is, the electron in hydrogen responsible for the production of the hydrogen lines in the spectrum is no longer associated with the nucleus of the hydrogen atom (a single proton). It exists as a free electron. Subsequent black body radiation experiments and laboratory observations of hot gas spectra enabled astronomers to match the spectral types to the surface temperatures of stars. The previous alphabetical order was found to be in need of a complete overhaul. Rather than re-assign all the letters, the spectral types were simply placed in order from hottest to coolest and simplified to eliminate overlapping and confusing spectra. This work was mainly done at Harvard University from 1918 to about 1924. The re-organised order, O B A F G K M, is still used today in the Hertzsprung-Russell diagram, a useful tool used by astronomers to assist in the classification of stars.

Where to from here?


The advent of infrared astronomy, possible with satellite-based telescopes, has led to recent modification of the spectral types classification. Stars previously too cool to classify (as they were not detectable by light telescopes) are now assigned the spectral types R, N or S, depending on the elements present in their spectra. Other additions include the WR (Wolf-Rayet) and the T (T Tauri) categories.

Useful website
Further reading on stellar spectra: http://www.shef.ac.uk/physics/people/pacrowther/spectral_classification.html

The key features of a star's spectrum used by astronomers when classifying the star include the appearance and intensity of spectral lines, the relative thickness of certain absorption lines, and the wavelength at which peak intensity occurs. The apparent colour of the star is determined by its surface temperature, as shown in Figure 24.8.

Table 24.2  Relationship between colour and surface temperature of stars

Spectral class | Effective temperature (K) | Colour       | H Balmer features | Other features
O              | 28 000–50 000             | Blue         | Weak              | Ionised He+ lines, strong UV continuum
B              | 10 000–28 000             | Blue-white   | Medium            | Neutral He lines
A              | 7 500–10 000              | White        | Strong            | Strong H lines, ionised metal lines
F              | 6 000–7 500               | White-yellow | Medium            | Weak ionised Ca+
G              | 4 900–6 000               | Yellow       | Weak              | Ionised Ca+, metal lines
K              | 3 500–4 900               | Orange       | Very weak         | Ca+, Fe, strong molecules, CH, CN
M              | 2 000–3 500               | Red          | Very weak         | Molecular lines, e.g. TiO, neutral metals

Useful website
This site is an interactive black body radiation simulator showing a continuous spectrum for the chosen black body temperature: http://webphysics.davidson.edu/Applets/java11_Archive.html

Table 24.3  Table of luminosity classes with examples

Symbol  | Class of star                 | Example
0       | Extreme, luminous supergiants |
Ia      | Luminous supergiants          | Betelgeuse
Ib      | Less luminous supergiants     | Antares
II      | Bright giants                 | Canopus
III     | Normal giants                 | Aldebaran
IV      | Sub-giants                    | Procyon
V       | Main sequence                 | Sun
sd      | Sub-dwarfs                    | Kapteyn's Star
wd or D | White dwarfs                  | Sirius B

Note: A 'white dwarf' is the remnant of a star in the final stages of cooling down after its nuclear fuel has been depleted. Despite their relatively high surface temperatures, white dwarfs are very dim due to their small size, about the same as the Earth's. A white dwarf is no longer fusing nuclei in its core, unlike 'dwarf' and 'sub-dwarf' stars, which are so named due to their comparatively small size.

The location of the various luminosity classes is shown on a Hertzsprung-Russell diagram in Figure 24.10.

Figure 24.10  A Hertzsprung-Russell diagram with luminosity classes shown (absolute magnitude, −10 to +10, plotted against spectral type O B A F G K M and temperature; regions marked include blue giants, red supergiants (I), red giants (II, III), subgiants (IV), the main sequence with the Sun, and white dwarfs)

24.5  The information about a star from its spectrum

■ Describe how spectra can provide information on surface temperature, rotational and translational velocity, density and chemical composition of stars

Surface temperature
The surface temperature of a star can be determined in two ways, both by examining the star's spectrum. By studying the absorption lines in the spectrum and comparing the pattern and intensity against reference stellar spectra, the star can be assigned a spectral class and a corresponding surface temperature. Alternatively, the intensity versus the wavelength of the radiation being emitted by the star is plotted. The wavelength at which the intensity is greatest (peak intensity wavelength) is then used in Wien's law to determine the effective surface temperature of the star:

T = W/λmax

Note: This equation is not given in the syllabus.

Where:
λmax = the peak intensity wavelength (m)
T = the effective surface temperature of the star (K)
W = Wien's constant (2.9 × 10⁻³ m K)

What is the 'surface' of a star if a star is a ball of gas? A layer of a star called the photosphere is a region where the temperature has cooled sufficiently for light to be produced. The more massive a star, the hotter its photosphere. The photospheres of stars range in temperature from a few thousand kelvin to 50 000 K. The temperature here is often referred to as the star's 'effective' surface temperature, acknowledging that a star does not have an actual surface in the same way a solid planet has. Molecules, elements and ions within the photosphere give rise to the star's spectral features.

Rotational and translational velocity
The relative velocity of a star either approaching or moving away from an observer can be measured by the blue shift or red shift exhibited in the star's spectrum. The Doppler effect is the shortening of the wavelength of the light from a source that is approaching an observer and the lengthening of wavelengths from sources moving away (see Figure 24.11). Ordinarily, the Doppler effect may be noticed when an emergency vehicle passes with its siren on: the relative speed of the vehicle causes an increase in the pitch of the siren as it approaches and a decrease after it passes. This effect is also very noticeable for racing cars, as their high speed is a significant fraction (almost one-third) of the speed of sound.

Figure 24.11  An example of red shift evident in a star's spectrum

Useful websites
A Doppler effect simulation for the effect on sound waves can be found at these two sites:
http://galileo.phys.virginia.edu/classes/109N/more_stuff/flashlets/doppler.htm
http://www.shep.net/resources/curricular/physics/java/physengl/dopplerengl.htm

If a star is rotating, one side is moving towards us while the other side is moving away, as shown in Figure 24.12. This results in the absorption lines within the spectrum being both red and blue shifted simultaneously, so that they appear broader than expected. Careful measurement of the amount of broadening, along with an estimation of the size of the star, can lead to the calculation of the rotational velocity of the star. If a star is moving directly towards or away from us, it will not change its position relative to other stars; however, its motion can be detected by the red or blue shift of its spectral lines. Again, the amount of red or blue shift can be measured in order to calculate the star's translational velocity. Figure 24.11 shows how the spectrum of a star moving away from us would be shifted.

Figure 24.12  How a rotating star has its spectral lines broadened due to the Doppler effect: light from the approaching side of the star is blue shifted, light from the receding side is red shifted, and light from the centre (moving side-on to the observer) shows no shift
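The measured shift can be converted to a velocity using the standard non-relativistic Doppler relation v = cΔλ/λ. This relation is not given in the text or the syllabus; the sketch below, with an invented hydrogen-alpha measurement, simply illustrates the calculation.

```python
# A minimal sketch (relation and example values assumed, not from the text):
# radial (translational) velocity from the Doppler shift of one spectral line,
# using the non-relativistic approximation v = c * (delta_lambda / lambda_rest).

C = 3.0e8  # speed of light (m/s)

def radial_velocity(lambda_observed_nm: float, lambda_rest_nm: float) -> float:
    """Positive result = red shift (star receding); negative = blue shift."""
    return C * (lambda_observed_nm - lambda_rest_nm) / lambda_rest_nm

# Hypothetical measurement: the hydrogen-alpha line (rest wavelength 656.3 nm)
# observed at 656.5 nm in a star's spectrum.
print(f"v = {radial_velocity(656.5, 656.3):.0f} m/s")  # about 91 000 m/s, receding
```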

Density
It is very useful to know the density of a star's atmosphere. The largest supergiant stars have the lowest densities, while main sequence stars have higher densities. Finding the luminosity class of a star and its spectral type allows for a very good estimation of the star's absolute magnitude, from which its distance can be calculated using spectroscopic parallax.

Lower density stellar atmospheres produce sharper, narrower spectral lines. This is due to the motion of the atoms and ions that are absorbing the radiation and producing the lines: the particles in lower density gases travel further between collisions with other particles, and the absorption lines they produce are sharper. As giant stars have weaker gravity near their surface, the pressure is also lower near the surface where the absorption spectrum is being produced, so their spectral lines are finer.

Chemical composition

Figure 24.13  Fraunhofer lines visible in the Sun’s spectrum

Stars similar to the Sun have many elements in small quantities in their atmospheres. Each of these elements produces its own characteristic spectral lines. Calcium, potassium and iron are three such elements. Fraunhofer lines, named after Joseph Fraunhofer, who carefully observed the Sun’s absorption lines in the 1800s, are due to the many elements absorbing radiation at particular wavelengths. Matching the absorption lines found in a star’s spectrum with absorption lines produced by an element under laboratory conditions verifies the existence of that element in the star’s atmosphere. The relative intensity of the absorption lines indicates the abundance of that element.
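The matching process described above is essentially a comparison of two lists of wavelengths within some measurement tolerance. A small sketch of that logic follows; the reference wavelengths are rounded, illustrative values, not precise laboratory data.

```python
# Sketch of the line-matching logic described above. Reference wavelengths (nm)
# are illustrative approximations, not precise laboratory values.

LAB_LINES_NM = {
    "hydrogen": [656.3, 486.1, 434.0],
    "calcium":  [393.4, 396.8],
    "iron":     [438.4, 527.0],
}

def identify_elements(observed_nm, tolerance_nm=0.5):
    """Return elements whose laboratory lines all appear (within tolerance)
    among the absorption lines observed in the star's spectrum."""
    return [element for element, lines in LAB_LINES_NM.items()
            if all(any(abs(obs - line) <= tolerance_nm for obs in observed_nm)
                   for line in lines)]

# Hypothetical observed absorption lines:
observed = [656.2, 486.3, 434.1, 393.5, 396.7]
print(identify_elements(observed))  # ['hydrogen', 'calcium']
```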

Examining spectra

■ Perform a first-hand investigation to examine a variety of spectra produced by discharge tubes, reflected sunlight, or incandescent filaments

first-hand investigation: physics skills H12.1 a, b, d; H12.2 a, b; H14.1 e, f, g

This investigation is best performed using a hand-held spectroscope.

Method
1. In a darkened room, set up discharge tubes filled with a variety of different gases (sodium and mercury are two that are highly suitable).
2. Observe the spectra produced by each discharge tube, and sketch your observations.
3. Next, observe the spectrum produced by a fluorescent light, and compare it with those previously observed from discharge tubes.
4. Observe the spectrum produced from an incandescent globe (without other sources of light present) and contrast this spectrum with those previously observed in steps 2 and 3. This should be repeated using different voltage settings on the power supply, noting the effects of the change in temperature on both the intensity and the range of colour.
5. Finally, go outside and observe the spectrum from reflected sunlight. Never point the spectroscope towards the Sun—damage to your retina may result! (You must ensure that the spectroscope is carefully focused for this part.) Compare and contrast this spectrum with all of those previously observed and relate the nature of each spectrum to how it is produced.

TR: Risk assessment matrix

Figure 24.14  An example of the emission spectrum produced by a mercury discharge tube


secondary source investigation
Predicting a star's surface temperature from its spectrum

■ Analyse information to predict the surface temperature of a star from its intensity/wavelength graph

An intensity versus wavelength graph such as the one shown in Figure 24.15 shows the relationship between the surface temperature of a black body and the wavelength of the peak intensity of the radiation being emitted.

Useful website
This website with user inputs for temperature shows black body radiation intensity curves: http://webphysics.davidson.edu/Applets/java11_Archive.html

Figure 24.15  Intensity versus wavelength for black bodies at different temperatures (curves shown for 6 000 K, 5 000 K, 4 000 K and 3 000 K)

Figure 24.16  An intensity versus wavelength plot for the Sun (wavelength in nm, spanning ultraviolet, visible (400–700 nm) and infrared)

The intensity versus wavelength plot for a star can be compared to a given graph such as that shown in Figure 24.16. Another method is to use Wien's law, first developed in the 1890s, which is:

λmax = W/T

Note: This equation is not listed in the syllabus and it is not necessary to memorise it.

Where:
λmax = the wavelength of peak intensity (m)
W = Wien's constant (2.898 × 10⁻³ m K)
T = the star's surface temperature (in K)

The peak intensity wavelength for a star is observed and the equation applied.

Example
The peak intensity of radiation from a star being observed has a wavelength of 580 nm. What is the surface temperature of this star?

Solution
λmax = W/T
T = W/λmax
  = (2.898 × 10⁻³)/(580 × 10⁻⁹)
  = 5.00 × 10³ K
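The worked example is easy to check numerically. This short sketch applies T = W/λmax directly, using the constant and wavelength from the example.

```python
# Checking the worked example with Wien's law, T = W / lambda_max.

W = 2.898e-3  # Wien's constant (m K)

def surface_temperature(lambda_max_m: float) -> float:
    """Effective surface temperature (K) from the peak intensity wavelength (m)."""
    return W / lambda_max_m

print(f"T = {surface_temperature(580e-9):.0f} K")  # 4997 K, i.e. ~5.00e3 K
```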

chapter revision questions
1. Describe ways in which an absorption spectrum is similar to an emission spectrum and to a continuous spectrum.
2. How is a continuous spectrum produced?
3. The spectrum from a distant galaxy appears to be a continuous spectrum. Why?
4. The individual spectral lines within a star's spectrum all appear slightly shifted towards the red end. What does this tell us about the relative motion of this star?
5. Outline the information that can be found from the analysis of a star's spectrum.
6. A star, which has been observed for many years, does not seem to be moving relative to nearby stars; however, its spectral lines are shifted towards the blue end of the spectrum. What does this tell us about the motion of the star relative to us?
7. Use a diagram to explain how the spectral lines of a rotating star are red and blue shifted simultaneously.
8. How would the spectrum of a star with an atmosphere rich in metallic elements differ from that of a star with a very low or non-existent abundance of metallic elements?
9. How would the peak wavelength of radiation emitted from a red star be different from the peak wavelength emitted by a white star?
10. Outline how the density of a star, and hence its luminosity class, can be found from its spectrum.

SR: Answers to chapter revision questions

CHAPTER 25

Photometry: measuring starlight
Photometric measurements can be used for determining distance and comparing objects

25.1  Stellar magnitude

■ Define absolute and apparent magnitude

The present scale of magnitude, used to measure the brightness of stars, has its origin as far back as the Greek astronomer Hipparchus (190–120 BC). Hipparchus surveyed more than 800 stars and, based upon their apparent brightness in the night sky, assigned each a number: number one for the brightest stars, through to number six for the faintest visible to the naked eye. It was not until the 1800s that it was realised the human eye detects light in a logarithmic fashion, and international astronomers defined a magnitude 1 star as being 100 times brighter than a magnitude 6 star. Mathematically, this results in a magnitude difference of one corresponding to a brightness ratio of 2.512, as the fifth root of 100 is 2.512.

Hipparchus's modified magnitude scale has now been extended to include the faintest objects (observed through powerful telescopes such as the Hubble Space Telescope) through to the brightest (the Sun). Some well-known objects and their magnitudes as observed from Earth are shown in Table 25.1. These magnitudes are called apparent as the brightness is observed from Earth. The brightness of a stellar object is dependent on its temperature, surface area and distance (inverse square law).

Definition
The apparent magnitude of an object is how bright it appears from the Earth using the magnitude scale.

Table 25.1  Some well-known objects and their apparent magnitudes

Object                                               | Apparent magnitude
Sun                                                  | −27
Venus (at its brightest)                             | −4.4
Alpha Centauri                                       | −0.27
Titan (Saturn's largest moon)                        | +8
Faintest star visible using an Earth-based telescope | +26
Faintest object detected by Hubble Space Telescope   | +30 (approx.)

A star’s distance is a major factor in determining its brightness, and hence its apparent magnitude. Another magnitude, known as absolute magnitude is used to compare the luminosity (i.e. total light energy being emitted) of astronomical objects. To make this comparison fair, a standard distance of 10 parsecs is used.


Definition

The absolute magnitude of an object is how bright it would appear to be if placed at a distance of 10 parsecs, using the same magnitude scale as that used for apparent magnitude.

Table 25.2 shows the apparent and absolute visual magnitudes of some stars.

Table 25.2  The apparent and absolute visual magnitudes of some stars

Star                                   | Apparent magnitude | Absolute magnitude
Sun                                    | −27                | +4.8
Sirius (next brightest star)           | −1.4               | +1.4
Betelgeuse (a red supergiant in Orion) | +0.45              | −5.1
Barnard's Star                         | +9.5               | +13.2

The comparison of absolute magnitudes allows the luminosities of stars to be compared. It can be noted that Betelgeuse is approximately 10 000 times more luminous than the Sun: for each five magnitudes lower, a star is 100 times brighter (by definition), and as Betelgeuse has an absolute magnitude about 10 lower than the Sun's, its luminosity is 100 × 100 = 10 000 times greater.

A star's luminosity is determined by two factors: the surface temperature of the star and its surface area. The laws that relate to the amount of energy emitted by a black body are closely matched by a star. Importantly, the energy radiated by a star is proportional to the fourth power of its surface temperature, as given by the Stefan-Boltzmann law: E ∝ T⁴. A star with a surface temperature of 7 000 K, when compared to a star with a surface temperature of 3 500 K, will radiate 2⁴, or 16 times, more energy per unit surface area. A star with a surface temperature of 21 000 K is six times hotter than a star with a surface temperature of 3 500 K, and therefore emits 6⁴, or 1296 times, the energy per unit surface area.

Stars are essentially spherical, and the surface area of a sphere is 4πr². A doubling in the size of a star gives it four times the surface area. A red supergiant can have a radius more than 25 000 times larger than a white dwarf's, giving it 625 million times the surface area. This factor outweighs the temperature dependency, so that red giant stars are far more luminous than white dwarfs.

It should be remembered why the magnitude scale seems to be the wrong way around, that is, why brighter objects have a lower magnitude. The system in use today relates all the way back to the ancient Greek astronomers, including Hipparchus and the system he used to rank the stars from brightest to faintest using the numbers one to six.
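The two scaling rules used above (a five-magnitude difference equals a factor of 100 in brightness, and radiated energy growing as T⁴) can be checked with a few lines of arithmetic. A sketch:

```python
# Checking the claims above: magnitude differences and Stefan-Boltzmann scaling.

def brightness_ratio(magnitude_difference: float) -> float:
    """Every 5 magnitudes lower = 100x brighter, so ratio = 100^(dm/5)."""
    return 100 ** (magnitude_difference / 5)

# Betelgeuse's absolute magnitude is about 10 lower than the Sun's:
print(brightness_ratio(10))   # 10000.0 -> about 10 000 times more luminous

# Energy radiated per unit surface area scales as T^4:
print((7000 / 3500) ** 4)     # 16.0   (2^4)
print((21000 / 3500) ** 4)    # 1296.0 (6^4)
```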


25.2  Using magnitude to determine distance

■ Explain how the concept of magnitude can be used to determine the distance to a celestial object

Only the closest stars can have their parallax angle determined and thus their distance measured directly. However, if a star's absolute magnitude can be found (methods for doing this will be discussed later) and its apparent magnitude measured, the distance to the star can be calculated.

In Table 25.2, it can be seen that the Sun's apparent magnitude is a much lower number than its absolute magnitude. This is because the Sun is much closer than 10 parsecs away. Conversely, the apparent magnitude of Betelgeuse is higher than its absolute magnitude as it is about 130 pc away. (Interesting fact: the light from a star that is 10 pc away takes 32.6 years to reach us. Light from the Sun takes a little over eight minutes to reach us!)

Astronomers refer to the difference between a star's apparent magnitude, m, and its absolute magnitude, M, as the star's 'distance modulus':

m − M = distance modulus

Using the distance modulus, the distance to a star can be calculated using a technique called spectroscopic parallax.

25.3  Spectroscopic parallax

■ Outline spectroscopic parallax

A star closer than 10 pc will have a negative distance modulus, while a star further away than 10 pc will have a positive distance modulus. Using this distance modulus and the definition used to determine the magnitude scale, the equation:

M = m − 5 log(d/10)

can be used to calculate the distance to the star. The distance given will be in parsecs.

Example 1

A certain star has a measured apparent magnitude of +17 (using a large telescope). This star's absolute magnitude is determined to be +3. What is the calculated distance to the star using spectroscopic parallax?

Solution
M = m − 5 log(d/10)
3 = 17 − 5 log(d/10)
−14 = −5 log(d/10)
log(d/10) = 2.8
d/10 = 10^2.8
d = 10^3.8
d = 6310 pc
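Rearranging M = m − 5 log(d/10) for the distance gives d = 10^((m − M + 5)/5) parsecs. The sketch below uses this rearranged form to reproduce Example 1.

```python
# Distance from the distance modulus: d = 10^((m - M + 5)/5) parsecs.

def distance_pc(m: float, M: float) -> float:
    """Distance (pc) from apparent magnitude m and absolute magnitude M."""
    return 10 ** ((m - M + 5) / 5)

print(f"d = {distance_pc(17, 3):.0f} pc")  # 6310 pc, as in Example 1
```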

■ Solve problems and analyse information using M = m − 5 log(d/10) and I_A/I_B = 100^((m_B − m_A)/5) to calculate the absolute or apparent magnitude of stars using data and a reference star

SR: Worked examples 21, 22

The first of these two equations has been dealt with in the previous section. In order to find the absolute magnitude of a star that has a known distance (using trigonometric parallax), the star's apparent magnitude must also be measured. An example is shown below.

Example 2

A newly discovered star is in a galaxy known to be 40 000 pc away. The star's apparent magnitude is measured as +21.0. What is this star's absolute magnitude?

Solution
M = m − 5 log(d/10)
  = 21 − 5 log(40 000/10)
  = 21 − 18.0
  = +3.0

Note: 'log' in physics implies log₁₀.

Example 3
What is the apparent magnitude of a distant star 'Alpha' if it is one-tenth as bright as another star 'Beta'? The apparent magnitude of Beta is +8.0.

Solution
Using I_A/I_B = 100^((m_B − m_A)/5)
where m_B = +8.0 and I_A/I_B = 1/10 = 0.1:
0.1 = 100^((8 − m_A)/5)
Taking the log of both sides:
log 0.1 = ((8 − m_A)/5) × log 100
−1 = ((8 − m_A)/5) × 2
−5/2 = 8 − m_A
m_A = 10.5
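Taking logs of the brightness-ratio equation, as in the solution above, gives the equivalent form m_A = m_B − 2.5 log(I_A/I_B). A sketch using that rearrangement to reproduce Example 3:

```python
# Apparent magnitude from a brightness ratio against a reference star:
# rearranging I_A/I_B = 100^((m_B - m_A)/5) gives m_A = m_B - 2.5*log10(I_A/I_B).

from math import log10

def apparent_magnitude(m_B: float, intensity_ratio: float) -> float:
    """m_A for a star whose brightness is intensity_ratio times that of star B."""
    return m_B - 2.5 * log10(intensity_ratio)

print(apparent_magnitude(8.0, 0.1))  # 10.5, as in Example 3
```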

25.4  Colour index

■ Explain how two-colour values (i.e. colour index, B−V) are obtained and why they are useful

The direct observation of a star's colour is not always possible, especially if the star is faint or if there is interstellar dust or gas between Earth and the star. The human eye is insensitive to colour from faint sources, especially when the light is from point sources such as starlight. Our eyes also take a long time to adapt to darkness, and we 'undersee' red light during this time. Analysis of a star's spectrum may also be difficult, so astronomers have developed an alternative way in which the colour of a star, and hence its surface temperature, can be determined.

The determination of a star's apparent magnitude (see earlier in this chapter) by comparing its brightness to a reference star's brightness is relatively straightforward. By placing a blue filter between the telescope and the camera, or using a photometric device that analyses only the blue part of the star's spectrum, an astronomer can measure the blue apparent magnitude (mB or B) of the star. Using a yellow filter or the yellow part of the star's spectrum, the yellow, or 'visual', apparent magnitude (mV or V) of the star can also be measured. The difference between the two magnitudes, mB − mV, is usually between 0 and 1.5. The blue filter (B), at 440 nm wavelength, is used for colour index calculations as this wavelength corresponds to the peak sensitivity of photographic film. The human eye is most sensitive to wavelengths in the yellow-green portion of the visible spectrum, at 550 nm; this wavelength is used for the visual (V) filter.

In Chapter 24, the way in which the intensity of a star's spectrum varies with wavelength was discussed. A star with a relatively low surface temperature, such as a red giant, will not emit much radiation in the shorter wavelength region of the visible spectrum as a proportion of its total radiation. This proportion will increase as the star's surface temperature increases, so that a white star will have a greater proportion of its total radiation being emitted in the blue end of the spectrum. As a consequence, a white star will have a blue magnitude very close in value to its visual (yellow) magnitude, so that its colour index, B − V, will be close to zero. However, a red star, being brighter in the yellow part of its spectrum than in the blue, will have a colour index of about 1.5. Very hot blue or blue-white stars may have a colour index slightly less than zero. Figure 25.1 shows the intensity versus wavelength graphs of a white star and a red star with the blue and visual intensities compared. Very cool stars, such as 'brown' stars, are not usually assigned a colour index as they emit only a very small amount of blue-wavelength radiation, which is often too weak to measure properly. Table 25.3 shows some examples of colour indices.

Figure 25.1  The intensity versus wavelength curves for a red star (3 500 K) and a white star (10 000 K), modified to the same peak intensity, showing how the colour index is found for both stars from the B (440 nm) and V (550 nm) wavelengths

Table 25.3  The colour index, mB − mV, and a star's colour and typical surface temperature

Star       | Colour index | Colour | Surface temperature (K)
Betelgeuse | +1.54        | Red    | 3 400
Sun        | +0.67        | Yellow | 5 850
Sirius A   | −0.06        | White  | 9 900
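Obtaining the colour index itself is then a single subtraction, B − V. In the sketch below, the V magnitude is Betelgeuse's from Table 25.2 and the B magnitude is inferred from its colour index in Table 25.3, so the B value is illustrative rather than a measured figure.

```python
# Colour index is the difference between blue and visual apparent magnitudes.

def colour_index(m_B: float, m_V: float) -> float:
    return m_B - m_V

# Betelgeuse-like values: V = +0.45 (Table 25.2); B inferred here as +1.99.
print(f"B - V = {colour_index(1.99, 0.45):.2f}")  # 1.54 -> red, about 3 400 K
```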

Being able to obtain a star's colour index assists astronomers in determining the distance to the star using spectroscopic parallax. While spectroscopic parallax is used to find the distance to a star, it does not use parallax measurements at all. Rather, it relies on being able to find a star's absolute magnitude, M, which may involve finding the star's colour index or using other methods, such as spectral analysis, to determine the star's spectral class or surface temperature. This determines the star's horizontal position on a Hertzsprung-Russell (HR) diagram. Further analysis of the star's spectrum may allow its luminosity class to be found, for example whether the star is a giant or a main sequence star. This allows the star's vertical position on the HR diagram to be estimated, which in turn reveals its absolute magnitude, M. Finally, the equation M = m − 5 log(d/10) can be applied to find the distance to the star.

Using filters for photometric measurements

■ Perform an investigation to demonstrate the use of filters for photometric measurements

first-hand investigation: physics skills H12.1 a, b, c; H12.2 a, b

Sample procedure for this investigation
Use an incandescent light (a source of a continuous spectrum) such as a light globe or the lamp in a ray box as the source. Ray box kits have coloured filters suitable for use in this investigation; coloured cellophane will suffice as a filter too. Use either a light detector attached to a data logger, or a light meter, to record the intensity of the light through various coloured filters and without the filter. If possible, change the temperature of the globe (using a ray box lamp, turn the voltage down to 10 V or 8 V) and take another set of readings. The brightness of the light through each filter is represented by the intensity measured. It is also possible to use a hand-held spectroscope to view the spectrum of the globe through the filters, to observe the effect the filter has on the different wavelengths being emitted from the globe.

Questions
1. Compare the ratio of the brightness of the light from your source through a blue or violet filter with the intensity through a red filter.
2. How does this ratio vary when the lamp is turned down (i.e. becomes cooler/redder)?
3. How does this ratio represent the 'colour index' of the globe?

25.5  Photographic versus photoelectric technology

■ Describe the advantages of photoelectric technologies over photographic methods for photometry

PFA H4: 'Assesses the impacts of applications of physics on society and the environment'
TR: PFA scaffold H4

What are the applications of physics in this case?
Exposure to light triggers the generation of a small EMF in one of the millions of miniature photocells in a CCD (charge-coupled device). The photoelectric effect is responsible for this EMF, which, along with the EMFs from the millions of other cells, is analysed and reconstructed to produce an image. The need for sensitive CCDs to act as the 'film' in photoelectric devices, with sufficient resolution to rival traditional photographic film for use by astronomers and organisations such as NASA, has driven the development of this technology over the past few decades. The advantages of photoelectric devices (digital format, better sensitivity, remote sensing, selective wavelengths, etc.) have made them essential for astronomy. This has filtered down into consumer use, where, in the space of a few years, digital cameras and video cameras have surpassed photographic devices, with the exception of some very specialised uses.

What impacts have there been on society?
Society has benefited from photoelectric technologies in several ways. Instant viewing and transmission of photographs using digital technology (i.e. the Internet, email, etc.) is one example. A medical procedure can be viewed in real time by specialists thousands of kilometres away: the photoelectric devices in the video cameras are linked directly to the Internet, whereas photographic devices would have required the film to be developed and then transported, perhaps taking days. Remote sensing cameras onboard satellites and automatic cameras are used for weather and climate observations and in security applications.


What impacts have there been on the environment?
Large quantities of chemicals, including the element silver, were required to produce photographic film. The photos themselves were printed on paper, and many were subsequently discarded due to poor quality or mistakes. Digital photos still require paper and ink for printing; however, only selected good quality photos need be printed, as they can be previewed using a computer or the camera itself. No silver is used in the process.

Indirectly, the ability to monitor the environment using photoelectric devices has enabled scientists to observe the effects of pollution, land clearing and other human activity. Knowing these effects allows us to take active measures to guard against further harm. The depletion of the ozone layer is a good example of how remote sensing alerted scientists to a potential catastrophe.

The assessment of these impacts
The widespread use of photoelectric devices in today's society, at the consumer as well as technical levels, has made an important and significant impact. Communication, medicine, remote sensing, meteorology and climatology are a few of the important fields that have been radically changed by the adoption of technology that was originally driven by the needs of astronomy and space research.

Useful websites
Interesting reading on the photoelectric effect and its applications: http://cfcpwork.uchicago.edu/kicp-projects/nsta/2007/pdf/nsta_2007-photoeleclab.pdf
Information on charge-coupled devices: http://www.computerworld.com/softwaretopics/software/multimedia/story/0,10801,62778,00.html

Photometry is the measurement of the brightness of astronomical objects, including stars, nebulae, asteroids and galaxies. The first photographic photometry commenced in the early 1900s. Brighter objects in a photographic exposure create larger images, despite the actual source being a single point. Comparison of the diameter of the image made by an object against reference objects on the same exposure allowed for the accurate measurement of its brightness.

Traditional photographic techniques, using light-sensitive film emulsion based on the reaction of silver salts, have been used since the early 1800s. Astronomers have been using photography to record images seen through telescopes, and the spectra of stars, since 1872. Photography is an inherently slow process as it is based on a chemical reaction, requiring the developing and fixing of the image onto a medium such as a glass plate or paper. It is also quite an inefficient process, capturing only a few photons out of every hundred incident photons to form an image. Photography is also not suitable for remote applications such as recording images from telescopes onboard orbiting observatories.

There are three types of photoelectric technologies suitable for photometry, all of which utilise the photoelectric effect to produce a voltage. The first is the photomultiplier tube (see Fig. 25.2), a vacuum tube capable of multiplying the original signal by millions of times. Photomultipliers are accurate and sensitive, but are prone to mechanical damage in harsh environments and can be destroyed by exposure to very bright light.

Figure 25.2  A diagram of a photomultiplier

435

astrophysics

Figure 25.3  A photodiode

A typical CCD

The photodiode (see Fig. 25.3) is a solid state device used as a light detector. While not as sensitive as photomultipliers, photodiodes are smaller and more robust, and are capable of detecting a broader range of wavelengths, from infrared through to UV.

The most popular device now used in stellar photometry is the CCD, or charge-coupled device. Modern photoelectric technologies use the photoelectric effect in tiny individual photovoltaic cells to record incident light. Millions of these cells are grouped together to act much like the rod and cone cells in a human eye. Each cell is wired to a miniature capacitor that holds its charge; together these hold the image and then deliver it in binary code to computer software, which reconstructs the code back into an image. The number of pixels in a digital camera equates to the number of individual photovoltaic cells in the CCD used to record the image: the higher the number, the better the resolution of the image, making the detail finer and the image sharper. A CCD in a typical general purpose digital camera may have many millions of these cells.

The strength of the signal produced within a CCD by a particular stellar object represents that object's brightness. This analysis can be performed rapidly using computer technology.

Photoelectric technology in the form of CCDs has other distinct and important advantages over photographic methods for recording images, as well as for photometry. These are outlined below.

Sensitivity
A typical CCD will respond to over 70% of the incident light, whereas photographic film's response is only about 2–3%. For astronomy this is a very significant advantage, as the exposure times necessary for photographs can be reduced by over 90% using photoelectric instruments. The sensitivity of CCDs is also relatively flat across the spectrum, compared to photographic film, which is more sensitive to the blue end of the spectrum than the red.

Response to a range of wavelengths
CCDs are responsive to infrared wavelengths; they are used in remote-control receiving circuits for televisions and similar devices, and night vision binoculars and cameras use CCDs for military, security, and search and rescue applications. In astronomy, they can be used for infrared as well as visual imaging. It is also possible to select a small range of wavelengths to analyse by subtracting unwanted wavelengths from the image.

Image manipulation and enhancement
Computer programs can be used on the digital code making up the image to enhance, enlarge, add false colour or subtract selected wavelengths, to assist in identifying features in the image that may not be detected otherwise. Such manipulation of traditional photographs is either impossible or would take a long time.


The impact of improvements in technology in astronomy

■ Identify data sources, gather, process and present information to assess the impact of improvements in measurement technologies on our understanding of celestial objects

secondary source investigation: PFAs H4, H5; physics skills H12.3 a, b, c, d; H12.4 f; H13.1 a, b, c

What you need to do
The way in which photometry has progressed, which has been outlined previously, will form the basis of your research. Further background reading, for example found at the suggested useful website below, will assist with this. How these improvements have increased our understanding of celestial objects will centre on photoelectric applications in photometry and in image and data collection and analysis.

Specific information to gather and process for presentation
It is necessary to identify specific ways in which improvements in measurement technologies have played a part in our understanding of celestial objects. These include, but are not limited to:
■ how very small variations in the brightness of a star may indicate it is in fact a multiple star system (see next chapter)
■ how variable stars' light output changes over time
■ how asteroids can be tracked in their orbits around the Sun
■ how images obtained in the UV or infrared parts of the electromagnetic spectrum have added to our understanding

The finishing touches
As with any 'assess' investigation or question, the extent to which the improvements in measurement technologies have actually increased our understanding of celestial objects must be clearly defined. For example, it is not sufficient to simply say 'to a great extent'. Illustrate your final assessment with a few examples of what we know now as a direct result of the improvements.

SR: 'Assess'

Useful website
The Australia Telescope National Facility website: http://outreach.atnf.csiro.au/education/senior/astrophysics/photometry_photoelectricastro.html

chapter revision questions
1. Outline the difference between a star's absolute magnitude and its apparent magnitude, giving definitions where appropriate.
2. Explain why a star with a larger apparent magnitude is in fact not as bright as a star with a smaller apparent magnitude.
3. A star has an apparent magnitude of +8.0 when the observer is 20.0 pc away. When the observer is 2.0 pc away from the star, what will the star's apparent magnitude be?
4. Further to question 3, calculate the star's absolute magnitude when the observer is 20.0 pc away, and then calculate the star's absolute magnitude when the observer is 2.0 pc away. Why are the two values equal?
5. Calculate the brightness ratio of star 'P' (apparent magnitude +4.5) with star 'Q' (apparent magnitude +7.0).
6. Explain the purpose of obtaining the colour index of a star.
7. The magnitude of a star measured through a yellow (visual) filter is +12.5. Given that it is known that this star has a surface temperature of 3 500 K, what would the magnitude of the star be when taken through a blue filter?
8. In point form, list reasons why photoelectric applications have largely replaced photographic methods for measuring and recording starlight.
9. Outline some applications of photometry.

SR: Answers to chapter revision questions

CHAPTER 26

Variable and binary stars
The study of binary and variable stars reveals vital information about stars

26.1  Binary stars and their detection

■ Describe binary stars in terms of the means of their detection: visual, eclipsing, spectroscopic and astrometric

When viewing the stars on a clear night, it is apparent to the naked eye that some stars seem to be very close to each other. Almost all of these cases are simply due to the optical illusion of closeness caused by the observer’s line of sight. One of the stars is closer than the other, and the two stars are most likely very far apart. These stars are known as optical binaries, and are not true double star systems. (The term ‘optical illusion’ can be thought of when referring to such stars.) There are cases where stars are so close together that they have a common centre of gravity and revolve around one another. Such binary star systems are much more common than previously thought, as modern astronomical techniques unveil more stars as being double (or triple or even quadruple) star systems. Several methods are used by astronomers in identifying binary stars that to the naked eye appear as a single star. Binary stars are especially important to astronomers. Calculations of their separation distance and observations of the period of the orbits of the two stars can yield the mass of the two stars. The mass of a star is an important quantity in astrophysics.

Visual binaries
As the name implies, visual binary stars can be resolved by a suitably large telescope. Successive observations over time show that the two stars do indeed revolve around each other. Alpha Centauri, the brighter of the two pointers to the Southern Cross, appears as a bright single star to the naked eye. However, with the resolving power of binoculars or a small telescope, two individual stars of spectral types G2 and K1 (yellow and orange) can be seen. The two stars are about as far apart as Uranus is from the Sun, or 24 times the distance of Earth from the Sun. They take 81 years to orbit their centre of mass. The third star in this triple star system, Proxima Centauri, requires a modest-sized telescope to be seen.

Figure 26.1  (a) Alpha Centauri (visible at left of photo); (b) Alpha Centauri resolved into its two component stars


In many cases, the two stars cannot be resolved visually, as the angular separation is smaller than the resolution of even the largest telescopes available. In such cases, careful observation of the light being received from the star may reveal its binary nature.

Eclipsing binaries

Figure 26.2  Brightness variation for a binary star system where the two stars happen to be identical, which is rare (the observed brightness drops from L to L/2 during each eclipse)

As the name implies, eclipsing binary stars revolve about each other in a plane that brings one star in front of the other when viewed from Earth. This eclipsing, or occulting, of the light from the star behind may result in a detectable dimming of the overall brightness of the light being observed from both stars. This variation in brightness may follow a regular pattern, and is repeated with each orbit. Figure 26.2 shows how, if two identical stars are orbiting in a plane that results in the total eclipsing of one of the stars as one passes in front of the other, then the brightness of the binary star will be observed to halve periodically. It is rare for the two stars in a binary system to be identical in size and luminosity. The light curves of most eclipsing binaries require careful analysis, taking in factors such as the size of each star, their spectral types (the factors that determine each star's luminosity), as well as the plane of the stars' orbits (which determines the extent to which the stars eclipse each other as viewed from Earth).

Modelling light curves of eclipsing binaries

■ Perform an investigation to model the light curves of eclipsing binaries using computer simulation

first-hand investigation: physics skills H14.1 b, c, f

Useful website
This interactive website allows the user to choose these variables and observe a simulated light curve from a pair of eclipsing binary stars: http://instruct1.cit.cornell.edu/courses/astro101/java/eclipse/eclipse.htm
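If the linked simulators are unavailable, a very crude model can still convey the idea: for two identical stars with total eclipses, the combined brightness simply halves during each eclipse, twice per orbit. The numbers below are illustrative only.

```python
# A crude, idealised light-curve model for an eclipsing binary of two identical
# stars with total eclipses. All values are illustrative.

PERIOD = 10.0        # orbital period (days)
ECLIPSE_WIDTH = 1.0  # duration of each eclipse (days)
L_TOTAL = 1.0        # combined out-of-eclipse brightness (arbitrary units)

def brightness(t_days: float) -> float:
    """Brightness halves whenever one star fully hides the other.
    Eclipses occur twice per orbit (once for each star)."""
    phase = t_days % (PERIOD / 2)
    return L_TOTAL / 2 if phase < ECLIPSE_WIDTH else L_TOTAL

# Print a rough text light curve over one orbital period:
for day in range(int(PERIOD)):
    b = brightness(day + 0.5)
    print(f"day {day:2d}: " + "#" * int(b * 20))
```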

Spectroscopic binaries
There are many examples where a binary star system cannot be identified by resolving the image of the two stars, and where the stars do not orbit in a plane that causes eclipsing of either star. In these cases, further analysis of the star's spectrum reveals the slight Doppler shifting of the spectral lines as one star recedes from the observer and the other approaches. The simultaneous blue shifting and red shifting cause the spectral lines to split into two and then recombine when the stars' motion is side-on (transverse) to the observer. The majority of known binary systems have been discovered using this technique. Binary systems separated by less than one astronomical unit (the distance between the Earth and the Sun) have been identified where no telescope can resolve them visually.

Figure 26.3  The periodic splitting of the spectral lines in a spectroscopic binary star system

Useful website
A computer simulation similar to the one above is available at: http://instruct1.cit.cornell.edu/courses/astro101/java/binary/binary.htm

Astrometric binaries
The exact position of a star in the night sky can be measured very accurately. In the days when photography was the means of recording images, astronomers would compare glass plate 'negatives' taken some time apart and determine the 'proper' motion of stars. Stars are not fixed in their positions—most exhibit some degree of proper motion as they move through space. Proper motion of the stars causes the shape of constellations to change. Indeed, since the first maps of the night sky were produced by the Chinese and ancient Greeks, the constellations have changed slightly.

Astrometric binary star systems are detected by observing a star's very slight 'wobble' in its position along its path of proper motion. The only known force that could cause a star to deviate in such a way is gravity from a nearby, invisible companion star that must be orbiting the visible star. In recent years, as measurements have become more accurate, many more astrometric binary star systems have been found. There are also now known to be hundreds of planets orbiting stars, causing them to exhibit such motion. The mass and separation distance of the companion star (or planet) can be calculated from the motion of the visible star. Other than the mass of the companion star, little else about its nature can be found out, as its spectrum is not detectable from Earth. However, there are cases where the two stars are so close that matter is being dragged off the surface of one onto the other, causing the emission of X-rays, which can be detected from satellite-based observatories.

Figure 26.4  The 'wobbling' of the position of a star as it undergoes proper motion (the observed path oscillates about the straight proper motion path)

Useful website
Another downloadable program for eclipsing binary simulation for PCs is available here: http://www.cosmion.net/software/ebs/


26.2  Important information from binary stars

■ Explain the importance of binary stars in determining stellar masses

SR: Worked examples 23, 24

Figure 26.5  The mass-luminosity relationship for stars on the main sequence on an HR diagram (luminosity plotted against spectral class; greater mass corresponds to greater luminosity along the main sequence)

The previous section has outlined various methods by which binary star systems are detected. Astronomers are particularly interested in binary star systems as they are the only way in which the mass of stars can be measured. Finding the distance separating the two stars (r) and their orbital period (T) can yield the combined mass of the system using a derivation of Kepler's laws.

The mass of a star not only determines its position on the main sequence of a Hertzsprung-Russell (HR) diagram (more about these in Chapter 27) but also its luminosity: a more massive star will be larger and hotter, and therefore more luminous. This is known as the 'mass-luminosity' relationship. It is a very useful tool for finding the distance to a star using spectroscopic parallax (see Chapter 25), as the luminosity of a star gives its absolute magnitude, M.

Once the period and the separation of a binary system are determined, the formula

m1 + m2 = 4π²r³/(GT²)

can be applied to find the mass of the two stars.

■ Solve problems and analyse information by applying m1 + m2 = 4π²r³/(GT²)

This equation can be modified and used with non-SI units of solar masses (for m1 + m2), astronomical units (for the separation distance r) and Earth years (for the time T). If these units are used, the equation simplifies to:

m1 + m2 = r³/T²

The following examples show the equation used in both formats.

Example 1

A binary star system is observed to have a period of 28 years. The two stars are separated by a distance of 2.5 × 10⁹ km. What is the total mass of the two stars in the binary system?

Solution
Since the units for distance are given in km, the full equation will be used.
T = 28 years = 28 × 365 × 24 × 60 × 60 seconds (approximately) = 8.8 × 10⁸ s
r = 2.5 × 10⁹ km = 2.5 × 10¹² m
Substituting into the original equation:
m1 + m2 = 4π²r³/(GT²)
        = 4π² × (2.5 × 10¹²)³ / (6.67 × 10⁻¹¹ × (8.8 × 10⁸)²)
        = 1.2 × 10³¹ kg

Example 2
Another binary star system has a period of 3.50 years and a separation of 2.70 a.u. The mass of the larger star is estimated as being 7.50 × 10³⁰ kg. What is the mass of the smaller star? (The Sun's mass is 6.00 × 10³⁰ kg.)

Solution
The combined mass of both stars in the system must be found first:
m1 + m2 = r³/T²
        = 2.70³/3.50²
        = 1.61 solar masses
This gives the combined mass as 1.61 × 6.00 × 10³⁰ kg = 9.66 × 10³⁰ kg.
The smaller star's mass = 9.66 × 10³⁰ kg − 7.50 × 10³⁰ kg = 2.16 × 10³⁰ kg.
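Both forms of the mass equation are straightforward to evaluate numerically; the sketch below reproduces the two worked examples.

```python
# Checking Examples 1 and 2 with both forms of the binary-mass equation.

from math import pi

G = 6.67e-11  # gravitational constant (SI units)

def combined_mass_si(r_m: float, T_s: float) -> float:
    """m1 + m2 in kg, with separation r in metres and period T in seconds."""
    return 4 * pi**2 * r_m**3 / (G * T_s**2)

def combined_mass_solar(r_au: float, T_yr: float) -> float:
    """m1 + m2 in solar masses, with separation in a.u. and period in years."""
    return r_au**3 / T_yr**2

T = 28 * 365 * 24 * 60 * 60                      # Example 1: 28 years in seconds
print(f"{combined_mass_si(2.5e12, T):.2g} kg")   # ~1.2e+31 kg
print(f"{combined_mass_solar(2.70, 3.50):.2f} solar masses")  # 1.61
```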

26.3  Classifying variable stars

■ Classify variable stars as either intrinsic or extrinsic and periodic or non-periodic

Variable stars are ones that appear to vary in brightness. This may be due to the changing luminosity of the star itself, or it may be due to an external factor, such as a companion star passing in between the observer (Earth) and the star, causing an apparent dimming of the star without actually changing the star's luminosity. The difference between intrinsic and extrinsic variables lies in the cause of the change in the brightness of the star. Intrinsic variables vary in luminosity, that is, the light output of the star changes. Extrinsic variables appear to change in brightness; however, their luminosity is not changing—an external factor changes the light reaching the Earth.


Types of extrinsic variable stars
There are two known ways in which a star may be an extrinsic variable. The first, eclipsing binary systems, has been covered previously; an eclipsing binary system is thus classified as an extrinsic variable. (It is also possible that the stars that make up a binary system are intrinsic variables as well.)

Stars like our Sun have sunspots that are darker than the surrounding surface of the star. Sunspots are not dispersed symmetrically around the Sun, so as the Sun rotates, an observer may notice a slight variation in its luminosity: a greater area of sunspots would result in an apparent reduction in the Sun's brightness. Such observations have been made in other stars, which are classified as intrinsic variables.

Types of intrinsic variable stars
Many stars have been identified as being pulsating variables. These stars are periodically expanding and contracting, which changes their size, spectral type and luminosity. These stars appear to be otherwise stable, their pulsating nature remaining within certain limits, enabling them to continue to pulsate without exploding or disintegrating.

Occasionally, a star is observed to brighten by millions of times. Such occurrences have been recorded throughout history. When a star that was invisible to observers began to shine brightly in the night sky, it was said to be a new star, hence the name nova, or supernova. Modern astronomical research using spectroscopic techniques has led to theories on how such events occur. Current theories suggest that most supernovae are due to a white dwarf in a binary system 'accreting' matter from its nearby companion star until a cataclysmic explosion occurs due to a sudden, very energetic nuclear reaction. Such events are termed type 1 supernovae. The less common type 2 supernovae are believed to be the result of the collapsing core of very massive stars at the end of their life cycle: having consumed all of the available nuclear fuel, the core collapses and an extreme, sudden event blasts the outer layers of the star into space. Supernova 1987A (seen in the year 1987) was the first such event recorded for which the original star had been previously identified.

Yet other types of intrinsically variable stars are known as 'eruptive'. These stellar events usually occur due to the presence of a binary star system. One such event is believed to be due to the way in which a white dwarf companion to a red giant is able to 'blow off' matter it has accumulated without destroying itself, repeating the action every 100 000 years or so. The result is that, to observers, the star system suddenly brightens and gradually returns to normal over a number of days or weeks.

Figure 26.6  Pulsating variables:
A: Expansion phase; as the star expands, its surface cools.
B: Expansion phase slows.
C: Expansion phase ends. Star has cooled and begins to shrink.
D: Shrinking phase continues.
E: Shrinkage phase ends. Star has heated and begins a new expansion phase.

Periodic or non-periodic?
The cataclysmic and eruptive types of variable stars that do not repeat their brightness variation are considered non-periodic. It is apparent from the previous section that stars can vary in brightness on a regular, repeating basis. Such variations are called periodic, as is a sine or cosine curve. The period can range from hours to years. RR Lyrae, a regular variable with a period of 13 hours, has given its name to a class of regular variables. The Hubble Space Telescope was able to measure the distance to this star in 2002 with reasonable accuracy, which is important as such stars are used as 'standard candles': because their period is related to their absolute magnitude, they can be used as distance-measuring objects in space (see next section). A special class of regular periodic variables are the Cepheids.

26.4  Distance and the period-luminosity relationship for Cepheid variables

■ Explain the importance of the period-luminosity relationship for determining the distance of Cepheids

Cepheid variables are named after the first such star to be identified, Delta Cephei. They are very luminous yellow giant stars, so luminous that some can be individually observed in neighbouring galaxies such as M31 (Andromeda) and in the Virgo cluster, at a distance of some 60 million light-years.

In the early 1900s, Henrietta Leavitt, an American astronomer, catalogued over 1700 variable stars in the Small and Large Magellanic Clouds. She found that there was a good correlation between the periods of the 47 Cepheids observed and their luminosities. These clouds are two small galaxies neighbouring the Milky Way; the stars within each are all at approximately the same distance from Earth, making it possible to correlate the period with the luminosity of the Cepheids. These were cross-referenced with a number of Cepheids of known distance. The Cepheids could now be used as 'standard candles', stars which astronomers can use as distance-measuring tools.

Figure 26.7 shows a period-luminosity graph for Type I Cepheid variables. Type II Cepheids have a very similar graph, but are slightly less luminous. RR Lyrae stars are similar to Cepheids but have shorter periods and are less luminous; they too have a period-luminosity relationship. The time axis shown in the diagram is a log scale, not linear.

Figure 26.7  A period versus luminosity graph for Cepheid (Type I and Type II) and RR Lyrae variables: absolute magnitude (M), from 0 to −10, is plotted against period in days (1 to 100, log scale)


Cepheid variables also exhibit light output curves characterised by a more rapid brightening phase and a more gradual dimming phase (see Figure 26.8). The shape of the curve is indicative of a Cepheid, and is due to the mechanisms within the star that cause the pulsating. The importance of this period-luminosity relationship is illustrated in the following example.

Figure 26.8  Typical light output curve (luminosity against time) of a Cepheid variable star: note the characteristic rapid brightening and more gradual dimming phases

Example
A Cepheid variable, classified as a Type I by its spectral characteristics, is observed in a globular cluster. It has a period of 10 days and an average apparent magnitude of +9.4. (It is an average apparent magnitude as the actual magnitude varies with time.) Using a period-luminosity relationship graph (see Fig. 26.7), calculate the distance to this globular cluster.

Solution
Moving along the x-axis of Figure 26.7 to 10 days, and then moving vertically upwards, gives the absolute magnitude for this Cepheid variable as −7.0. The distance modulus equation used for spectroscopic parallax (see Chapter 25) is now used:
M = m − 5 log(d/10)
−7 = 9.4 − 5 log(d/10)
−16.4 = −5 log(d/10)
log(d/10) = −16.4/−5 = 3.28
d/10 = 10^3.28
d/10 = 1905
d = 1.9 × 10⁴ pc

The use of Cepheid variables in this way enabled astronomers at the start of the 1900s to estimate the size of the then known Universe. An early recalibration of the period-luminosity relationship caused a near doubling in the estimated size of the Universe! More recently, observations of the red shift of distant galaxies have allowed a more accurate measurement of the Hubble constant, H, and therefore a more accurate estimation of the age of the Universe itself. The use of Cepheid variables as standard candles, or distance-measuring stars, has helped this estimation.

chapter revision questions

1. How can binary star systems give astronomers information about the mass of the binary system?
2. Describe the difference in the means of detection of a spectroscopic binary system and an astrometric binary system.
3. Which type of binary system can be seen as separate stars through a telescope?
4. A star is observed to have a regular dip in its apparent brightness, after which its normal brightness is resumed. No other changes are observed in the star's spectrum. Give a possible explanation for this.
5. What is the difference between extrinsic and intrinsic variable stars?
6. What is meant by 'the period-luminosity relationship' for Cepheid variable stars?
7. A Cepheid variable with a period of 10 days has an apparent magnitude of +21. Use Figure 26.7 to determine the distance to this star.
8. Calculate the combined mass of a binary system that has a period of 6.0 Earth years and a separation of 3.0 × 10¹² m.
9. What would be the effect on the period of a binary system if the two stars' separation was somehow halved?
10. Why is it important to astronomers to find a star's mass?


CHAPTER 27
The life cycle of stars

Stars evolve and eventually 'die'

The life cycle of a star from pre-birth to its eventual demise is a very different process for stars with different masses. Smaller stars take their time, last a lot longer and eventually fade away. The more massive stars shine extremely brightly, consume their nuclear fuel relatively quickly and end their lives in one of nature's truly spectacular ways, finally becoming what is probably the most mysterious object known: a black hole.

27.1 The birth of a star

■ Describe the processes involved in stellar formation

Figure 27.1  A region of dust and gas in our galaxy known as the Eagle Nebula in which new stars are believed to be forming (photo from Hubble Space Telescope)

For a star to be able to fuse hydrogen in its core, extremely high pressures and temperatures are required. These conditions can only exist at the centre of a star that has sufficient mass to allow gravity first to produce them and then to maintain them once fusion commences. It is known that there are regions in our galaxy where large quantities of interstellar gas (mainly hydrogen) and dust exist, and that within these, hidden from our view, new stars are forming. With our ability to peer through the veil of dust and gas, modern astronomical observations confirm these theories. The larger regions of gas and dust are called large molecular clouds, and are visible as nebulae (see Fig. 27.1). It is within these clouds that gravity acts on denser 'clumps' of matter; if the matter were spread uniformly throughout the cloud, this would not occur. As a denser region of dust and gas gradually coalesces due to gravitational force, more and more material is drawn in. The gravitational potential energy lost by this material is transformed into kinetic energy, which is radiated away as heat. The protostar begins to take shape, radiating energy primarily in the infrared part of the electromagnetic spectrum, but it may also begin to glow and emit light. The surrounding interstellar gas and dust keep the protostar hidden from view. If the inward gravitational forces within the protostar continue to overcome the outward expansive forces of heat and radiation, the pressure in the centre of the protostar continues to increase. There comes a stage when the core of the protostar becomes so hot and dense that hydrogen fusion commences. The newly born star throws off the veil of surrounding material with its stellar wind and radiation. Depending on its mass, it will spend the next few tens of millions to over 10 billion years fusing its supply of hydrogen into helium as a main-sequence star. Figure 27.3, an image taken through NASA's Spitzer Space Telescope in the infrared region of the spectrum, shows stars forming, the telescope being able to see through the blanketing dust and gas clouds that surround such stars.

Figure 27.2  Two images: the first taken in visible light (left) and the same region of space taken in infrared (right); the two protostars are clearly seen in the infrared image

Figure 27.3  Stars forming in their molecular clouds, taken by the Spitzer Space Telescope in 2008

Our knowledge of star formation has increased dramatically in recent years, due largely to the observations made by the Chandra X-Ray Observatory and the Spitzer Space Telescope. The birth of stars is difficult to view using visible light observations, as the molecular clouds of dust and gas from which the stars form prevent visible light from passing through. X-rays and infrared radiation, however, can penetrate these clouds, allowing direct observations for the first time. Both observatories were launched into Earth orbit (Chandra in 1999 and Spitzer in 2003) so that the atmosphere would not absorb the desired radiation before it could be collected by their instruments.

Figure 27.4  The stages involved in stellar formation: (a) a region of gas and dust gradually coalesces under the attractive force of gravity; (b) a central core becomes hotter and denser, emitting infrared radiation; (c) the star's core begins to fuse hydrogen and the star is born, blowing off surrounding material

Useful website


More images from NASA’s Spitzer Space Telescope and information about star formation: http://www.nasa.gov/mission_pages/spitzer/multimedia/20080211-b.html


27.2 The key stages in a star's life

■ Outline the key stages in a star's life in terms of the physical processes involved

The stages of a star's life can be summarised in a flow chart (see Fig. 27.5). The protostar stage, in which gravity is pulling in more material from the surrounding gas and dust cloud, does not involve nuclear reactions. The source of energy is the transformation of the gravitational potential energy lost by the in-falling material. It is not until the commencement of nuclear fusion of hydrogen into helium within its core that the star is truly 'born'. For the star to remain stable on the main sequence, there must be an equilibrium between the outward radiative force and gas pressure within the star, and the inward force of gravity. The greater the mass of the star, the greater the force of gravity, allowing greater density and temperature in the core. This in turn results in a greater rate of nuclear fusion reactions, so that the surface temperature of the star is higher. Stars with the same mass as the Sun have surface temperatures of about 5850 K, while stars with about 10 solar masses are very hot blue-white stars of around 20 000 K or greater. These massive stars may consume their nuclear fuel in only a few tens of millions of years, which is why they are quite rare. The least massive stars, of about 0.1 solar masses, are small red main-sequence stars, consuming their nuclear fuel so slowly that it is believed they may be as old as the Universe itself.

The following stage of a star's life occurs when the hydrogen fuel has been depleted to the extent that the core of the star collapses. Gravity takes over, elevating the core's density and temperature. The layer of helium that has formed around the core is compressed to such an extent that the fusion of helium into carbon begins: the 'helium flash'. Such processes, known as post-main-sequence nuclear reactions, are discussed in more detail in the following section.

Figure 27.5  Flow chart showing the main stages of a star's life: a cloud of dust and gas (nebula) may form a protostar, which becomes a main-sequence star (where most of the star's life is spent; more massive stars spend less time there). Depending on the star's mass, it then becomes a red giant or a red supergiant, ending as a white dwarf, a neutron star/pulsar or a black hole.

27.3 Types of nuclear reactions within stars and the synthesis of elements

■ Describe the types of nuclear reactions involved in main-sequence and post-main-sequence stars
■ Discuss the synthesis of elements in stars by fusion

Main-sequence stars are ones that, when plotted on an HR diagram, lie within a band stretching from the upper left to the lower right. It is not a sequence as such, but the region can easily be wrongly interpreted as one. It is believed that, because of the relationship between the mass, luminosity and size of all main-sequence stars, they have a common nuclear energy source: the fusion of hydrogen nuclei into helium nuclei. A hydrogen nucleus is a single proton, and a star is composed initially of hydrogen. To synthesise a helium nucleus, four hydrogen nuclei are needed. The probability of a collision involving four hydrogen nuclei simultaneously, in such a manner that instead of glancing off one another they react in exactly the right way to form a helium nucleus, is so small that such a mechanism can be discounted. The energy required for this collision to be successful is also too high for it to be considered as contributing to a star's energy output.

The proton–proton chain

A more probable step-wise reaction in the core of stars has a much greater chance of occurring, and at the temperatures and pressures present it is the likely pathway. Known as the proton–proton chain, it involves the collision of only two particles at a time, an event with much greater probability and one therefore occurring far more frequently. Figure 27.6 outlines the reactions involved in the proton–proton chain.

Figure 27.6  Steps in the proton–proton chain: two ¹H nuclei (protons) fuse to form ²H, releasing a positron (e⁺) and a neutrino (ν); the ²H nucleus fuses with another ¹H to form ³He, releasing a photon; finally, two ³He nuclei fuse to form ⁴He, releasing two ¹H nuclei.

The net equation for this form of the proton–proton chain is:

4 ¹₁H → ⁴₂He + 2e⁺ + 2ν + 2γ

where: ν = a neutrino; γ = a gamma ray

Although six hydrogen nuclei are involved in the production of the helium nucleus, two of these are released again. The gamma radiation is released as photons with very high energy, while the positrons are the anti-matter equivalent of electrons. The two near-massless neutrinos produced carry away energy at close to the speed of light, rarely interacting with matter.


Interesting fact: Neutrinos interact so rarely that it is estimated that billions pass through us every second. A few neutrinos are detected each day in huge underground water tanks when a neutrino interacts with a water molecule and a small flash of light is subsequently emitted.


Useful websites The Sudbury Neutrino Observatory: http://www.sno.phy.queensu.ca/ The Ice Cube Neutrino Observatory in Antarctica: http://icecube.wisc.edu/

While there are believed to be other forms of the proton–proton chain, the above process accounts for an estimated 85% of the energy produced in the Sun. Stars with masses up to approximately 1.5 solar masses with core temperatures of up to 18 million K also produce the majority of their energy in this way. Another pathway leads to a nearly identical overall reaction, but requires the presence of carbon-12 and temperatures over 18 million K to proceed. It is known as the CNO cycle.

The CNO cycle The carbon–nitrogen–oxygen (CNO) cycle is a pathway for nuclear fusion that commences with the fusion of one proton with a carbon-12 nucleus, a reaction known to occur at temperatures above 18 million K. The carbon-12 undergoes transmutation to nitrogen-13, but it re-emerges after several more steps in the process, acting in a similar way to a catalyst in a chemical reaction. Figure 27.7 shows the steps involved in the CNO cycle.

Figure 27.7  The CNO cycle showing the steps in the fusion of hydrogen to helium nuclei. Starting from ¹²₆C (which acts as a nuclear catalyst), successive proton captures and decays produce ¹³₇N, ¹³₆C, ¹⁴₇N, ¹⁵₈O and ¹⁵₇N, releasing positrons (e⁺), neutrinos (ν) and photons along the way; the final step regenerates ¹²₆C and releases a ⁴₂He nucleus.

It can be seen that at no stage in the CNO cycle is a collision between more than two particles required. The carbon-12 is transmuted several times until the last step when a nitrogen-15 nucleus, on capturing a proton, emits an alpha particle (⁴₂He) and returns to carbon-12. The net equation for the CNO cycle is:

4 ¹₁H → ⁴₂He + 2e⁺ + 2ν + 3γ

The overall equations for the proton–proton chain and the CNO cycle are nearly identical. Four protons produce one helium nucleus, with the release of energy. The source of the energy is in fact a slight decrease in the mass of the products of the process when compared with the reactants.
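That mass decrease can be checked with a quick calculation using E = mc². A minimal Python sketch follows; the particle masses are standard reference values we have supplied, not figures quoted in this chapter, and comparing bare nuclei like this ignores the positrons, so the answer is only approximate:

    m_p  = 1.6726e-27   # kg, mass of a proton (assumed reference value)
    m_He = 6.6447e-27   # kg, mass of a helium-4 nucleus (assumed reference value)
    c    = 3.00e8       # m s^-1, speed of light

    delta_m = 4 * m_p - m_He        # slight decrease in mass
    E = delta_m * c ** 2            # energy released, in joules
    print(E, "J =", E / 1.602e-13, "MeV")   # roughly 26 MeV per helium nucleus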

Post-main-sequence stars

Once a star has consumed most of its hydrogen fuel, the core will begin to collapse and a layer of extremely hot helium nuclei, which has built up during the main-sequence stage of the star's life, will surround the core. The mass of the star will determine what happens next. With sufficient mass, a star's gravitational force will be able to sustain the density necessary for helium to fuse into heavier elements such as carbon and oxygen. This process commences with the 'helium flash', resulting in the star becoming a red giant. Very massive stars are able to continue fusing elements all the way to the formation of iron. These fusion reactions are all 'exothermic'; that is, they release energy and thus provide the outward forces of radiation, preventing the star from collapsing under its own gravity. The formation of elements heavier than iron is unsustainable, as such fusion requires a net input of energy. The existence of these heavier elements found on Earth is due to supernovae, which must have occurred before the formation of our solar system. In large post-main-sequence stars, the heavier elements are drawn towards the centre of the star, building up into layers, with the heavier elements closer to the core in an onion-like fashion (see Fig. 27.8). Once one of these very massive stars has exhausted or depleted its supply of nuclear fuel, the core succumbs to the force of gravity and begins to collapse. The release of gravitational potential energy causes a temperature increase which, with the increase in pressure, sets off one of the most energetic processes known: a supernova. Layers of synthesised elements surrounding the core fuse into heavier elements, blowing off a large proportion of the outer layers of the star into space at great speed. The equal and opposite action is directed towards the inner core, compressing it to such an extent that elements heavier than iron may be created. The residual core of the star, if sufficiently massive, may continue to collapse as protons and electrons become neutrons. The star shrinks as matter as we know it condenses into neutrons.

Figure 27.8  The onion-layer-like structure of a post-main-sequence massive star, with layers from the surface inwards of H, He, C, Ne, O, Si and Fe


first-hand investigation
Plotting stars on a Hertzsprung-Russell diagram

physics skills: H13.1e, f; H12.4b

■ Present information by plotting Hertzsprung-Russell diagrams for: nearby or brightest stars, stars in a young open cluster, stars in a globular cluster

Useful websites
Stellar evolution and the HR diagram: http://www.mhhe.com/physsci/astronomy/applets/Hr/frame.html
University of NSW information on a globular cluster and its HR diagram plot: http://www.phys.unsw.edu.au/astro/wwwlabs/gcCm/gcCm_intro.html

Figure 27.9  A typical HR diagram with stellar types shown: supergiants and giants lie above the main sequence and white dwarfs below it. The vertical axis is luminosity (Sun = 1, from 1/10,000 to 10,000); the horizontal axis is surface temperature in kelvin (25,000 down to 3,000), with the corresponding spectral classes B, A, F, G, K and M.

The HR diagram is a very useful tool for astronomers. There are several variations of the labels used for both the vertical (luminosity) and the horizontal (temperature) axes. Figure 27.9 shows a typical HR diagram with the regions where several star types are found. When plotting information on an HR diagram, the horizontal axis scale may be either surface temperature or the corresponding spectral types. When using the spectral type scale, each individual type is further divided into its 10 subtypes, from 0 to 9. In the tables shown here, the spectral type is a letter followed by a subdivision number: for example, K2 is a K spectral type with sub-type 2. The next spectral type after K9 is M0 in this system. The vertical axis scale of luminosity is either in reference to the Sun or given as absolute magnitude, M. The tables give the values of the stars' absolute magnitudes.

Figure 27.10  Typical HR diagram axes, with absolute magnitude (M) from −10 to +15 on the vertical axis and spectral type (O, B, A, F, G, K, M) on the horizontal axis

Task 1: The brightest stars

Table 27.1 lists the 20 brightest stars in the night sky. Use the information provided to plot the stars on a HR diagram. Choose the scale of the axes carefully before commencing so that all the stars fit on your diagram. The horizontal axis should always show the full range of stellar spectral types, from O to M. Figure 27.10 shows the axes for a typical HR diagram.
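For students who prefer to script the plot, the following minimal Python sketch (assuming the matplotlib library is available; the helper function and the selection of four stars from Table 27.1 are ours) shows one way to map spectral types to the horizontal axis and plot absolute magnitude with the bright end at the top:

    import matplotlib.pyplot as plt

    CLASSES = "OBAFGKM"

    def spectral_to_x(spec):
        # Map a spectral type such as 'K2' to a numeric axis position (5.2)
        return CLASSES.index(spec[0]) + int(spec[1:]) / 10

    # A few stars from Table 27.1: (absolute magnitude M, spectral type)
    stars = {"Sirius": (1.41, "A1"), "Arcturus": (-0.2, "K2"),
             "Rigel": (-7.0, "B8"), "Betelgeuse": (-6.0, "M2")}

    for name, (M, spec) in stars.items():
        x = spectral_to_x(spec)
        plt.scatter(x, M)
        plt.annotate(name, (x, M))

    plt.xticks(range(len(CLASSES)), list(CLASSES))
    plt.gca().invert_yaxis()   # brighter stars (more negative M) at the top
    plt.xlabel("Spectral type")
    plt.ylabel("Absolute magnitude (M)")
    plt.show()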

Task 2: The nearest stars

Use the information available in Table 27.2 to plot the stars onto the same HR diagram as used in Task 1. Use a different colour for this task. Compare and contrast the results of the two groups of stars.

Task 3: Open clusters

Plot the 20 stars from an open cluster, found in Table 27.3, on the same HR diagram as previously. Use a different colour. Again, compare and contrast the position of this group of stars with the previous groups.

chapter 27  the life cycle of stars

Table 27.1  The 20 brightest stars in the night sky

Star                 Apparent magnitude, v    Absolute magnitude, M    Spectral type
Sirius               −1.45                    +1.41                    A1
Canopus              −0.73                    +0.16                    F0
Rigel Kentaurus      −0.10                    +4.3                     G2
Arcturus             −0.06                    −0.2                     K2
Vega                 0.04                     +0.5                     A0
Capella              0.08                     −0.6                     G8
Rigel                0.11                     −7.0                     B8
Procyon              0.35                     +2.65                    F5
Achernar             0.48                     −2.2                     B5
Hadar                0.60                     −5.0                     B1
Altair               0.77                     +2.3                     A7
Betelgeuse           0.80                     −6.0                     M2
Aldebaran            0.85                     −0.7                     K5
Acrux                0.9                      −3.50                    B2
Spica                0.96                     −3.4                     B1
Antares              1.0                      −4.7                     M1
Pollux               1.15                     +0.95                    K0
Fomalhaut            1.16                     +0.08                    A3
Deneb                1.25                     −7.3                     A2
Mimosa               1.26                     −4.7                     B0

Table 27.2  The 20 closest stars

Star               Distance (l.y.)    Apparent magnitude    Absolute magnitude    Spectral type
Proxima Centauri   4.2                11.1                  15.5                  M5
Rigil Kentaurus    4.3                −0.01                 4.4                   G2
Alpha Cen B        4.3                1.33                  5.7                   K1
Barnard's Star     6.0                9.54                  13.2                  M4
Wolf 359           7.7                13.5                  16.7                  M6
BD +362147         8.2                7.5                   10.5                  M2
Luyten 726-8A      8.4                12.5                  15.5                  M6
Luyten 726-8B      8.4                13.0                  16.0                  M6
Sirius A           8.6                −1.46                 1.4                   A1
Sirius B           8.6                8.3                   11.2                  A
Ross 154           9.4                10.45                 13.1                  M4
Ross 248           10.4               12.29                 14.8                  M5
Epsilon Eri        10.8               3.73                  6.1                   K2
Ross 128           10.9               11.1                  13.5                  M4
61 Cyg A           11.1               5.2                   7.6                   K4
61 Cyg B           11.1               6.0                   8.4                   K5
Epsilon Ind        11.2               4.7                   7.0                   K3
BD +4344A          11.2               8.1                   10.4                  M1
BD +4344B          11.2               11.1                  13.4                  M4
Procyon A          11.4               0.4                   2.6                   F5


Table 27.3  Information on some of the stars in open cluster M44

Star number    Apparent magnitude, v    Absolute magnitude, M    Spectral type
1              6.6                      5.3                      G0
2              6.8                      5.5                      A5
3              7.8                      6.6                      F7
4              7.5                      6.2                      A9
5              6.7                      5.4                      F0
6              7.7                      6.5                      A5
7              7.7                      6.4                      A7
8              6.6                      5.3                      K0
9              7.3                      6.1                      A
10             7.5                      6.3                      A
11             6.4                      5.1                      K0
12             6.6                      5.4                      A1
13             7.5                      6.2                      F0
14             6.8                      5.5                      A9
15             7.7                      6.5                      F2
16             6.4                      5.2                      K0
17             6.3                      5.1                      A
18             7.8                      6.6                      A9
19             6.9                      5.6                      A9
20             8.0                      6.7                      A7

Table 27.4  Information on some stars in a typical globular cluster

Star number    Absolute magnitude, M    Spectral type
1              −2.0                     K9
2              +0.1                     F7
3              −1.0                     K5
4              +1.2                     G0
5              +3.1                     F9
6              +14.0                    M3
7              +7.4                     G4
8              −0.8                     K5
9              +12.8                    M0
10             −1.9                     M1
11             +6.9                     G6
12             −0.9                     K0
13             +10.2                    K6
14             −4.9                     M5
15             +2.9                     F6
16             −2.5                     M1
17             +4.2                     F7
18             +5.0                     G2
19             +7.0                     G9
20             +8.4                     K3


When the HR plots of the three sets of stars (the brightest stars, the nearest stars and stars in an open cluster) are compared and contrasted, it becomes apparent how different they are. The brightest stars have a greater proportion of large, luminous stars, some of which are hundreds of light-years from Earth yet still rank in the top 20 brightest stars. In contrast, the closest 20 stars include many smaller, less luminous red stars and white dwarfs. The open cluster is seen to have almost all of its stars on the main sequence, showing that they are relatively young, not having evolved into the red giant stage.

Figure 27.11  M15, a typical globular cluster

first-hand investigation
Using the HR diagram to determine a star's evolutionary stage

physics skills: H14.1a, b, e, f, g, h; H14.3a, c, d

■ Analyse information from a HR diagram and use available evidence to determine the characteristics of a star and its evolutionary stage

Figure 27.9 shows the regions on a HR diagram where the different classes of stars are found, and Figure 27.13 shows the evolutionary paths of stars with different masses. From these diagrams it can be seen that a forming star will move in from the right towards the main sequence, its vertical position being dependent upon the mass of the forming star. Once the protostar commences hydrogen fusion, it will 'land' on the main sequence, the exact location again being dependent on its mass. Analysis of a star's spectrum, especially the width of the spectral lines, reveals information about the density of the star's atmosphere and hence whether the star is indeed a main-sequence star, or a giant or supergiant (which have less dense atmospheres and therefore thinner, sharper spectral lines). Towards the end of a star's life, the giant stage can be identified by its vertical position on the diagram (highly luminous) and its thin, sharp spectral lines (less dense atmosphere). At the end of its life on the HR diagram, a white dwarf has low luminosity and relatively high temperature, but spectral lines indicating that it has a thicker, denser atmosphere.

27.4 Determining the age of globular clusters

■ Explain how the age of a globular cluster can be determined from its zero-age main-sequence plot for a HR diagram

Figure 27.12 shows the HR diagram plot for stars in a globular cluster called 47 Tucanae. Over 100 globular clusters have been found around the edges of our galaxy. It is believed that they are some of the oldest objects in the Universe, having coalesced around 12 billion years ago; their age rivals that of our galaxy, the Milky Way. They are called 'globular' because of the way they appear through a telescope. With hundreds of thousands or even millions of stars in a cluster, individual stars closer to the centre cannot be resolved. Astronomers are able to assign an individually resolved star to a cluster from its common proper motion with the cluster. (Proper motion is the motion of a star or group of stars against very distant background stars.) Such stars are the ones that can be plotted to give HR diagram plots such as Figure 27.12 for 47 Tucanae.

Useful website
Animations and movies showing the appearance and the structure of globular clusters: http://terpsichore.stsci.edu/~summers/viz/starsplatter/spz/spz.html

Figure 27.12  A HR diagram plot of the stars in the globular cluster 47 Tucanae: apparent visual magnitude (12 to 22) is plotted against colour index B−V (0.4 to 1.6), showing the main-sequence turn-off onto the sub-giant branch, red giant branch, horizontal branch and asymptotic giant branch

It is clear that, despite so many stars being plotted in Figure 27.12, there are no large, luminous main-sequence stars in the cluster. The larger, more massive stars have depleted their supply of hydrogen and have moved off the main sequence onto the 'red giant branch' or 'sub-giant branch' of the HR diagram. The smaller, less massive stars, which take longer to consume their hydrogen, are still on the main sequence despite their comparatively old age. Our Sun is expected to take about 10 billion years to deplete its hydrogen, with less massive stars taking a lot longer. The point on the main sequence where the star plot turns off onto the sub-giant and red giant branches is known as the turn-off point. Once the position of the turn-off point for a globular cluster is found, its age can be determined.
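To see how the turn-off point translates into an age, one commonly quoted rule of thumb (our assumption here, not a relation given in the text) is that main-sequence lifetime scales roughly as t ≈ 10 Gyr × (M/M☉)⁻²·⁵. A minimal Python sketch:

    def ms_lifetime_gyr(mass_solar):
        # Rough main-sequence lifetime in billions of years (assumed scaling law)
        return 10 * mass_solar ** -2.5

    # If the turn-off point sits at stars of about 0.9 solar masses,
    # the cluster must be roughly as old as their main-sequence lifetime:
    print(ms_lifetime_gyr(0.9))   # about 13 billion years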

secondary source investigation
The evolutionary paths of stars with different masses

PFAs: H5; physics skills: H13.1e, f; H12.4b

■ Present information by plotting on a HR diagram the pathways of stars of 1, 5 and 10 solar masses during their life cycle

The position of a star on the main sequence is determined by its mass, as discussed. The evolutionary path the star will take as a post-main-sequence star is also dependent on the star's mass. A star with a mass equal to the Sun is said to have one solar mass. On the main sequence, its position as a G2 star with an absolute magnitude of +5 is where the star spends most of its existence. At around 10 billion years of age, it evolves off the main sequence into a red sub-giant, as depicted in Figure 27.9. A five solar mass star on the main sequence is approximately 100 times more luminous than the Sun, with an absolute magnitude around −2. Its surface temperature, close to 13 000 K, means that its spectral type is B. In a much shorter time than a one solar mass star, this star will evolve into a red giant once it has depleted its reserves of hydrogen. A star with 10 solar masses may have a surface temperature of around 20 000 K, still of spectral type B, but is about 1000 times more luminous than the Sun. Conditions within its core are so severe that, despite having much more hydrogen to begin with, the star will evolve rapidly into a red giant in around 100 million years. Figure 27.13 shows this information on an HR diagram.

Figure 27.13  The evolutionary pathways of stars with one, five and 10 solar masses, plotted as luminosity against surface temperature: the 10 solar mass star ends in a supernova; the five solar mass star in a nova, then disappears as a neutron star; the one solar mass star ends as a white dwarf

Useful websites


Stellar evolution and the HR diagram: http://www.mhhe.com/physsci/astronomy/applets/Hr/frame.html Simulations of stellar evolution with user input of the mass of the star: http://instruct1.cit.cornell.edu/courses/astro101/java/evolve/evolve.htm Stellar evolution on an HR diagram for stars with different masses, until near the end of their lives: http://www.astro.ubc.ca/~scharein/a311/Sim/hr/HRdiagram.html

27.5 The death of stars

■ Explain the concept of star death in relation to:
– planetary nebulae
– supernovae
– white dwarfs
– neutron stars/pulsars
– black holes

Figure 27.14  A Hubble Space Telescope photograph of the Cat's Eye Nebula

Planetary nebulae

Planetary nebulae are so called because of how they first appeared to astronomers using telescopes: the first planetary nebulae observed were fuzzy discs of light that looked similar to a planet. Improved resolution and photographic techniques revealed the true nature of the discs as glowing gas surrounding a central star. Planetary nebulae are formed when a star of about two solar masses or less comes to the final stages of fusing helium in the shell surrounding a core of carbon and oxygen that has built up as a product of the fusion. The star becomes unstable, but is not capable of fusing heavier elements because of its smaller mass. Eventually the star pulsates so violently that it throws off its outer layers of material into space. This gaseous material glows from the radiation being emitted from the remaining central part of the star. The left-over material no longer fuses elements, and collapses to form a white dwarf. The colours and patterns observed in fine detail make planetary nebulae some of the most spectacular objects in the sky. Figure 27.14 shows a planetary nebula known as the Cat's Eye, one of about 1500 such nebulae known in the Milky Way.

Supernovae

A very large star nearing the end of its evolutionary life cycle has built up layers of heavier elements, with iron being the heaviest possible; the heavier elements sink towards the core. The depletion of sufficient nuclear fuel for the process to continue results in the collapse of the star under its own gravitational force. When this happens, a huge amount of gravitational potential energy is released, superheating the collapsing matter and igniting an explosion of such force that the star is blown apart. The energy released can be greater than all the energy released in 10 billion years by the Sun and can, for a short time, rival the energy output of an entire galaxy. The matter spreading out from the exploding star glows and emits gamma radiation and X-rays. This highly luminous matter can become visible from Earth to the naked eye. Early astronomers wrongly assumed that this was a new star, hence the name; however, supernovae glow with such intensity for only a few days or weeks. The left-over material, if it has a mass of more than 1.4 solar masses, will collapse into a neutron star and possibly further into a black hole, from which not even light can escape. Another type of supernova, type Ia, is not associated with star death. White dwarf stars caught in a binary system may draw in matter from their partner star. This increases their mass until it triggers a burst of nuclear fusion. The outer layers of the star are blown off, causing it to increase in luminosity by a million times, again for only a short period of time.

White dwarfs

Figure 27.15  The Crab Nebula, remnants of the 1054 supernova

White dwarfs are not true stars, as they are not fusing hydrogen or other elements in their cores. A white dwarf is the collapsed inner portion of an older star that has remained behind after the planetary nebula stage. It is composed primarily of oxygen and carbon. The surface of a white dwarf is white hot due to the small surface area through which the residual heat is radiated. A white dwarf is about the same size as Earth, whose diameter is roughly one-hundredth that of the Sun. In time, a white dwarf will cool until it becomes almost undetectable; however, it is thought that no white dwarf has yet had sufficient time for this to occur. The mass of a white dwarf must be less than 1.4 solar masses, otherwise the gravitational force will cause electrons and protons to combine to form a neutron star. A white dwarf has a very low luminosity due to its comparatively small surface area. White dwarfs have a typical absolute magnitude of around +11 to +15; that is, between about 250 and 10 000 times less luminous than the Sun.
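Those luminosity ratios follow from the magnitude-ratio relation used in photometry, I_A/I_B = 100^((m_B − m_A)/5). A quick Python check (taking the Sun's absolute magnitude as about +4.8, an assumed value):

    def luminosity_ratio(M_bright, M_faint):
        # How many times more luminous the brighter object is,
        # given two absolute magnitudes
        return 100 ** ((M_faint - M_bright) / 5)

    M_sun = 4.8   # assumed absolute magnitude of the Sun
    print(luminosity_ratio(M_sun, 11))   # ~300: Sun vs an M = +11 white dwarf
    print(luminosity_ratio(M_sun, 15))   # ~12 000: Sun vs an M = +15 white dwarf

The results are of the same order as the figures quoted above.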

Neutron stars/pulsars

The residual matter left over after a star has passed through the planetary nebula or supernova stage may form a white dwarf. If the total residual mass exceeds 1.4 solar masses, however, the collapse of normal matter as we know it occurs as gravity forces the electrons and protons to merge and form a continuous type of extremely dense matter: a neutron star. Instead of being the size of the Earth, a neutron star may have a diameter of only 10 to 20 km. Like a spinning top, most stars possess angular momentum. This angular momentum is conserved as the star collapses, which results in the speed of rotation increasing. The original star may have been rotating with a period of days or weeks, but with the conservation of angular momentum, the period of rotation of the neutron star may be in the order of milliseconds. A dancer or ice-skater uses the same principle, pulling in their arms to increase their speed of rotation when spinning. When radio signals from pulsars were first detected using radio telescopes, their regularity was such that some thought they originated from an advanced life form. It is now known that rapidly spinning neutron stars can emit an intense, focused beam of radio waves, similar to a lighthouse and its sweeping beam of light. If Earth happens to lie in the line of this emission, radio telescopes pick up the regular radio pulses; the name 'pulsars' reflects this nature.
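A minimal sketch of that spin-up, treating the star as a uniform sphere so that conservation of angular momentum (Iω = constant, with I ∝ MR²) gives P_new = P_old × (R_new/R_old)²; the radii and the initial period below are illustrative assumptions:

    # Spin-up of a collapsing star from conservation of angular momentum
    P_old = 25 * 86400    # s: initial rotation period of about 25 days (assumed)
    R_old = 7.0e8         # m: radius of a Sun-like star (assumed)
    R_new = 1.0e4         # m: ~10 km neutron star

    P_new = P_old * (R_new / R_old) ** 2
    print(P_new * 1000, "ms")   # well under a second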

Black holes

With a mass greater than five solar masses, the remnant core of a supernova will contract into a neutron star and then continue to contract into what is known as a singularity. Surrounding this point in space is a region where gravity is so strong that not even light can escape. Black holes are believed to exist because of the emission of X-rays from material being accelerated as it is drawn in, and because of calculations of the mass that must exist at the centre of our galaxy. If a black hole were to pass between us and a distant galaxy or star, its gravity would bend the light around it, causing 'gravitational lensing': a brief brightening of the distant object. The accretion disc, depicted in Figure 27.16, is formed by the encircling material being consumed by the black hole. Black holes are perhaps the most mysterious objects in the Universe, and are the subject of much debate among astronomers. With an infinite density, the core of a black hole has no real dimensions, a concept referred to as a 'singularity'. Matter being pulled into a black hole disappears from view before reaching the singularity; the boundary at which it vanishes is called the 'event horizon'.

Figure 27.16  The accretion disk surrounding a black hole pulling matter from a grey star
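The size of the event horizon can be illustrated with the Schwarzschild radius, r = 2GM/c², a standard relation for a non-rotating black hole (not derived in this chapter). A minimal Python sketch:

    G = 6.67e-11     # N m^2 kg^-2, universal gravitational constant
    c = 3.00e8       # m s^-1, speed of light
    M_SUN = 2.0e30   # kg, one solar mass

    def schwarzschild_radius(mass_kg):
        # Radius of the event horizon for a non-rotating black hole
        return 2 * G * mass_kg / c ** 2

    print(schwarzschild_radius(5 * M_SUN) / 1000, "km")   # about 15 km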

chapter revision questions

1. Outline the stages in the formation of a star from a gas and dust (molecular) cloud.
2. Explain why the HR diagram plot of the 20 brightest stars differs significantly from the HR diagram plot of the 20 nearest stars.
3. What event signals the 'birth' of a star from the protostar stage?
4. Four protons are required to fuse to form a helium nucleus. Why are the proton–proton chain and the CNO cycle the suggested mechanisms for fusion within main-sequence stars rather than a single collision between four protons?
5. On suitable axes for an HR diagram, show the regions in which: (a) main-sequence stars; (b) red giants; (c) white dwarfs; and (d) protostars are found. Annotate the diagram to indicate the primary source of energy for stars or protostars in each of the regions.
6. Why is the existence of elements heavier than iron evidence for supernovae?
7. Describe what is meant by the term 'helium flash'.
8. The HR diagram plots of hundreds of stars in a globular cluster show that there are no main-sequence stars with a mass greater than the Sun. However, there are many red giant stars. What does this tell us about the approximate age of the globular cluster?
9. Outline the differences in the evolutionary paths of stars with one, five and 10 solar masses.
10. Why is a white dwarf not considered a true star?
