January 8, 2010


When the Soviet Union launched the first Sputnik satellite in 1957, the feat spurred the United States to intensify its own space exploration efforts. In 1958 the National Aeronautics and Space Administration (NASA) was founded to develop human spaceflight. NASA experienced its greatest growth during the 1960s. Among its achievements, NASA designed, manufactured, tested, and eventually used the Saturn rocket and the Apollo spacecraft for the first manned landing on the Moon in 1969. In the 1960s and 1970s, NASA also developed the first robotic space probes to explore the planets Mercury, Venus, and Mars. The success of the Mariner probes paved the way for the unmanned exploration of the outer planets of Earth's solar system.


In the 1970s through the 1990s, NASA focused its space exploration efforts on a reusable space shuttle, which was first deployed in 1981. In 1998 the space shuttle and its Russian counterpart, the Soyuz spacecraft, became the workhorses that enabled the construction of the International Space Station.

In 1900 the German physicist Max Planck proposed the then-sensational idea that energy is not continuously divisible but is always given off in set amounts, or quanta. Five years later, German-born American physicist Albert Einstein successfully used quanta to explain the photoelectric effect, which is the release of electrons when metals are bombarded by light. This, together with Einstein's special and general theories of relativity, challenged some of the most fundamental assumptions of the Newtonian era.
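Einstein's explanation amounts to a simple energy balance: an ejected electron carries off the photon's energy minus the energy needed to free it from the metal, called the work function. A minimal Python sketch of that balance; the 2.1 eV work function is an illustrative value, roughly that of cesium:

```python
# Photoelectric effect: kinetic energy = h*f - phi (Einstein, 1905).
# A photon must carry at least the work function phi to free an electron.
H = 6.626e-34      # Planck's constant, J*s
EV = 1.602e-19     # joules per electron volt

def ejected_electron_energy_ev(frequency_hz, work_function_ev):
    """Kinetic energy (eV) of an electron ejected by light of the given
    frequency, or None if the photon is below threshold."""
    photon_ev = H * frequency_hz / EV
    if photon_ev < work_function_ev:
        return None          # no emission, however intense the light
    return photon_ev - work_function_ev

# Illustrative work function of 2.1 eV (roughly that of cesium):
print(ejected_electron_energy_ev(1.0e15, 2.1))   # ultraviolet photon ejects an electron
print(ejected_electron_energy_ev(4.0e14, 2.1))   # red light: below threshold -> None
```

The threshold behaviour is the key point: below the work function, brighter light means more photons but still no ejected electrons.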

Unlike the laws of classical physics, quantum theory deals with events that occur on the smallest of scales. Quantum theory explains how subatomic particles form atoms, and how atoms interact when they combine to form chemical compounds. Quantum theory deals with a world where the attributes of any single particle can never be completely known, an idea known as the uncertainty principle, put forward by the German physicist Werner Heisenberg in 1927. But while there is uncertainty on the subatomic level, quantum physics successfully predicts the overall outcome of subatomic events, a fact that firmly relates it to the macroscopic world, that is, the one in which we live.
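The uncertainty principle can be stated quantitatively: the product of the uncertainties in a particle's position and momentum can never be smaller than half the reduced Planck constant. A small Python sketch of that bound, using an electron confined within an atom as the example:

```python
# Heisenberg uncertainty: delta_x * delta_p >= hbar / 2.
# Pinning down a particle's position forces a spread in its momentum.
HBAR = 1.055e-34   # reduced Planck constant, J*s

def min_momentum_spread(delta_x_m):
    """Smallest possible momentum uncertainty (kg*m/s) for a particle
    localized within delta_x_m metres."""
    return HBAR / (2 * delta_x_m)

# An electron confined to an atom (~1e-10 m across) cannot have a
# momentum spread smaller than about 5e-25 kg*m/s:
print(min_momentum_spread(1e-10))
```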

In 1934 Italian-born American physicist Enrico Fermi began a series of experiments in which he used neutrons (subatomic particles without an electric charge) to bombard atoms of various elements, including uranium. The neutrons combined with the nuclei of the uranium atoms to produce what he thought were elements heavier than uranium, known as transuranium elements. In 1939 other scientists demonstrated that in these experiments Fermi had not formed heavier elements, but instead had achieved the splitting, or fission, of the uranium atom's nucleus. These early experiments led to the development of fission as both an energy source and a weapon.

These fission studies, coupled with the development of particle accelerators in the 1950s, initiated a long and remarkable journey into the nature of subatomic particles that continues today. Scientists now know that, far from being indivisible, atoms are made up of 12 fundamental particles known as quarks and leptons, which combine in different ways to make all the kinds of matter currently known.

Advances in particle physics have been closely linked to progress in cosmology. From the 1920s onward, when the American astronomer Edwin Hubble showed that the universe is expanding, cosmologists have sought to rewind the clock and establish how the universe began. Today, most scientists believe that the universe started with a cosmic explosion some time between 10 and 20 billion years ago. However, the exact sequence of events surrounding its birth, and its ultimate fate, are still matters of ongoing debate.

Particle accelerators are devices used in physics to accelerate charged elementary particles or ions to high energies. Particle accelerators today are some of the largest and most expensive instruments used by physicists. They all have the same three basic parts: a source of elementary particles or ions, a tube pumped to a partial vacuum in which the particles can travel freely, and some means of speeding up the particles.

Charged particles can be accelerated by an electrostatic field. For example, by placing electrodes with a large potential difference at each end of an evacuated tube, the British physicist John D. Cockcroft and the Irish physicist Ernest Thomas Sinton Walton were able to accelerate protons to 250,000 eV. Another electrostatic accelerator is the Van de Graaff accelerator, which was developed in the early 1930s by the American physicist Robert Jemison Van de Graaff. This accelerator uses the same principles as the Van de Graaff generator. The Van de Graaff accelerator builds up a potential between two electrodes by transporting charges on a moving belt. Modern Van de Graaff accelerators can accelerate particles to energies as high as 15 MeV (15 million electron volts).
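The electron volt unit used throughout this article comes straight from electrostatic acceleration: a particle carrying one elementary charge gains exactly one eV of kinetic energy for every volt of potential difference it crosses. A Python sketch of this bookkeeping, using the 250,000 V figure above:

```python
# A particle of charge q crossing a potential difference V gains
# kinetic energy E = q * V.  This is why the electron volt (eV) is
# the natural energy unit of accelerator physics.
E_CHARGE = 1.602e-19   # elementary charge, coulombs

def energy_gain_ev(charge_multiples, volts):
    """Kinetic energy gained, in eV, by a particle carrying
    charge_multiples elementary charges across `volts` volts."""
    return charge_multiples * volts

def energy_gain_joules(charge_multiples, volts):
    return charge_multiples * E_CHARGE * volts

# A proton (charge +1e) crossing a 250,000 V gap:
print(energy_gain_ev(1, 250_000))       # 250,000 eV = 0.25 MeV
print(energy_gain_joules(1, 250_000))   # the same energy in joules
```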

Another machine, first conceived in the late 1920s, is the linear accelerator, or linac, which uses alternating voltages of high magnitude to push particles along in a straight line. Particles pass through a line of hollow metal tubes enclosed in an evacuated cylinder. An alternating voltage is timed so that a particle is pushed forward each time it goes through a gap between two of the metal tubes. Theoretically, a linac of any energy can be built. The largest linac in the world, at Stanford University, is 3.2 km (2 mi) long. It is capable of accelerating electrons to an energy of 50 GeV (50 billion, or giga, electron volts). Stanford's linac is designed to collide two beams of particles accelerated on different tracks of the accelerator.

The American physicist Ernest O. Lawrence won the 1939 Nobel Prize in physics for a breakthrough in accelerator design in the early 1930s. He developed the cyclotron, the first circular accelerator. A cyclotron is somewhat like a linac wrapped into a tight spiral. Instead of many tubes, the machine has only two hollow vacuum chambers, called dees, that are shaped like capital letter Ds placed back to back. A magnetic field, produced by a powerful electromagnet, keeps the particles moving in a circle. Each time the charged particles pass through the gap between the dees, they are accelerated. As the particles gain energy, they spiral out toward the edge of the accelerator until they gain enough energy to exit it. The world's most powerful cyclotron, the K1200, began operating in 1988 at the National Superconducting Cyclotron Laboratory at Michigan State University. The machine is capable of accelerating nuclei to an energy approaching 8 GeV.
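The cyclotron works because, at non-relativistic speeds, a particle's revolution frequency in a magnetic field does not depend on its speed, so a fixed-frequency accelerating voltage stays in step as the particles spiral outward. A Python sketch of that frequency; the 1.5 T field is an illustrative value, not that of any particular machine:

```python
import math

# Non-relativistic cyclotron: a charge q in a field B circles at
# frequency f = q*B / (2*pi*m), independent of its speed.  That
# constancy is what lets a fixed-frequency voltage accelerate the
# particles on every pass through the gap between the dees.
E_CHARGE = 1.602e-19    # elementary charge, C
M_PROTON = 1.673e-27    # proton mass, kg

def cyclotron_frequency_hz(charge_c, mass_kg, b_tesla):
    return charge_c * b_tesla / (2 * math.pi * mass_kg)

# Protons in an illustrative 1.5 T field circle about 23 million
# times per second, whether they are slow or spiralling near the rim:
print(cyclotron_frequency_hz(E_CHARGE, M_PROTON, 1.5))
```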

When nuclear particles in a cyclotron gain an energy of 20 MeV or more, they become appreciably more massive, as predicted by the theory of relativity. This tends to slow them and throws the acceleration pulses at the gaps between the dees out of phase. A solution to this problem was suggested in 1945 by the Soviet physicist Vladimir I. Veksler and the American physicist Edwin M. McMillan. The solution, the synchrocyclotron, is sometimes called the frequency-modulated cyclotron. In this instrument, the oscillator (radio-frequency generator) that accelerates the particles around the dees is automatically adjusted to stay in step with the accelerated particles; as the particles gain mass, the frequency of accelerations is lowered slightly to keep in step with them. As the maximum energy of a synchrocyclotron increases, so must its size, for the particles must have more space in which to spiral. The largest synchrocyclotron is the 600-cm (236-in) phasotron at the Dubna Joint Institute for Nuclear Research in Russia; it accelerates protons to more than 700 MeV and has magnets weighing 6984 metric tons (7200 tons).
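The drift out of phase can be made concrete. As kinetic energy T grows, the effective mass grows by the relativistic factor gamma = 1 + T/(mc²), and the revolution frequency falls by the same factor; the synchrocyclotron's oscillator tracks that falling frequency. A hedged Python sketch, again with an illustrative 1.5 T field:

```python
import math

# Relativity pulls a cyclotron's particles out of step: as kinetic
# energy T grows, the effective mass grows by gamma = 1 + T/(m*c^2),
# and the revolution frequency f = q*B / (2*pi*gamma*m) drops.
# A synchrocyclotron lowers its oscillator frequency to match.
E_CHARGE = 1.602e-19     # C
M_PROTON = 1.673e-27     # kg
M_PROTON_MEV = 938.3     # proton rest energy, MeV

def gamma(kinetic_mev, rest_mev=M_PROTON_MEV):
    return 1.0 + kinetic_mev / rest_mev

def revolution_frequency_hz(kinetic_mev, b_tesla):
    return E_CHARGE * b_tesla / (2 * math.pi * gamma(kinetic_mev) * M_PROTON)

# In an illustrative 1.5 T field, a 20 MeV proton already circles
# about 2 percent slower than a slow one:
print(revolution_frequency_hz(0.0, 1.5))
print(revolution_frequency_hz(20.0, 1.5))
```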

When electrons are accelerated, they undergo a large increase in mass at a relatively low energy. At an energy of 1 MeV, an electron weighs two and one-half times as much as an electron at rest. Synchrocyclotrons cannot be adapted to make allowance for such large increases in mass. Therefore, another type of cyclic accelerator, the betatron, is employed to accelerate electrons. The betatron consists of a doughnut-shaped evacuated chamber placed between the poles of an electromagnet. The electrons are kept in a circular path by a magnetic field called a guide field. By applying an alternating current to the electromagnet, the electromotive force induced by the changing magnetic flux through the circular orbit accelerates the electrons. During operation, both the guide field and the magnetic flux are varied to keep the radius of the orbit of the electrons constant.

The synchrotron is the most recent and most powerful member of the accelerator family. A synchrotron consists of a tube in the shape of a large ring through which the particles travel; the tube is surrounded by magnets that keep the particles moving through the centre of the tube. The particles enter the tube after having already been accelerated to several million electron volts. Particles are accelerated at one or more points on the ring each time the particles make a complete circle around the accelerator. To keep the particles in a rigid orbit, the strengths of the magnets in the ring are increased as the particles gain energy. In a few seconds, the particles reach energies greater than 1 GeV and are ejected, either directly into experiments or toward targets that produce a variety of elementary particles when struck by the accelerated particles. The synchrotron principle can be applied to either protons or electrons, although most of the large machines are proton-synchrotrons.
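The need to ramp the magnets follows from the orbit condition: to hold a fixed radius, the bending field must grow in proportion to the particles' momentum. A Python sketch under the simplifying assumption of a single uniform bending field (real synchrotrons use many discrete magnets), with illustrative energies and ring radius:

```python
# In a synchrotron the bending field must grow with the beam's
# momentum to hold a fixed orbit radius: B = p / (q * r).
# For highly relativistic particles, momentum p is approximately E/c.
E_CHARGE = 1.602e-19   # C
C = 2.998e8            # speed of light, m/s

def bending_field_tesla(energy_gev, radius_m):
    """Approximate dipole field needed to keep an ultrarelativistic,
    singly charged particle of the given energy on a circle of
    radius_m metres (uniform-field idealization)."""
    momentum_si = energy_gev * 1e9 * E_CHARGE / C   # kg*m/s
    return momentum_si / (E_CHARGE * radius_m)

# Illustrative: a 1 GeV beam in a ring of 10 m radius needs ~0.33 T;
# raise the energy tenfold and the magnets must supply ten times that.
print(bending_field_tesla(1.0, 10.0))
print(bending_field_tesla(10.0, 10.0))
```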

The first accelerator to exceed the 1 GeV mark was the cosmotron, a proton-synchrotron at Brookhaven National Laboratory in Brookhaven, New York. The cosmotron operated at 2.3 GeV in 1952 and was later increased to 3 GeV. In the mid-1960s, two operating synchrotrons were regularly accelerating protons to energies of about 30 GeV. These were the Alternating Gradient Synchrotron at Brookhaven National Laboratory and a similar machine near Geneva, Switzerland, operated by CERN (the European Organization for Nuclear Research). By the early 1980s, the two largest proton-synchrotrons were a 500-GeV device at CERN and a similar one at the Fermi National Accelerator Laboratory (Fermilab) near Batavia, Illinois. The capacity of the latter, called the Tevatron, was increased to a potential 1 TeV (trillion, or tera, eV) in 1983 by installing superconducting magnets, making it the most powerful accelerator in the world. In 1989, CERN began operating the Large Electron-Positron Collider (LEP), a 27-km (16.7-mi) ring that can accelerate electrons and positrons to an energy of 50 GeV.

A storage ring collider accelerator is a synchrotron that produces more energetic collisions between particles than a conventional synchrotron, which slams accelerated particles into a stationary target. A storage ring collider accelerates two sets of particles that rotate in opposite directions in the ring, then collides the two sets of particles. CERN's Large Electron-Positron Collider is a storage ring collider. In 1987, Fermilab converted the Tevatron into a storage ring collider and installed a three-story-high detector that observed and measured the products of the head-on particle collisions.
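The advantage of colliding beams can be estimated from special relativity: two beams meeting head on pool their full energy, while most of a single beam's energy against a stationary target goes into the forward motion of the debris. A Python sketch of the comparison for proton beams, using Tevatron-scale numbers for illustration:

```python
import math

# Why colliders: two 900 GeV beams meeting head on deliver their full
# combined energy, sqrt(s) = 2*E, to the collision.  A single 900 GeV
# beam hitting a stationary proton delivers far less useful energy:
# sqrt(s) = sqrt(2*m_p*E + 2*m_p^2), with energies in GeV.
M_PROTON_GEV = 0.938   # proton rest energy, GeV

def sqrt_s_collider_gev(beam_energy_gev):
    return 2.0 * beam_energy_gev

def sqrt_s_fixed_target_gev(beam_energy_gev):
    return math.sqrt(2.0 * M_PROTON_GEV * beam_energy_gev
                     + 2.0 * M_PROTON_GEV ** 2)

# Tevatron-scale beams, 900 GeV each:
print(sqrt_s_collider_gev(900.0))        # 1800 GeV available head on
print(sqrt_s_fixed_target_gev(900.0))    # only ~41 GeV against a fixed target
```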

As powerful as today's storage ring colliders are, physicists need even more powerful devices to test today's theories. Unfortunately, building larger rings is extremely expensive. CERN is considering building the Large Hadron Collider (LHC) in the existing 27-km (16.7-mi) tunnel that currently houses the Large Electron-Positron Collider. In 1988, the United States began planning for the construction of the Superconducting Super Collider (SSC) near Waxahachie, Texas. The SSC was to be an enormous storage ring collider accelerator 87 km (54 mi) long. However, after about one-fifth of the tunnel had been completed, the Congress of the United States voted to cancel the project in October 1993, as a result of the accelerator's projected cost of more than $10 billion.

Accelerators are used to explore atomic nuclei, thereby allowing nuclear scientists to identify new elements and to explain phenomena that affect the entire nucleus. Machines exceeding 1 GeV are used to study the fundamental particles that compose the nucleus. Several hundred of these particles have been identified. High-energy physicists hope to discover rules or principles that will permit an orderly arrangement of the profusion of subnuclear particles. Such an arrangement would be as useful to nuclear science as the periodic table of the chemical elements is to chemistry. Fermilab's accelerator and collider detector permit scientists to study violent particle collisions that mimic the state of the universe when it was just microseconds old. Continued study of their findings should increase scientific understanding of the makeup of the universe.

Particle detectors are instruments used to detect and study fundamental nuclear particles. They range in complexity from the well-known portable Geiger counter to room-sized spark and bubble chambers.

One of the first detectors used in nuclear physics was the ionization chamber, which consists essentially of a closed vessel containing a gas and equipped with two electrodes at different electrical potentials. Depending on the type of instrument, the electrodes may consist of parallel plates or coaxial cylinders, or the walls of the chamber may act as one electrode and a wire or rod inside the chamber as the other. When ionizing particles of radiation enter the chamber they ionize the gas between the electrodes. The ions thus produced migrate to the electrodes of opposite sign (negatively charged ions move toward the positive electrode, and vice versa), creating a current that may be amplified and measured directly with an electrometer (an electroscope equipped with a scale) or amplified and recorded by means of electronic circuits.

Ionization chambers adapted to detect individual ionizing particles of radiation are called counters. The Geiger-Müller counter is one of the most versatile and widely used instruments of this type. It was developed by the German physicist Hans Geiger from an instrument first devised by Geiger and the British physicist Ernest Rutherford; it was improved in 1928 by Geiger and by the German American physicist Walther Müller. The counting tube is filled with a gas or a mixture of gases at low pressure, the electrodes being the thin metal wall of the tube and a fine wire, usually made of tungsten, stretched lengthwise along the axis of the tube. A strong electric field maintained between the electrodes accelerates the ions; these then collide with atoms of the gas, detaching electrons and thus producing more ions. When the voltage is raised sufficiently, the rapidly increasing current produced by a single particle sets off a discharge throughout the counter. The pulse caused by each particle is amplified electronically and then actuates a loudspeaker or a mechanical or electronic counting device.

Detectors that enable researchers to observe the tracks that particles leave behind are called track detectors. Spark and bubble chambers are track detectors, as are the cloud chamber and nuclear emulsions. Nuclear emulsions resemble photographic emulsions but are thicker and not as sensitive to light. A charged particle passing through the emulsion ionizes silver grains along its track. These grains become black when the emulsion is developed and can be studied with a microscope.

The fundamental principle of the cloud chamber was discovered by the British physicist C. T. R. Wilson in 1896, although an actual instrument was not constructed until 1911. The cloud chamber consists of a vessel several centimetres or more in diameter, with a glass window on one side and a movable piston on the other. The piston can be dropped rapidly to expand the volume of the chamber. The chamber is usually filled with dust-free air saturated with water vapour. Dropping the piston causes the gas to expand rapidly and causes its temperature to fall. The air is now supersaturated with water vapour, but the excess vapour cannot condense unless ions are present. Charged nuclear or atomic particles produce such ions, and any such particles passing through the chamber leave behind them a trail of ionized particles upon which the excess water vapour will condense, thus making visible the course of the charged particle. These tracks can be photographed and the photographs then analysed to provide information on the characteristics of the particles.

Because the paths of electrically charged particles are bent or deflected by a magnetic field, and the amount of deflection depends on the energy of the particle, a cloud chamber is often operated within a magnetic field. The tracks of negatively and positively charged particles will curve in opposite directions. By measuring the radius of curvature of a track, physicists can determine the particle's momentum. Heavy nuclei such as alpha particles form thick and dense tracks, protons form tracks of medium thickness, and electrons form thin and irregular tracks. In a later refinement of Wilson's design, called a diffusion cloud chamber, a permanent layer of supersaturated vapour is formed between warm and cold regions. The layer of supersaturated vapour is continuously sensitive to the passage of particles, and the diffusion cloud chamber does not require the expansion of a piston for its operation. Although the cloud chamber has now been supplanted almost entirely by the bubble chamber and the spark chamber, it was used in making many important discoveries in nuclear physics.
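The curvature measurement rests on one relation: a charged particle in a field B follows a circle whose radius is proportional to its momentum, p = qBr. A Python sketch with illustrative field and radius values:

```python
# A charged particle in a magnetic field B bends into a circle of
# radius r, with momentum p = q * B * r.  Measuring the curvature of
# a cloud-chamber track therefore measures the particle's momentum,
# and the direction of the curve gives the sign of its charge.
E_CHARGE = 1.602e-19   # elementary charge, C

def momentum_from_track(b_tesla, radius_m, charge_multiples=1):
    """Momentum (kg*m/s) of a particle whose track has the given
    radius of curvature in the given field."""
    return charge_multiples * E_CHARGE * b_tesla * radius_m

def momentum_mev_per_c(b_tesla, radius_m):
    # Handy rule of thumb: p [MeV/c] ~ 300 * B [T] * r [m]
    return 299.8 * b_tesla * radius_m

# A singly charged track curving with r = 0.1 m in a 1 T field:
print(momentum_from_track(1.0, 0.1))
print(momentum_mev_per_c(1.0, 0.1))    # ~30 MeV/c
```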

The bubble chamber, invented in 1952 by the American physicist Donald Glaser, is similar in operation to the cloud chamber. In a bubble chamber a liquid is momentarily superheated to a temperature just above its boiling point. For an instant the liquid will not boil unless some impurity or disturbance is introduced. High-energy particles provide such a disturbance. Tiny bubbles form along the tracks as these particles pass through the liquid. If a photograph is taken just after the particles have crossed the chamber, these bubbles will make visible the paths of the particles. As with the cloud chamber, a bubble chamber placed between the poles of a magnet can be used to measure the energies of the particles. Many bubble chambers are equipped with superconducting magnets instead of conventional magnets. Bubble chambers filled with liquid hydrogen allow the study of interactions between the accelerated particles and the hydrogen nuclei.

In a spark chamber, incoming high-energy particles ionize the air or a gas between plates and wire grids that are kept alternately positively and negatively charged. Sparks jump along the paths of ionization and can be photographed to show particle tracks. In some spark-chamber installations, information on particle tracks is fed directly into electronic computer circuits without the necessity of photography. A spark chamber can be operated quickly and selectively. The instrument can be set to record particle tracks only when a particle of the type that the researchers want to study is produced in a nuclear reaction. This advantage is important in studies of the rarer particles; spark-chamber pictures, however, lack the resolution and detail of bubble-chamber pictures.

The scintillation counter works because charged particles moving at high speed within certain transparent solids and liquids, known as scintillating materials, cause flashes of visible light. The gases argon, krypton, and xenon produce ultraviolet light and hence are also used in scintillation counters. A primitive scintillation device, known as the spinthariscope, was invented in the early 1900s and was of considerable importance in the development of nuclear physics. The spinthariscope required, however, the counting of the scintillations by eye. Because of the uncertainties of this method, physicists turned to other detectors, including the Geiger-Müller counter. The scintillation method was revived in 1947 by placing the scintillating material in front of a photomultiplier tube, a type of photoelectric cell. The light flashes are converted into electrical pulses that can be amplified and recorded electronically.

Various organic and inorganic substances such as plastic, zinc sulfide, sodium iodide, and anthracene are used as scintillating materials. Certain substances react more favourably to specific types of radiation than others, making possible highly diversified instruments. The scintillation counter is superior to all other radiation-detecting devices in a number of fields of current research. It has replaced the Geiger-Müller counter in the detection of biological tracers and as a surveying instrument in prospecting for radioactive ores. It is also used in nuclear research, notably in the investigation of such particles as the antiproton, the meson, and the neutrino. One such counter, the Crystal Ball, has been in use since 1979 for advanced particle research, first at the Stanford Linear Accelerator Center and, since 1982, at the German Electron Synchrotron Laboratory (DESY) in Hamburg, Germany. The Crystal Ball is a hollow crystal sphere, about 2.1 m (7 ft) wide, that is surrounded by 730 sodium iodide crystals.

Many other types of interactions between matter and elementary particles are used in detectors. Thus in semiconductor detectors, electron-hole pairs that elementary particles produce in a semiconductor junction momentarily increase the electric conduction across the junction. The Cherenkov detector, on the other hand, makes use of the effect discovered by the Russian physicist Pavel Alekseyevich Cherenkov in 1934: a particle emits light when it passes through a nonconducting medium at a velocity higher than the velocity of light in that medium (the velocity of light in glass, for example, is lower than the velocity of light in a vacuum). In Cherenkov detectors, materials such as glass, plastic, water, or carbon dioxide serve as the medium in which the light flashes are produced. As in scintillation counters, the light flashes are detected with photomultiplier tubes.

Neutral particles such as neutrons or neutrinos can be detected by nuclear reactions that occur when they collide with nuclei of certain atoms. Slow neutrons produce easily detectable alpha particles when they collide with boron nuclei in boron trifluoride. Neutrinos, which barely interact with matter, are detected in huge tanks containing perchloroethylene (C2Cl4, a dry-cleaning fluid). Neutrinos that collide with chlorine nuclei produce radioactive argon nuclei. The perchloroethylene tank is flushed at regular intervals, and the newly formed argon atoms, present in minute amounts, are counted. This type of neutrino detector, placed deep underground to shield it against cosmic radiation, is currently used to measure the neutrino flux from the sun. Neutrino detectors may also take the form of scintillation counters, the tank in this case being filled with an organic liquid that emits light flashes when traversed by electrically charged particles produced by the interaction of neutrinos with the liquid's molecules.

The detectors now being developed for use with the storage rings and colliding particle beams of the most recent generation of accelerators include time-projection chambers. They can measure three-dimensionally the tracks produced by particles from colliding beams, with supplementary detectors to record other particles resulting from the high-energy collisions. The Fermi National Accelerator Laboratory's CDF (Collider Detector at Fermilab) is used with its colliding-beam accelerator to study head-on particle collisions. CDF's three different systems can capture or account for nearly all of the subnuclear fragments released in such violent collisions.

High-energy particle physicists are using particle accelerators measuring 8 km (5 mi) across to study something billions of times too small to see. Why? To find out what everything is made of and where it comes from. These physicists are constructing and testing new theories about objects called superstrings. Superstrings may explain the nature of space and time and of everything in them, from the light you are using to read these words to black holes so dense that they can capture light forever. Possibly the smallest objects allowed by the laws of physics, superstrings may tell us about the largest event of all time: the big bang, and the creation of the universe!

These are exciting ideas, still strange to most people. For the past 100 years physicists have descended to deeper and deeper levels of structure, into the heart of matter and energy and of existence itself. Read on to follow their progress.

The world around us, full of books, computers, mountains, lakes, and people, is made by rearranging slightly more than 100 chemical elements. Oxygen, hydrogen, carbon, and nitrogen are elements especially important to living things; silicon is especially important to computer chips.

The smallest recognizable form in which a chemical element occurs is the atom, and the atoms of one element are unlike the atoms of any other element. Every atom has a small core called a nucleus around which electrons swarm. Electrons, tiny particles with a negative electrical charge, determine the chemical properties of an element, that is, how it interacts with other atoms to make the things around us. Electrons also are what move through wires to make light, heat, and video games.

In 1869, before anyone knew anything about nuclei or electrons, Russian chemist Dmitry Mendeleyev grouped the elements according to their physical qualities and discovered the periodic law. He was able to predict the qualities of elements that had not yet been discovered. By the early 1900s scientists had discovered the nucleus and electrons.

Atoms stick together and form larger objects called molecules because of a force called electromagnetism. The best - known form of electromagnetism is radiation: light, radio waves, X rays, and infrared and ultraviolet radiation.

Modern physics starts with light and other forms of electromagnetic radiation. In 1900 German physicist Max Planck proposed the quantum theory, which says that light comes in units of energy called quanta. As we will explain, these units of light are waves and they are also particles. Light is simultaneously energy and matter. And so is everything else.

It was Albert Einstein who first proposed (in 1905) that Planck's units of light can be considered particles. He named these particles photons. In the same year, Einstein published what is known as the special theory of relativity. According to this theory, the speed of light is actually the fastest that anything in the universe can go, and all forms of electromagnetic radiation are forms of light, moving at the same speed.

What differentiates radio waves, visible light, and X rays is their energy. This energy is directly related to the wavelength. Light waves, like ocean waves, have peaks and troughs that repeat at regular intervals, and wavelength is the distance between each pair of peaks (or troughs). The shorter the wavelength, the higher the energy.
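The relation can be written as E = hc/wavelength. A Python sketch comparing one illustrative wavelength from each band:

```python
# Photon energy is set by wavelength: E = h*c / wavelength.
# Shorter wavelength means higher energy; this single relation
# separates radio waves, visible light, and X rays.
H = 6.626e-34      # Planck's constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electron volt

def photon_energy_ev(wavelength_m):
    return H * C / wavelength_m / EV

# One illustrative wavelength from each band:
print(photon_energy_ev(1.0))        # 1 m radio wave:    ~0.000001 eV
print(photon_energy_ev(500e-9))     # green light:       ~2.5 eV
print(photon_energy_ev(0.1e-9))     # 0.1 nm X ray:      ~12,400 eV
```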

How does this relate to our story? It turns out that the process by which electrons interact is an exchange of photons (particles of light). Therefore we can study electrons by probing them with photons.

To really understand what things are made of, we must probe them or move them around and thus learn how they work. In the case of electrons, physicists probe them with photons, the particles that carry the electromagnetic force.

While some physicists studied electrons and photons, others pondered and probed the atomic nucleus. The nucleus of each chemical element contains a distinctive number of positively charged protons and a number of uncharged neutrons that can vary slightly from atom to atom. Protons and neutrons are the source of radioactivity and of nuclear energy. In 1964 physicists suggested that protons and neutrons are made of still smaller particles they called quarks.

Probing protons and neutrons requires particles with extremely high energies. Particle accelerators are large machines for bringing particles to these high energies. These machines have to be big, because they accelerate particles by applying force many times, over long distances. Some particle accelerators are the largest machines ever constructed. This is rather ironic given that these are delicate scientific instruments designed to probe the shortest distances ever investigated.

The proposal and acceptance of quarks were a major step in putting together what is called the standard model of particles and forces. This unified theory describes all of the fundamental particles, from which everything is made, and how they interact. There are twelve kinds of fundamental particles: six kinds of quarks and six kinds of leptons, including the electron.
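The twelve matter particles fall into a simple pattern of three "generations", each containing a pair of quarks and a pair of leptons; ordinary matter is built from the first generation alone. A sketch of the tally in Python:

```python
# The twelve fundamental matter particles of the standard model,
# arranged by generation.  Everyday matter uses only the first
# generation: up and down quarks (inside protons and neutrons)
# plus the electron.
STANDARD_MODEL = {
    "quarks": [
        ("up", "down"),        # generation 1
        ("charm", "strange"),  # generation 2
        ("top", "bottom"),     # generation 3
    ],
    "leptons": [
        ("electron", "electron neutrino"),   # generation 1
        ("muon", "muon neutrino"),           # generation 2
        ("tau", "tau neutrino"),             # generation 3
    ],
}

n_particles = sum(len(pair) for family in STANDARD_MODEL.values()
                  for pair in family)
print(n_particles)   # 6 quarks + 6 leptons = 12
```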

Four forces are believed to control all the interactions of these fundamental particles. They are the strong force, which holds the nucleus together; the weak force, responsible for radioactivity; the electromagnetic force, which provides electric charge and binds electrons to atomic nuclei; and gravitation, which holds us on Earth. The standard model identifies a force-carrying particle to correspond with three of these forces. The photon, for example, carries the electromagnetic force. Physicists have not yet detected a particle that carries gravitation.

Powerful mathematical techniques called gauge field theories allow physicists to describe, calculate, and predict the interactions of these particles and forces. Gauge theories combine quantum physics and special relativity into consistent equations that produce extremely accurate results. The extraordinary precision of quantum electrodynamics, for example, has filled our world with ultrareliable lasers and transistors.

The mathematical rules that come together in the standard model can explain every particle physics phenomenon that we have ever seen. Physicists can explain forces; they can explain particles. But they cannot yet explain why forces and particles are what they are. Basic properties, such as the speed of light, must be taken from measurements. And physicists cannot yet provide a satisfactory description of gravity.

The basic behaviour of gravity was taught to us by the English physicist Sir Isaac Newton. In 1915, a decade after publishing his special theory of relativity, Albert Einstein clarified and extended Newton's explanation with his own description of gravity, known as general relativity. Not even Einstein, however, could unite gravity with the rest of physics in a single unified field theory. Since everything else is governed by quantum physics on small scales, what is the quantum theory of gravity? No one has yet proposed a satisfactory answer to this question. Physicists have been trying to find one for a long time.

At first, this might not seem to be an important problem. Compared with other forces, gravity is extremely weak. We are aware of its action in everyday life because its pull corresponds to mass, and Earth has a huge amount of mass and hence a big gravitational pull. Fundamental particles have tiny masses and hence a minuscule gravitational pull. So couldn’t we just ignore gravity when studying fundamental particles? The ability to ignore gravity on this scale is why we have made so much progress in particle physics over so many years without possessing a theory of quantum gravity.
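Gravity's weakness can be quantified by comparing the gravitational and electrostatic forces between two protons; because both forces fall off as the square of the distance, their ratio is the same at any separation. A Python sketch:

```python
# How weak is gravity?  Compare the gravitational and electrostatic
# forces between two protons.  Both follow an inverse-square law, so
# their ratio does not depend on the distance chosen.
G = 6.674e-11          # gravitational constant, m^3 / (kg s^2)
K_COULOMB = 8.988e9    # Coulomb constant, N m^2 / C^2
M_PROTON = 1.673e-27   # proton mass, kg
E_CHARGE = 1.602e-19   # elementary charge, C

ratio = (G * M_PROTON ** 2) / (K_COULOMB * E_CHARGE ** 2)
print(ratio)   # ~8e-37: gravity is ~36 orders of magnitude weaker
```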

There are several reasons, however, why we cannot ignore gravity forever. One reason is simply that scientists want to know the whole story. A second reason is that gravity, as Einstein taught us, is the essential physics of space and time. If this physics is not subject to the same quantum laws that any other physics is subject to, something is wrong somewhere. A third reason is that an understanding of quantum gravity is necessary to deal with some important questions in cosmology - for example, how did the universe get to be the way it is, and why did galaxies form?

Gravitation is understood to propagate in waves, and physicists theorize the existence of a corresponding particle, the graviton. The force of gravity, like everything else, has a natural quantum length. For gravity it is about 10⁻³⁵ m, a scale known as the Planck length. This is about a hundred billion billion times smaller than a proton.
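As a rough check on these scales, the quantum length of gravity is conventionally identified with the Planck length, which can be computed directly from the fundamental constants. A minimal sketch in Python (the constant values are standard CODATA figures; the variable names are invented for illustration):

```python
import math

# Physical constants in SI units (standard CODATA values).
hbar = 1.054_571_8e-34   # reduced Planck constant, J*s
G    = 6.674_30e-11      # gravitational constant, m^3 kg^-1 s^-2
c    = 2.997_924_58e8    # speed of light, m/s

# The Planck length: l_P = sqrt(hbar * G / c^3).
l_planck = math.sqrt(hbar * G / c**3)

proton_diameter = 1.7e-15  # metres, approximate

print(f"Planck length: {l_planck:.2e} m")                    # ~1.6e-35 m
print(f"Proton / Planck ratio: {proton_diameter / l_planck:.1e}")
```

The ratio comes out near 10²⁰, which is why no conceivable accelerator can probe this distance directly.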

We can't build an accelerator to probe that distance using today’s technology, because the proportions of size and energy show that it would stretch from here to the stars! But we know that the universe began with the big bang, when all matter and force originated. Everything we know about today follows from the period after the big bang, when the universe expanded. Everything we know indicates that in the fractions of a second following the big bang, the universe was extremely small and dense. At some earliest time, the entire universe was no larger across than the quantum length of gravity. If we are to understand the true nature of where everything comes from and how it really fits together, we must understand quantum gravity!

These questions may seem almost metaphysical. Physicists now suspect that research in this direction will answer many other questions about the standard model - such as why there are so many different fundamental particles. Other questions are more immediately practical. Our control of technology arises from our understanding of particles and forces. Answers to physicists’ questions could increase computing power or help us find new sources of energy. They will shape the 21st century as quantum physics shaped the 20th.

Among the most promising new theories is the idea that everything is made of fundamental ‘strings,’ rather than of another layer of tiny particles. The best analogy for these minute entities is a guitar or violin string, which vibrates to produce notes of different frequencies and wavelengths. Superstring theory proposes that if we were able to look closely enough at a fundamental particle - at quantum-length distances - we would see a tiny, vibrating loop!

In this view, all the different types of fundamental particles that we find in the standard model are really just different vibrations of the same string, which can split and join in ways that change its evident nature. This is the case not only for particles of matter, such as quarks and electrons, but also for force-carrying particles, such as photons.

This is a very clever idea, since it unifies everything we have learned in a simple way. In its details, the theory is extremely complicated but very promising. For example, the superstring theory very naturally describes the graviton among its vibrations, and it also explains the quantum properties of many types of black holes. There are also signs that the quantum length of gravity is really the smallest physically possible distance. Below this scale, points in space and time are no longer connected in sequence, so distances cannot be measured or described. The very notions of space, time, and distance seem to stop making sense.

Recent discoveries have shown that the five leading versions of superstring theory are all contained within a powerful complex known as M-Theory. M-Theory says that entities mathematically resembling membranes and other extended objects may also be important. The end of the story has not yet been written, however. Physicists are still working out the details, and it will take many years to be confident that this approach is correct and comprehensive. Much remains to be learned, and surprises are guaranteed. In the quest to probe these small distances, experimentally and theoretically, our understanding of nature is forever enriched, and we approach at least a part of ultimate truth.

Elementary Particles, in physics, are particles that cannot be broken down into any other particles. The term elementary particle is also used more loosely to include some subatomic particles that are composed of other particles. Particles that cannot be broken down further are sometimes called fundamental particles to avoid confusion. These fundamental particles provide the basic units that make up all matter and energy in the universe.

Scientists and philosophers have sought to identify and study elementary particles since ancient times. Aristotle and other ancient Greek philosophers believed that all things were composed of four elementary materials: fire, water, air, and earth. People in other ancient cultures developed similar notions of basic substances. As early scientists began collecting and analysing information about the world, they showed that these materials were not fundamental but were made of other substances.

In the 1800s British physicist John Dalton was so sure he had identified the most basic objects that he called them atoms (from the Greek word for ‘indivisible’). By the early 1900s scientists were able to break apart these atoms into particles that they called the electron and the nucleus. Electrons surround the dense nucleus of an atom. In the 1930s, researchers showed that the nucleus consists of smaller particles, called the proton and the neutron. Today, scientists have evidence that the proton and neutron are themselves made up of even smaller particles, called quarks.

Scientists now believe that quarks and three other types of particles - leptons, force-carrying bosons, and the Higgs boson - are truly fundamental and cannot be split into anything smaller. In the 1960s American physicists Steven Weinberg and Sheldon Glashow and Pakistani physicist Abdus Salam developed a mathematical description of the nature and behaviour of elementary particles. Their theory, known as the standard model of particle physics, has greatly advanced understanding of the fundamental particles and forces in the universe. Yet some questions about particles remain unanswered by the standard model, and physicists continue to work toward a theory that would explain even more about particles.

Everything in the universe, from elementary particles and atoms to people, houses, and planets, can be classified into one of two categories: fermions (pronounced FUR-me-onz) or bosons (pronounced BO-zonz). The behaviour of a particle or group of particles, such as an atom or a house, determines whether it is a fermion or boson. The distinction between these two categories is not noticeable on the large scale of people or houses, but it has profound implications in the world of atoms and elementary particles. Fundamental particles are classified according to whether they are fermions or bosons. Fundamental fermions combine to form atoms and other more unusual particles, while fundamental bosons carry forces between particles and give particles mass.

In 1925 Austrian-born American physicist Wolfgang Pauli formulated a rule of physics that helped define fermions. He suggested that no two electrons can have the same properties and locations. He proposed this exclusion principle to explain why all of the electrons in atoms have slightly different amounts of energy. In 1926 Italian-born American physicist Enrico Fermi and British physicist Paul Dirac developed equations that describe electron behaviour, providing mathematical proof of the exclusion principle. Physicists call particles that obey the exclusion principle fermions in honour of Fermi. Protons, neutrons, and the quarks that compose them are all examples of fermions.

Some particles, such as particles of light called photons, do not obey the exclusion principle. Two or more photons can have the same characteristics. In 1925 German-born American physicist Albert Einstein and Indian mathematician Satyendra Bose developed a set of equations describing the behaviour of particles that do not obey the exclusion principle. Particles that obey the equations of Bose and Einstein are called bosons, in honour of Bose.

Classifying particles as either fermions or bosons is similar to classifying whole numbers as either odd or even. No number is both odd and even, yet every whole number is either odd or even. Similarly, particles are either fermions or bosons. Sums of odd and even numbers are either odd or even, depending on how many odd numbers were added. Adding two odd numbers yields an even number, but adding a third odd number makes the sum odd again. Adding any number of even numbers yields an even sum. In a similar manner, combining an even number of fermions yields a boson, while combining an odd number of fermions results in a fermion. Adding any number of bosons yields a boson.

For example, a hydrogen atom contains two fermions: an electron and a proton. But the atom itself is a boson because it contains an even number of fermions. According to the exclusion principle, the electron inside the hydrogen atom cannot have the same properties as another electron nearby. However, the hydrogen atom itself, as a boson, does not follow the exclusion principle. Thus, one hydrogen atom can be identical to another hydrogen atom.

A particle composed of three fermions, on the other hand, is a fermion. An atom of heavy hydrogen, also called deuterium, is a hydrogen atom with a neutron added to the nucleus. A deuterium atom contains three fermions: one proton, one electron, and one neutron. Since it contains an odd number of fermions, the deuterium atom too is a fermion. Just like its constituent particles, it must obey the exclusion principle. It cannot have the same properties as another deuterium atom.
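The odd/even bookkeeping described above can be sketched in a few lines of Python (the function name is invented for illustration):

```python
# A composite of an odd number of fermions is itself a fermion;
# a composite of an even number of fermions is a boson.
def composite_statistics(fermion_count: int) -> str:
    return "fermion" if fermion_count % 2 == 1 else "boson"

# Hydrogen atom: electron + proton = 2 fermions -> boson
print(composite_statistics(2))  # boson

# Deuterium atom: electron + proton + neutron = 3 fermions -> fermion
print(composite_statistics(3))  # fermion
```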

The differences between fermions and bosons have important implications. If electrons did not obey the exclusion principle, all electrons in an atom could have the same energy and be identical. If all of the electrons in an atom were identical, different elements would not have such different properties. For example, metals conduct electricity better than plastics do because the arrangement of the electrons in their atoms and molecules differs. If electrons were bosons, their arrangements could be identical in these atoms, and devices that rely on the conduction of electricity, such as televisions and computers, would not work. Photons, on the other hand, are bosons, so a group of photons can all have identical properties. This characteristic allows the photons to form a coherent beam of identical particles called a laser.

The most fundamental particles that make up matter fall into the fermion category. These fermions cannot be split into anything smaller. The particles that carry the forces acting on matter and antimatter are bosons called force carriers. Force carriers are also fundamental particles, so they cannot be split into anything smaller. These bosons carry the four basic forces in the universe: the electromagnetic, the gravitational, the strong (the force that holds the nuclei of atoms together), and the weak (the force that causes atoms to decay radioactively). Scientists believe another type of fundamental boson, called the Higgs boson, gives matter and antimatter mass. Scientists have yet to discover definitive proof of the existence of the Higgs boson.

Ordinary matter makes up all the objects and materials familiar to life on Earth, including people, cars, buildings, mountains, air, and clouds. Stars, planets, and other celestial bodies also contain ordinary matter. The fundamental fermions that make up matter fall into two categories: leptons and quarks. Each lepton and quark has an antiparticle partner, with the same mass but opposite charge. Leptons and quarks differ from each other in two main ways: (1) the electric charge they carry and (2) the way they interact with each other and with other particles. Scientists usually state the electric charge of a particle as a multiple of the electric charge of a proton, which is 1.602 × 10⁻¹⁹ coulombs. Leptons have electric charges of either -1 or 0 (neutral), with their antiparticles having charges of +1 or 0. Quarks have electric charges of either +⅔ or -⅓. Antiquarks have electric charges of either -⅔ or +⅓. Leptons interact rather weakly with one another and with other particles, while quarks interact strongly with one another.

Leptons and quarks each come in six varieties. Scientists divide these 12 basic types into three groups, called generations. Each generation consists of two leptons and two quarks. All ordinary matter consists of just the first generation of particles. The particles in the second and third generations tend to be heavier than their counterparts in the first generation. These heavier, higher-generation particles decay, or spontaneously change, into their first generation counterparts. Most of these decays occur very quickly, and the particles in the higher generations exist for an extremely short time (a millionth of a second or less). Particle physicists are still trying to understand the role of the second and third generations in nature.

Scientists divide leptons into two groups: particles that have electric charges and particles, called neutrinos, that are electrically neutral. Each of the three generations contains a charged lepton and a neutrino. The first generation of leptons consists of the electron (e-) and the electron neutrino (νe); the second generation, the muon (µ) and the muon neutrino (νµ); and the third generation, the tau (τ) and the tau neutrino (ντ).

The electron is probably the most familiar elementary particle. Electrons are about 2,000 times lighter than protons and have an electric charge of –1. They are stable, so they can exist independently (outside an atom) for an infinitely long time. All atoms contain electrons, and the behaviour of electrons in atoms distinguishes one type of atom from another. When atoms radioactively decay, they sometimes emit an electron in a process called beta decay.

Studies of beta decay led to the discovery of the electron neutrino, the first generation lepton with no electric charge. Atoms release neutrinos, along with electrons, when they undergo beta decay. Electron neutrinos might have a tiny mass, but their mass is so small that scientists have not been able to measure it or conclusively confirm that the particles have any mass at all.

Physicists discovered a particle heavier than the electron but lighter than a proton in studies of high-energy particles created in Earth’s atmosphere. This particle, called the muon (pronounced MYOO-on), is the second generation charged lepton. Muons have an electric charge of -1 and a half-life of 1.52 microseconds (a microsecond is one-millionth of a second). Unlike electrons, they do not make up everyday matter. Muons live their brief lives in the atmosphere, where heavier particles called pions decay into muons and other particles. The electrically neutral partner of the muon is the muon neutrino. Muon neutrinos, like electron neutrinos, have either a tiny mass too small to measure or no mass at all. They are released when a muon decays.

The third generation charged lepton is the tau. The tau has an electric charge of -1 and almost twice the mass of a proton. Scientists have detected taus only in laboratory experiments. The average lifetime of taus is extremely short - only 0.3 picoseconds (a picosecond is one-trillionth of a second). Scientists believe the tau has an electrically neutral partner called the tau neutrino. While scientists have never detected a tau neutrino directly, they believe they have seen the effects of tau neutrinos during experiments. Like the other neutrinos, the tau neutrino has a very small mass or no mass at all.

The fundamental particles that make up protons and neutrons are called quarks. Like leptons, quarks come in six varieties, or ‘flavours,’ divided into three generations. Unlike leptons, however, quarks never exist alone - they are always combined with other quarks. In fact, quarks cannot be isolated even with the most advanced laboratory equipment and processes. Scientists have had to determine the charges and approximate masses of quarks mathematically by studying particles that contain quarks.

Quarks are unique among all elementary particles in that they have fractional electric charges - either +⅔ or -⅓. In an observable particle, the fractional charges of the quarks in the particle add up to an integer charge for the combination.

The first generation quarks are designated up (u) and down (d); the second generation, charm (c) and strange (s); and the third generation, top (t) and bottom (b). The odd names for quarks do not describe any aspect of the particles; they merely give scientists a way to refer to a particular type of quark.

The up quark and the down quark make up protons and neutrons in atoms, as described below. The up quark has an electric charge of +⅔, and the down quark has a charge of -⅓. The second generation quarks have greater mass than those in the first generation. The charm quark has an electric charge of +⅔, and the strange quark has a charge of -⅓. The heaviest quarks are the third generation top and bottom quarks. Some scientists originally called the top and bottom quarks truth and beauty, but those names have dropped out of use. The top quark has an electric charge of +⅔, and the bottom quark has a charge of -⅓. The up quark, the charm quark, and the top quark behave similarly and are called up-type quarks. The down quark, the strange quark, and the bottom quark are called down-type quarks because they share the same electric charge.

Particles made of quarks are called hadrons (pronounced HA-dronz). Hadrons are not fundamental, since they consist of quarks, but they are commonly included in discussions of elementary particles. Two classes of hadrons can be found in nature: mesons (pronounced ME-zonz) and baryons (pronounced BARE-ee-onz).

Mesons contain a quark and an antiquark (the antiparticle partner of the quark). Since they contain two fermions, mesons are bosons. The first meson that scientists detected was the pion. Pions exist as intermediary particles in the nuclei of atoms, forming from and being absorbed by protons and neutrons. The pion comes in three varieties: a positive pion (π+), a negative pion (π-), and an electrically neutral pion (π0). The positive pion consists of an up quark and a down antiquark. The up quark has charge +⅔ and the down antiquark has charge +⅓, so the charge on the positive pion is +1. Positive pions have an average lifetime of 26 nanoseconds (a nanosecond is one-billionth of a second). The negative pion contains an up antiquark and a down quark, so the charge on the negative pion is -⅔ plus -⅓, or -1. It has the same mass and average lifetime as the positive pion. The neutral pion contains an up quark and an up antiquark, so the electric charges cancel each other. It has an average lifetime of about 0.09 femtoseconds (a femtosecond is one-quadrillionth of a second).

Many other mesons exist. All six quarks play a part in the formation of mesons, although mesons containing heavier quarks have very short lifetimes. Other mesons include the kaons (pronounced KAY-ons) and the D particles. Kaons (K) and Ds come in several different varieties, just as pions do. All varieties of kaons and some varieties of Ds contain either a strange quark or a strange antiquark. All Ds contain either a charm quark or a charm antiquark.

Three quarks together form a baryon. A baryon contains an odd number of fermions, so it is a fermion itself. Protons, the positively charged particles in all atomic nuclei, are baryons that consist of two up quarks and a down quark. Adding the charges of two up quarks and a down quark, +⅔ plus +⅔ plus -⅓, produces a net charge of +1, the charge of the proton. Protons have never been observed to decay.

The neutrons found inside atoms are baryons as well. A neutron consists of one up quark and two down quarks. Adding these charges gives +⅔ plus -⅓ plus -⅓ for a net charge of 0, making the neutron electrically neutral. Neutrons have a slightly greater mass than protons and an average lifetime of about 880 seconds.
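The charge arithmetic for the proton, neutron, and pions can be checked mechanically. A minimal Python sketch (the quark-list notation, with '~' marking an antiquark, is invented for illustration):

```python
from fractions import Fraction

# Electric charges of the quarks, in units of the proton charge.
# Antiquarks carry the opposite charge.
QUARK_CHARGE = {
    "u": Fraction(2, 3), "c": Fraction(2, 3), "t": Fraction(2, 3),
    "d": Fraction(-1, 3), "s": Fraction(-1, 3), "b": Fraction(-1, 3),
}

def hadron_charge(quarks):
    """Sum the quark charges; a leading '~' marks an antiquark."""
    total = Fraction(0)
    for q in quarks:
        if q.startswith("~"):
            total -= QUARK_CHARGE[q[1:]]
        else:
            total += QUARK_CHARGE[q]
    return total

print(hadron_charge(["u", "u", "d"]))   # proton: 1
print(hadron_charge(["u", "d", "d"]))   # neutron: 0
print(hadron_charge(["u", "~d"]))       # positive pion: 1
print(hadron_charge(["~u", "d"]))       # negative pion: -1
```

Every observable combination sums to an integer, in line with the rule that fractional charges never appear in isolation.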

Many other baryons exist, and many contain quarks other than the up and down flavours. For example, lambda (Λ) and sigma (Σ) particles contain strange, charm, or bottom quarks. For lambda particles, the average lifetime ranges from 200 femtoseconds to 1.2 picoseconds. The average lifetime of sigma particles ranges from 0.0007 femtoseconds to 150 picoseconds.

British physicist Paul Dirac proposed an early theory of particle interactions in 1928. His theory predicted the existence of antiparticles, which combine to form antimatter. Antiparticles have the same mass as their normal particle counterparts, but they have several opposite quantities, such as electric charge and colour charge. Colour charge determines how particles react with one another under the strong force (the force that holds the nuclei of atoms together), just as electric charge determines how particles react to one another under the electromagnetic force. The antiparticles of fermions are also fermions, and the antiparticles of bosons are bosons.

All fermions have antiparticles. The antiparticle of an electron is called the positron (pronounced POZ-i-tron). The antiparticle of the proton is the antiproton, which consists of two up antiquarks and one down antiquark. Antiquarks have the opposite electric and colour charges of their counterparts. The antiparticles of neutrinos are called antineutrinos. Both neutrinos and antineutrinos have no electric charge or colour charge, but physicists still consider them distinct from one another. Neutrinos and antineutrinos behave differently when they collide with other particles and in radioactive decay. When a particle decays, for example, an antineutrino accompanies the production of a charged lepton, and a neutrino accompanies the production of a charged antilepton. In addition, reactions that absorb neutrinos do not absorb antineutrinos, giving further evidence of the distinction between neutrinos and antineutrinos.

When a particle and its associated antiparticle collide, they annihilate, or destroy, each other, creating a tiny burst of energy. Particle-antiparticle collisions would provide a very efficient source of energy if large numbers of antiparticles could be harnessed cheaply. Physicists already make use of this energy in machines called particle accelerators. Particle accelerators increase the speed (and therefore energy) of elementary particles and make the particles collide with one another. When particles and antiparticles (such as protons and antiprotons) collide, their kinetic energy and the energy released when they annihilate each other converts to matter, creating new and unusual particles for physicists to study.
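The size of the "tiny burst of energy" can be estimated for the simplest case, an electron and a positron annihilating at rest, using E = mc². A minimal Python sketch (the constant values are standard figures; the variable names are invented for illustration):

```python
# Energy released when an electron and a positron annihilate at rest:
# E = 2 * m_e * c^2, since both rest masses convert entirely to energy.
m_e = 9.109_383_7e-31     # electron mass, kg
c   = 2.997_924_58e8      # speed of light, m/s

energy_joules = 2 * m_e * c**2
energy_mev = energy_joules / 1.602_176_6e-13   # 1 MeV = 1.602e-13 J

print(f"{energy_joules:.3e} J")   # ~1.637e-13 J
print(f"{energy_mev:.3f} MeV")    # ~1.022 MeV (two 511 keV photons)
```

The energy per event is minuscule; the appeal of antimatter as a fuel comes from the fact that, per kilogram, total conversion releases far more energy than any chemical or nuclear reaction.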

Particle-antiparticle collisions could someday fuel spacecraft, which need only a slight push to change their speed or direction in the vacuum of space. The antiparticles and particles would have to be kept away from each other until the spacecraft needed the energy of their collisions. Finely tuned magnetic fields could be used to trap the particles and keep them separate, but these magnetic fields are difficult to set up and maintain. At the end of the 20th century, technology was not advanced enough to allow spacecraft to carry the equipment and particles necessary for using particle-antiparticle collisions as fuel.

All of the known forces in our universe can be classified as one of four types: electromagnetic, strong, weak, or gravitational. These forces affect everything in the universe. The electromagnetic force binds electrons to the atoms that compose our bodies, the objects around us, the Earth, the planets, and the Moon. The strong nuclear force holds together the nuclei inside the atoms that compose matter. Reactions due to the weak nuclear force fuel the Sun, providing light and heat. Gravity holds people and objects to the ground.

Each force has a particular property associated with it, such as electric charge for the electromagnetic force. Elementary particles that do not have electric charge, such as neutrinos, are electrically neutral and are not affected by the electromagnetic force.

Mechanical forces, such as the force used to push a child on a swing, result from the electrical repulsion between electrons and are thus electromagnetic. Even though a parent pushing a child on a swing feels his or her hands touching the child, the atoms in the parent’s hands never come into contact with the atoms of the child. The electrons in the parent’s atoms repel those in the child while remaining a slight distance away from them. In a similar manner, the Sun attracts Earth through gravity, without Earth ever contacting the Sun. Physicists call these forces nonlocal, because the forces appear to affect objects that are not in the same location, but at a distance from one another.

Theories about elementary particles, however, require forces to be local - that is, the objects affecting each other must come into contact. Scientists achieved this locality by introducing the idea of elementary particles that carry the force from one object to another. Experiments have confirmed the existence of many of these particles. In the case of electromagnetism, a particle called a photon travels between the two repelling electrons. One electron releases the photon and recoils, while the other electron absorbs it and is pushed away.

Each of the four forces has one or more unique force carriers, such as the photon, associated with it. These force carrier particles are bosons, since they do not obey the exclusion principle - any number of force carriers can have the same characteristics. They are also believed to be fundamental, so they cannot be split into smaller particles. Other than the fact that they are all fundamental bosons, the force carriers have very few common features. They are as unique as the forces they carry.

For centuries, electricity and magnetism seemed distinct forces. In the 1800s, however, experiments showed many connections between these two forces. In 1864 British physicist James Clerk Maxwell drew together the work of many physicists to show that electricity and magnetism are actually different aspects of the same electromagnetic force. This force causes particles with similar electric charges to repel one another and particles with opposite charges to attract one another. Maxwell also showed that light is a travelling form of electromagnetic energy. The founders of quantum mechanics took Maxwell’s work one step further. In 1925 German-British physicist Max Born and German physicists Pascual Jordan and Werner Heisenberg showed mathematically that packets of light energy, later called photons, are emitted and absorbed when charged particles attract or repel each other through the electromagnetic force.

Any particle with electric charge, such as a quark or an electron, is subject to, or ‘feels,’ the electromagnetic force. Electrically neutral particles, such as neutrinos, do not feel it. The electric charge of a hadron is the sum of the charges on the quarks in the hadron. If the sum is zero, the electromagnetic force does not affect the hadron, although it does affect the quarks inside the hadron. Photons carry the electromagnetic force between particles but have no mass or electric charge themselves. Since photons have no electric charge, they are not affected by the force they carry.

Unlike neutrinos and some other electrically neutral particles, the photon does not have a distinct antiparticle. Particles that have antiparticles are like positive and negative numbers - they are each the other’s additive inverse. Photons are like the number zero, which is its own additive inverse. In effect, a photon is its own antiparticle.

In one example of the electromagnetic force, two electrons repel each other because they both have negative electric charges. One electron releases a photon, and the other electron absorbs it. Even though photons have no mass, their energy gives them momentum, a property that enables them to affect other particles. The momentum of the photon pushes the two electrons apart, just as the momentum of a basketball tossed between two ice skaters will push the skaters apart.

Quarks and particles made of quarks attract each other through the strong force. The strong force holds the quarks in protons and neutrons together, and it holds protons and neutrons together in the nuclei of atoms. If electromagnetism were the only force between quarks, the two up quarks in a proton would repel each other because they are both positively charged. (The up quarks are also attracted to the negatively charged down quark in the proton, but this attraction is not as great as the repulsion between the up quarks.) However, the strong force is stronger than the electromagnetic force, so it glues the quarks inside the proton together.

A property of particles called colour charge determines how the strong force affects them. The term colour charge has nothing to do with colour in the usual sense; it is just a convenient way for scientists to describe this property of particles. Colour charge is similar to electric charge, which determines a particle’s electromagnetic interactions. Quarks can have a colour charge of red, blue, or green. Antiquarks can have a colour charge of antired (also called cyan), antiblue (also called yellow), or antigreen (also called magenta). Quark types and colours are not linked - up quarks, for example, may be red, green, or blue.

All observed objects carry a colour charge of zero, so quarks (which compose matter) must combine to form hadrons that are colourless, or colour neutral. The colour charges of the quarks in hadrons therefore cancel one another. Mesons contain a quark of one colour and an antiquark of the quark’s anticolour. The colour charges cancel each other out and make the meson white, or colourless. Baryons contain three quarks, each with a different colour. As with light, the colours red, blue, and green combine to produce white, so the baryon is white, or colourless.

The bosons that carry the strong force between particles are called gluons. Gluons have no mass or electric charge and, like photons, they are their own antiparticle. Unlike photons, however, gluons do have colour charge. They carry a colour and an anticolour. Possible gluon colour combinations include red-antiblue, green-antired, and blue-antigreen. Because gluons carry colour charge, they can attract each other, while the colourless, electrically neutral photons cannot. Colours and anticolours attract each other, so gluons that carry one colour will attract gluons that carry the associated anticolour.

Gluons carry the strong force by moving between quarks and antiquarks and changing the colours of these particles. Quarks and antiquarks in hadrons constantly exchange gluons, changing colours as they emit and absorb gluons. Baryons and mesons are all colourless, so each time a quark or antiquark changes colour, other quarks or antiquarks in the particle must change colour as well to preserve the balance. The constant exchange of gluons and colour charge inside mesons and baryons creates a colour force field that holds the particles together.

The strong force is the strongest of the four forces in atoms. Quarks are bound so tightly to each other that they cannot be isolated. Separating a quark from an antiquark requires more energy than creating a quark and antiquark does. Attempting to pull apart a meson, then, just creates another meson: The quark in the original meson combines with a newly created antiquark, and the antiquark in the original meson combines with a newly created quark.

In addition to holding quarks together in mesons and baryons, gluons and the strong force also attract mesons and baryons to one another. The nuclei of atoms contain two kinds of baryons: protons and neutrons. Protons and neutrons are colourless, so the strong force does not attract them to each other directly. Instead, the individual quarks in one neutron or proton attract the quarks of its neighbours. The pull of quarks toward each other, even though the quarks occur in separate baryons, provides enough energy to create a quark-antiquark pair. This pair of particles forms a type of meson called a pion. The exchange of pions between neutrons and protons holds the baryons in the nucleus together. The strong force between baryons in the nucleus is called the residual strong force.

While the strong force holds the nucleus of an atom together, the weak force can make the nucleus decay, changing some of its particles into other particles. The weak force is so named because it is far weaker than the electromagnetic or strong forces. For example, an interaction involving the weak force is 10 quintillion (10 billion billion) times less likely to occur than an interaction involving the electromagnetic force. Three particles, called vector bosons, carry the weak force. The weak force equivalent to electric charge and colour charge is a property called weak hypercharge. Weak hypercharge determines whether the weak force will affect a particle. All fermions possess weak hypercharge, as do the vector bosons that carry the weak force.

All elementary particles, except the force carriers of the other forces and the Higgs boson, interact by means of the weak force. But the effects of the weak force are usually masked by the other, stronger forces. The weak force is not very significant when considering most of the interactions between two quarks. For example, the strong force completely overwhelms the weak force when a quark bounces off another quark. Nor does the weak force significantly affect interactions between two charged particles, such as the interaction between an electron and a proton. The electromagnetic force dominates those interactions.

The weak force becomes significant when an interaction does not involve the strong force or the electromagnetic force. For example, neutrinos have neither electric charge nor colour charge, so any interaction involving a neutrino must be due to either the weak force or the gravitational force. The gravitational force is even weaker than the weak force on the scale of elementary particles, so the weak force dominates in neutrino interactions.

One example of a weak interaction is beta decay involving the decay of a neutron. When a neutron decays, it turns into a proton and emits an electron and an electron antineutrino. The neutron and antineutrino are electrically neutral, ruling out the electromagnetic force as a cause. The antineutrino and electron are colourless, so the strong force is not at work. Beta decay is due solely to the weak force.

The weak force is carried by three vector bosons. These bosons are designated the W+, the W-, and the Z0. The W bosons are electrically charged (+1 and –1), so they can feel the electromagnetic force. These two bosons are each other’s antiparticle counterparts, while the Z0 is its own antiparticle. All three vector bosons are colourless. A distinctive feature of the vector bosons is their mass. The weak force is the only force carried by particles that have mass. These massive force carriers cannot travel as far as the massless force carriers of the three long-range forces, so the weak force acts over shorter distances than the other three forces.
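
The connection between a massive force carrier and a short range can be estimated with the standard Yukawa-range formula, range ≈ ħ/(mc). This is a back-of-envelope sketch, not from the source; the W and pion rest energies below are assumed round values.

```python
# Yukawa range estimate: range ~ hbar/(m*c) = (hbar*c)/(m*c^2).
HBAR_C_MEV_FM = 197.327   # hbar*c in MeV * femtometres
M_W_MEV = 80_400          # assumed W boson rest energy, MeV
M_PION_MEV = 139.6        # assumed pion rest energy, MeV

def yukawa_range_fm(mass_mev):
    """Approximate range (in fm) of a force carried by a boson of this mass."""
    return HBAR_C_MEV_FM / mass_mev

print(f"weak force range ~ {yukawa_range_fm(M_W_MEV):.4f} fm")    # ~0.0025 fm
print(f"pion force range ~ {yukawa_range_fm(M_PION_MEV):.2f} fm")  # ~1.4 fm
```

The heavy W boson gives a range far smaller than a proton (roughly 0.8 fm across), while the much lighter pion gives the residual strong force a range comparable to the size of a nucleus.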

When the weak force affects a particle, the particle emits one of the three weak vector bosons - W+, W-, or Z0 - and changes into a different particle. The weak vector boson then decays to produce other particles. In interactions that involve the W+ and W-, a particle changes into a particle with a different electric charge. For example, in beta decay, one of the down quarks in a neutron changes into an up quark and the neutron releases a W- boson. This change in quark type converts the neutron (two down quarks and an up quark) to a proton (one down quark and two up quarks). The W- boson released by the neutron could then decay into an electron and an electron antineutrino. In Z0 interactions, a particle changes into a particle with the same electric charge.
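
The beta-decay bookkeeping described above can be checked step by step. This sketch is not from the source; it simply verifies, using the standard quark charges (+2/3 for up, -1/3 for down), that electric charge is conserved when the neutron emits the W boson and again when the W boson decays.

```python
# Charge bookkeeping for beta decay: n -> p + W-, then W- -> e- + anti-nu_e.
from fractions import Fraction

CHARGE = {
    "u": Fraction(2, 3), "d": Fraction(-1, 3),
    "W-": Fraction(-1), "e-": Fraction(-1), "anti-nu_e": Fraction(0),
}

def charge(particles):
    """Total electric charge of a list of particles."""
    return sum(CHARGE[p] for p in particles)

neutron = ["u", "d", "d"]   # two down quarks and an up quark: charge 0
proton  = ["u", "u", "d"]   # one down quark and two up quarks: charge +1

# Step 1: a down quark becomes an up quark and a W- is emitted.
assert charge(neutron) == charge(proton + ["W-"])
# Step 2: the W- decays into an electron and an electron antineutrino.
assert charge(["W-"]) == charge(["e-", "anti-nu_e"])

print(charge(neutron), charge(proton))  # 0 1
```

Charge balances at both steps, which is why the electrically neutral neutron can end up as a positively charged proton plus a negatively charged electron.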

A quark or lepton can change into a different quark or lepton from another generation only by the weak interaction. Thus the weak force is the reason that all stable matter contains only first generation leptons and quarks. The second and third generation leptons and quarks are heavier than their first generation counterparts, so they quickly decay into the lighter first generation leptons and quarks by exchanging W and Z bosons. The first generation particles have no lighter counterparts into which they can decay, so they are stable.

The gravitational force is probably the most familiar force, yet it is the only force not described by the standard model of particle physics. In 1915 German-born American physicist Albert Einstein developed a significant new approach to the concept of gravity: the general theory of relativity. While general relativity successfully described many phenomena, the theory was framed differently than were theories of particle physics, making relativity difficult to reconcile with particle physics. Through the end of the 20th century, all efforts to develop a theory of gravitation entirely consistent with particle physics failed.

Physicists call their goal of an overall theory a ‘theory of everything,’ because it would explain all four known forces in the universe and how these forces affect particles. In such a theory, the particles that carry the gravitational force would be called gravitons. Gravitons should share many characteristics with photons because, like electromagnetism, gravitation is a long-range force that gets weaker with distance. Gravitons should be massless and have no electric charge or colour charge. The graviton is the only force carrier not yet observed in an experiment.

Gravitation is the weakest of the four forces on the atomic scale, but it can become extremely powerful on a cosmic scale. For instance, the gravitational force between Earth and the Sun holds Earth in orbit. Gravity can have large effects because, unlike the electromagnetic force, it is always attractive. Every particle in your body has some tiny gravitational attraction to the ground. The innumerable tiny attractions add up, which is why you do not float off into space. The negative charge on electrons, however, cancels out the positive charge on the protons in your body, leaving you electrically neutral.
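
How weak is gravity on the atomic scale? A standard comparison, not from the source, is the ratio of the electric to the gravitational attraction between an electron and a proton; the constants below are rounded CODATA-style values.

```python
# Compare electric and gravitational attraction between an electron
# and a proton. The separation distance cancels out of the ratio.
G  = 6.674e-11        # gravitational constant, N*m^2/kg^2
KE = 8.988e9          # Coulomb constant, N*m^2/C^2
E_CHARGE   = 1.602e-19  # elementary charge, C
M_ELECTRON = 9.109e-31  # kg
M_PROTON   = 1.673e-27  # kg

def force_ratio():
    """Electric force divided by gravitational force (distance cancels)."""
    electric = KE * E_CHARGE**2
    gravity  = G * M_ELECTRON * M_PROTON
    return electric / gravity

print(f"{force_ratio():.1e}")   # ~2.3e+39
```

The electric force wins by roughly 39 orders of magnitude, which is why gravitation plays no practical role in the interactions of individual elementary particles.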

Another unique feature of gravitation is its universality: every object is gravitationally attracted to every other object, even objects without mass. For example, the theory of relativity predicted that light should feel the gravitational force. Before Einstein, scientists thought that gravitational attraction depended only on mass. They thought that light, being massless, would not be attracted by gravitation. Relativity, however, holds that gravitational attraction depends on the energy of an object and that mass is just one possible form of energy. Einstein was proven correct in 1919, when astronomers observed that the gravitational attraction between light from distant stars and the Sun bends the path of the light around the Sun (Gravitational Lens).

The standard model of particle physics includes an elementary boson that is not a force carrier: the Higgs boson. Scientists have not yet detected the Higgs boson in an experiment, but they believe it gives elementary particles their mass. Composite particles receive their mass from their constituent particles, and in some cases, the energy involved in holding these particles together. For example, the mass of a neutron comes from the mass of its quarks and the energy of the strong force holding the quarks together. The quarks themselves, however, have no such source of mass, which is why physicists introduced the idea of the Higgs boson. Elementary particles should obtain their mass by interacting with the Higgs boson.

Scientists expect the mass of the Higgs boson to be large compared to that of most other fundamental particles. Physicists can create more massive particles by forcing smaller particles to collide at high speeds. The energy released in the collisions converts to matter. Producing the Higgs boson, with its relatively large mass, will require a tremendous amount of energy. Many scientists are searching for the Higgs boson using machines called particle colliders. Particle colliders shoot a beam of particles at a target or another beam of particles to produce new, more massive particles.

Scientific progress often occurs when people find connections between apparently unconnected phenomena. For example, 19th-century British physicist James Clerk Maxwell made a connection between electric forces on charged objects and the force on a moving charge due to a magnet. He deduced that the electric force and the magnetic force were just different aspects of the same force. His discovery led to a deeper understanding of electromagnetism.

The unification of electricity and magnetism and the discovery of the strong and weak nuclear forces in the mid-20th century left physicists with four apparently independent forces: electromagnetism, the strong force, the weak force, and gravitation. Physicists believe they should be able to connect these forces with one unified theory, called a theory of everything (TOE). A TOE should explain all particles and particle interactions by demonstrating that these four forces are different aspects of one universal force. The theory should also explain why fermions come in three generations when all stable matter contains fermions from just the first generation.

Scientists also hope that in explaining the extra generations, a TOE will explain why particles have the masses they do. They would like an explanation of why the top quark is so much heavier than the other quarks and why neutrinos are so much lighter than the other fermions. The standard model does not address these questions, and scientists have had to determine the masses of particles by experiment rather than by theoretical calculations.

Unification of all of the forces, however, is not an easy task. Each force appears to have distinctive properties and unique force carriers. In addition, physicists have yet to describe successfully the gravitational force in terms of particles, as they have for the other three forces. Despite these daunting obstacles, particle physicists continue to seek a unified theory and have made some progress. Starting points for unification include the electroweak theory and grand unification theories.

American physicists Sheldon Glashow and Steven Weinberg and Pakistani physicist Abdus Salam completed the first step toward finding a universal force in the 1960s with their electroweak theory, now part of the standard model of particle physics. Using a branch of mathematics called group theory, they showed how the weak force and the electromagnetic force could be combined mathematically into a single electroweak force. The electromagnetic force seems much stronger than the weak force at low energies, but that disparity is due to the differences between the force carriers. At higher energies, the difference between the W and Z bosons of the weak force, which have mass, and the massless photons of the electromagnetic force becomes less significant, and the two forces become indistinguishable.

The standard model also uses group theory to describe the strong force, but scientists have not yet been able to unify the strong force with the electroweak force. The next step toward finding a TOE would be a grand unified theory (GUT), a theory that would unify the strong, electromagnetic, and weak forces (the forces currently described by the standard model). A GUT should describe all three forces as different aspects of one force. At high energies, the distinctions among the three aspects should disappear. The only force remaining would then be the gravitational force, which scientists have not been able to describe with particle theory.

One type of GUT contains a theory called supersymmetry (SUSY), first suggested in 1971. Supersymmetric theories set rules for new symmetries, or pairings, between particles and interactions. The standard model, for example, requires that every particle have an associated antiparticle. In a similar manner, SUSY requires that every particle have an associated supersymmetric partner. While particles and their associated antiparticles are either both fermions or both bosons, the supersymmetric partner of a fermion should be a boson, and the supersymmetric partner of a boson should be a fermion. For example, the fermion electron should be paired with a boson called a selectron, and the fermion quarks with bosons called squarks. The force-carrying bosons, such as photons and gluons, should be paired with fermions, such as particles called photinos and gluinos. Scientists have yet to detect these supersymmetric partners, but they believe the partners may be massive compared with known particles and therefore require too much energy to create with current particle accelerators.

Another approach to grand unification involves string theories. British physicist Paul Dirac developed the first string theory in 1950. String theories describe elementary particles as loops of vibrating string. Scientists believe these strings are currently invisible to us because the vibrations do not occur in the four familiar dimensions of space and time - some string theories, for example, need as many as 26 dimensions to explain particles and particle interactions. Incorporating supersymmetry with string theory results in theories of superstrings. Superstring theories are one of the leading candidates in the quest to unify gravitation with the other forces. The mathematics of superstring theories incorporates gravity into particle physics easily. Many scientists, however, do not believe superstrings are the answer, because no one has detected the additional dimensions required by string theory.

Studying elementary particles requires specialized equipment, the skill of deduction, and much patience. All of the fundamental particles - leptons, quarks, force-carrying bosons, and the Higgs boson - appear to be ‘point particles.’ A point particle is infinitely small, and it exists at a certain point in space without taking up any space. These fundamental particles are therefore impossible to see directly, even with the most powerful microscopes. Instead, scientists must deduce the properties of a particle from the way it affects other objects.

In a way, studying an elementary particle is like tracking a white polar bear in a field of snow: The polar bear may be impossible to see, but you can see the tracks it left in the snow, you can find trees it clawed, and you can find the remains of polar bear meals. You might even smell or hear the polar bear. From these observations, you could determine the position of the polar bear, its speed (from the spacing of the paw prints), and its weight (from the depth of the paw prints). No one can see an elementary particle, but scientists can look at the tracks it leaves in detectors, and they can look at materials with which it has interacted. They can even measure electric and magnetic fields caused by electrically charged particles. From these observations, physicists can deduce the position of an elementary particle, its speed, its weight, and many other properties.

Most particles are extremely unstable, which means they decay into other particles very quickly. Only the proton, neutron, electron, photon, and neutrinos can be detected a significantly long time after they are created. Studying the other particles, such as mesons, the heavier baryons, and the heavier leptons, requires detectors that can take many (250,000 or more) measurements per second. In addition, these heavier particles do not naturally exist on the surface of Earth, so scientists must create them in the laboratory or look to natural laboratories, such as stars and Earth’s atmosphere. Creating these particles requires extremely high amounts of energy.

Particle physicists use large, specialized facilities to measure the effects of elementary particles. In some cases, they use particle accelerators and particle colliders to create the particles to be studied. Particle accelerators are huge devices that use electric and magnetic fields to speed up elementary particles. Particle colliders are chambers in which beams of accelerated elementary particles crash into one another. Scientists can also study elementary particles from outer space, from sources such as the Sun. Physicists use large particle detectors, complex machines with several different instruments, to measure many different properties of elementary particles. Particle traps slow down and isolate particles, allowing direct study of the particles’ properties.

When energetic particles collide, the energy released in the collision can convert to matter and produce new particles. The more energy produced in the collision, the heavier the new particles can be. Particle accelerators produce heavier elementary particles by accelerating beams of electrons, protons, or their antiparticles to very high energies. Once the accelerated particles reach the desired energy, scientists steer them into a collision. The particles can collide with a stationary object (in a fixed target experiment) or with another beam of accelerated particles (in a collider experiment).
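
The advantage of a collider over a fixed target can be made concrete with the standard relativistic formulas for the centre-of-mass energy (the energy actually available to make new particles). This is an illustrative sketch, not from the source; the example assumes proton beams with a rest energy of about 0.938 GeV.

```python
# Centre-of-mass energy available for creating new particles,
# for two protons of rest energy ~0.938 GeV.
from math import sqrt

M_PROTON_GEV = 0.938

def cm_energy_collider(beam_gev):
    """Two identical beams colliding head-on: all of 2E is available."""
    return 2 * beam_gev

def cm_energy_fixed_target(beam_gev):
    """Beam hitting a stationary proton: sqrt(2*E*m + 2*m^2)."""
    m = M_PROTON_GEV
    return sqrt(2 * beam_gev * m + 2 * m * m)

E = 100.0  # a 100 GeV proton beam
print(f"collider:     {cm_energy_collider(E):.0f} GeV")      # 200 GeV
print(f"fixed target: {cm_energy_fixed_target(E):.0f} GeV")  # ~14 GeV
```

With the same 100 GeV beam, a collider makes available more than ten times the particle-creating energy of a fixed-target experiment, because in the fixed-target case most of the beam energy goes into the motion of the debris rather than into new mass.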

Particle accelerators come in two basic types - linear accelerators and circular accelerators. Devices that accelerate particles in a straight line are called linear accelerators. They use electric fields to speed up charged particles. Traditional (not flat screen) television sets and computer monitors use this method to accelerate electrons toward the screen (Television: Picture Tube). Linear accelerators have two main uses: They can produce a beam of particles for a fixed target experiment, or they can feed particles into a circular accelerator.

Circular accelerators, or synchrotrons (pronounced SIN-krow-trons), use magnetic fields to accelerate charged particles in a circle. The particles can circle many times, gaining energy each time they travel around the circle. Thus synchrotrons can accelerate particles to extremely high energies. Synchrotrons can be used in fixed target experiments, or they can accelerate two beams simultaneously for use in a collider experiment.

Positively charged particles bend a different way in a magnetic field than do negatively charged particles, so a synchrotron can accelerate electrons in one direction and positrons in the other. A synchrotron can also accelerate protons in one direction and antiprotons in the other. Scientists are even considering building a synchrotron to accelerate less stable particles, such as muons and antimuons.
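
The bending described here follows the standard relation r = p/(qB), which in the units accelerator physicists favour reads r[m] ≈ p[GeV/c] / (0.3 × B[T]) for a singly charged particle. A small sketch, not from the source, with assumed round values:

```python
# Radius of curvature of a charged particle in a magnetic field:
# r = p / (q * B), in accelerator units r[m] ~ p[GeV/c] / (0.3 * B[T]).
def bending_radius_m(p_gev, b_tesla, charge=1):
    """Radius of curvature for momentum p (GeV/c) in field B (tesla)."""
    return p_gev / (0.2998 * abs(charge) * b_tesla)

# An assumed example: a 1000 GeV/c proton in a 5 tesla magnet.
print(f"{bending_radius_m(1000, 5):.0f} m")   # ~667 m
```

The sign of the charge sets the direction of the bend, which is why the same ring of magnets can steer particles one way around the ring and their antiparticles the other way.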

Once particles reach the desired energy, experimenters slightly change the magnetic field controlling the particles, bringing the two beams into a collision. The particles and antiparticles annihilate each other. The resulting energy produces numerous other particles for the scientists to study.

Many great discoveries in particle physics have been made by looking to the heavens. The universe is a natural particle accelerator, and particles from outer space continually bombard Earth’s atmosphere. Extraterrestrial particles called cosmic rays - and their collisions with other particles in the atmosphere - produce many unusual and unstable particles. Scientists first discovered the muon and the pion in cosmic rays, as well as the positron. Mesons made up of the strange quark were also first spotted in cosmic ray experiments before modern large accelerator facilities were built.

Neutrinos stream to Earth from cosmic sources. Nuclear reactions in the Sun produce incredibly large numbers of electron neutrinos that can then be detected on Earth. Experiments studying these solar neutrinos suggest that the mass of the neutrino may not be zero. If these experiments are correct, they could provide the first contradiction of the standard model of particle physics.

Every particle experiment needs particle detectors. Particle detectors come in many shapes, sizes, and types. Some detectors track particles, some count the number of particles passing by, some measure the energy left in the detector by a particle, and some are even more specialized. In addition, many detectors contain large magnets to bend the paths of charged particles. The direction in which the path bends indicates the electric charge of the particle, and the amount by which it bends indicates the mass and speed of the particle.

Physicists have extensively studied, and come to understand, commonly occurring interactions between particles, so most current particle experiments focus on rare interactions, which are less well understood. Experiments must generate incredibly large numbers of particle interactions to produce a few of the desired rare interactions. Scientists are not interested in studying the majority of interactions produced in an experiment, so they need fast computers and sophisticated programs to sort the data and pick out the important interactions.

Each type of particle has distinct properties, so each type of particle behaves differently in detectors. Experiments typically have many types of detectors to distinguish between different particles. Each detector produces such an enormous amount of data on each interaction that analysing a particle physics experiment requires a huge amount of computer time.

Scientists use particle traps to study particles that are more stable and have less energy than particles studied in accelerators and colliders. Magnetic and electric fields can be used to trap charged particles. The fields control the movement of the particle, keeping it confined to a small area. Neutral particles, such as atoms, can also be trapped, but that task is much more difficult. Lasers, beams of coherent light, are often used to trap neutral particles. Light carries energy, and when light strikes an object, it exerts a small force on the object. Shining lasers on atoms or other neutral particles causes the particles to slow down gradually and be trapped.

The rules of quantum theory prevent any particle trap from being perfect. A perfect trap would enable a physicist to determine precisely a particle’s position and speed. A rule called the uncertainty principle states that a particle’s location and speed cannot be precisely measured at the same time. Increasing the precision in one measurement increases the uncertainty in the other. If a particle trap was infinitely small, the location of the particle would be known precisely, but this would make measurement of the particle’s speed infinitely uncertain: The scientist would not be able to determine anything about the particle’s speed. Likewise, if the particle trap slowed the particle to a complete rest, its speed would be known precisely, which would make the particle’s location infinitely uncertain: The scientist would not be able to determine anything about position, or whether the particle was even in the trap.
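
The trade-off has a simple numerical form, Δx·Δp ≥ ħ/2. The sketch below, not from the source, shows how the minimum spread in an electron's velocity grows as an imagined trap is made smaller; the trap sizes are assumed examples.

```python
# Uncertainty principle: delta_x * delta_p >= hbar / 2.
# The tighter the trap, the larger the minimum velocity spread.
HBAR = 1.055e-34        # J*s
M_ELECTRON = 9.109e-31  # kg

def min_velocity_spread(trap_size_m):
    """Minimum velocity uncertainty (m/s) for an electron confined this tightly."""
    delta_p = HBAR / (2 * trap_size_m)
    return delta_p / M_ELECTRON

# Assumed example traps: a micron, a nanometre, a picometre across.
for size in (1e-6, 1e-9, 1e-12):
    print(f"{size:.0e} m trap -> delta_v >= {min_velocity_spread(size):.2e} m/s")
```

Shrinking the trap by a factor of 1,000 multiplies the minimum velocity spread by the same factor, which is the quantitative content of the statement that a perfect trap is impossible.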

Scientists use particle traps to compare the properties of particles and antiparticles. Scientists are also trying to create antihydrogen using particle traps. Antiparticles, such as antiprotons and positrons, usually exist for just a brief time before they combine with their counterpart particles in ordinary matter and are annihilated. A particle trap, however, can confine an antiproton without letting it contact its ordinary matter counterpart, the proton. Positrons can be confined in a similar manner. Researchers are currently using particle traps to bring positrons close enough to antiprotons so these particles can bind and make antihydrogen, just as electrons and protons make hydrogen.

The history of particle physics began in the early 20th century with the discovery of the parts of the atom and the photon. Theories explaining the behaviour of these particles led physicists to propose the existence of neutrinos in 1928 and antimatter in 1931. Antimatter was discovered in 1933, but it took experimenters almost 30 years to confirm the existence of neutrinos. Physicists were aided in their studies of particles by the first particle accelerator, invented in 1928, and by its successor, which was developed in the 1940s.

During the 1940s and 1950s scientists discovered pions and other mesons in cosmic rays from space. They did not yet understand, however, that these particles, as well as the protons and neutrons inside atoms, were composed of quarks.

Two important advances in the theory of elementary particles occurred in the 1960s: Physicists proposed the existence of quarks, and they introduced the standard model, a theory that explains how the strong and weak nuclear forces work. The standard model predicted the existence of many more particles, which scientists later detected in experiments. According to the standard model, the number of truly elementary particles is now 30: 6 quarks, 6 antiquarks, 6 leptons, 6 antileptons, the photon, the gluon, the 3 bosons of the weak force, and the Higgs boson. (The graviton, while it may exist, is not included in the standard model.) Particle physicists continue to revise their theories and often propose new particles to explain different phenomena. Some of the particles that have been suggested, but not yet detected, are the axion, the squark, and the magnetic monopole.
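
The count of 30 can be tallied directly from the categories listed above; this small table is assembled from the counts given in the text.

```python
# Tally of the standard model's elementary particles, as listed in the text.
PARTICLE_COUNTS = {
    "quarks": 6,
    "antiquarks": 6,
    "leptons": 6,
    "antileptons": 6,
    "photon": 1,
    "gluon": 1,
    "weak vector bosons": 3,  # W+, W-, Z0
    "Higgs boson": 1,
}

total = sum(PARTICLE_COUNTS.values())
print(total)  # 30
```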

In seeking to explain the behaviour of atoms, physicists of the late 1800s searched for the source of negative electric charge in atoms. British physicist Sir Joseph John Thomson is credited with the discovery of the electron. Although many others had studied electricity and streams of electrons, Thomson was the first to measure the properties of individual electrons and to suggest that electrons existed within atoms. He measured the ratio of electron mass to electron charge and, in 1897, claimed that electrons could be found in all matter.

Matter is not made up entirely of electrons–atoms also contain protons and neutrons. No one person is given credit for discovering the proton. Many experiments around the turn of the century examined its properties, but it was not named proton until 1920. The discovery of the neutron came much later, because the neutron is electrically neutral and therefore much harder to detect. British physicist James Chadwick discovered the neutron in 1932. He won the 1935 Nobel Prize in physics for this discovery.

Before the development of particle physics, scientists had a difficult time explaining the behaviour of light. Light often behaves like a wave, such as a wave of sound or a wave on the surface of water. Other times, however, light behaves more like a beam of particles. To explain this behaviour, Albert Einstein proposed in 1905 that light came in little packets, or particles, of energy. He was awarded the 1921 Nobel Prize in physics for his explanation. In 1926 scientists named these particles of light photons.

In the early part of the 20th century, scientists studying beta decay noticed that the sum of the mass and energy before the decay was greater than the sum of mass and energy present after the decay. To account for this missing energy, Austrian-born American physicist Wolfgang Pauli proposed the existence of a new particle in 1928. Pauli called his suggestion a drastic measure, because at the time scientists did not expect any more elementary particles. His hypothesis proved correct, however, and this particle is now known as the electron neutrino. The neutrino was escaping unseen because it has no electric charge, no colour charge, and a very small mass (or no mass at all). American physicists Fred Reines and Clyde Cowan were the first to detect the neutrino experimentally, in 1956, almost 30 years after Pauli first proposed its existence. Reines shared the 1995 Nobel Prize in physics for his part in this experiment.

Pauli received a Nobel Prize as well, but not for his proposal of neutrinos. He won the 1945 Nobel Prize in physics for developing the exclusion principle. The exclusion principle is the rule of quantum theory that says that no two fermions with exactly the same characteristics can occupy the same space. Pauli proposed the exclusion principle in 1925. A year later Italian-born American physicist Enrico Fermi developed the mathematical equations to explain why two fermions cannot occupy the same state.

In 1931 British physicist Paul Dirac produced the precursor of modern particle theories. Dirac’s equations described the known electromagnetic properties of particles well, but to make his theory work more comprehensively, Dirac had to introduce the idea of antiparticles, antimatter counterparts of existing particles. The existence of these particles was confirmed in 1933, when American physicist Carl Anderson saw something peculiar while looking at tracks made by cosmic rays in a type of particle detector called a cloud chamber. A particle passing through the cloud chamber seemed to have the mass of an electron, but it had a positive rather than a negative charge - he had discovered the positron. Anderson shared the 1936 Nobel Prize in physics for this confirmation of Dirac’s theory.

In 1934 Japanese physicist Yukawa Hideki predicted the existence of a force carrier holding neutrons and protons together in the nucleus of an atom. He believed this particle should have a mass between the mass of the electron and that of the proton. Yukawa’s theory attempted to describe how the strong force affects particle interactions, but it was not complete because it did not describe the fundamental interactions between quarks and gluons. It was, however, highly successful at describing the way protons and neutrons bond inside the nucleus. The theory predicted the existence of the pion, the meson that holds the particles in an atomic nucleus together.

When Carl Anderson and American physicist Seth Neddermeyer detected a new particle in cosmic ray experiments two years later, many thought this new particle was Yukawa’s meson. But some properties of the new particle did not match Yukawa’s theory. This dilemma appeared to be solved in 1947, when yet another particle, the pion, was found in cosmic rays. The pion’s behaviour was consistent with predictions in Yukawa’s theory. The particle that Anderson and Neddermeyer discovered was later found to be the muon, but at first no one could tell what role this particle played. Anderson and Neddermeyer’s muon turned out to be the first indication of a new type of lepton. Scientists detected the muon neutrino in 1962 and thereafter regarded the muon and its neutrino partner as a second generation of leptons.

In the same year that the pion was discovered, physicists detected another particle in cosmic ray experiments. This particle, now called the lambda, behaved differently than known particles. Starting in 1953, scientists found many more such unexpected particles. Because these particles were different, physicists called them ‘strange.’ These particles were eventually shown to include strange quarks, which received their name from the description of the particles they compose.

While cosmic ray experiments revealed a myriad of particles, scientists also sought ways to create unusual and unstable particles in laboratories. American physicist Ernest Lawrence invented the cyclotron, a type of circular accelerator, in 1932. The cyclotron, however, could not achieve very high energies. Lawrence’s model was improved (independently) by American physicist Edwin McMillan and Soviet physicist Vladimir Veksler in the 1940s, resulting in the synchrocyclotron. The high energies available using the synchrocyclotron led to many important particle discoveries.

By the 1960s hundreds of different ‘elementary’ particles had been seen. Physicists found they could separate these particles into two main groups: those that interacted by the strong force and those that did not. They called the strongly interacting particles hadrons, and the particles without strong interactions leptons. American physicist Murray Gell-Mann proposed in 1964 that many of these observed particles might not be elementary after all. He showed that all of the properties of hadrons could be explained if they were various combinations of three quarks. Normal matter, such as protons, neutrons, and pions, contains only up and down quarks, and strange matter (such as the lambda particles) contains one or more strange quarks along with up and down quarks. Gell-Mann was honoured for his contributions in 1969 with the Nobel Prize in physics. Gell-Mann’s quark theory was confirmed experimentally by American physicists Jerome Friedman and Henry Kendall and Canadian physicist Richard Taylor in 1969. Their experiment demonstrated that protons have internal structure. This experiment earned them the 1990 Nobel Prize in physics.

In 1964, the same year Gell-Mann introduced his quark theory, British physicist Peter Higgs proposed the existence of the Higgs boson, building on the work others had done in the early 1960s. Some scientists also predicted that same year that a fourth quark - the charm quark - should exist. Hadrons containing the charm quark were finally detected in 1976, leaving the number of quarks and the number of leptons equal at four apiece. Scientists divided the leptons and quarks into two generations, with the up and down quarks and the electron and electron neutrino in the first, and the strange and charmed quarks and muon and muon neutrino in the second.

A third generation of particles entered the scene in 1975, just a year before the charm quark was discovered. American physicist Martin Perl and his collaborators detected a third charged lepton, the tau. Scientists assumed immediately that a third neutrino accompanied the tau, and the tau neutrino was eventually detected directly in 2000. Perl shared the 1995 Nobel Prize in physics with American physicist Frederick Reines for his part in discovering the tau lepton.

Physicists discovered a third generation of quarks in 1977. American physicist Leon Lederman and his collaborators discovered mesons that contained a fifth quark: the bottom quark. Scientists assumed the bottom quark should have a partner, called the top quark, and so the hunt for this particle was on. This hunt finally ended in 1995, when evidence of the top quark was detected at the Fermi National Accelerator Laboratory in Batavia, Illinois. While the existence of the top quark was no surprise, its mass was. The top quark is over 40 times heavier than the bottom quark, and about 185 times heavier than the proton, which contains three first-generation quarks (two up quarks and one down quark).

Throughout the 1960s physicists worked on a comprehensive theory to explain why different types of elementary particles exist and why they behave the way they do. Building on the work of Fermi, Dirac, Yukawa, Gell-Mann, and numerous others, three scientists developed what is now called the standard model of particle physics. American physicist Steven Weinberg and Pakistani physicist Abdus Salam extended the earlier work of American physicist Sheldon Glashow and unified the electromagnetic and weak forces in 1967. These three men shared the 1979 Nobel Prize in physics for their highly successful theory. When these scientists developed the standard model, the physics community had not yet discovered the charm quark and did not know of the third generation of particles. The theory, however, predicted the charm quark and worked well with the addition of a third generation.

One of the key predictions of the standard model was the existence of particles carrying the weak force. In 1983 Italian physicist Carlo Rubbia and his colleagues discovered the W and Z bosons. Rubbia and Dutch physicist Simon van der Meer shared the 1984 Nobel Prize in physics for their work on the discovery of the W and Z bosons.

Particle physics is not finished yet. Most of the predictions of the standard model have been verified, but physicists still seek evidence of physics beyond the standard model. They look for new particles both on Earth and throughout the cosmos. They work on theories that would explain why particles have the masses scientists have observed. In particular, they want to understand why the top quark is so much heavier than the other particles and why the second and third generations of particles exist at all. They look for connections among the four forces in the universe and continue their quest for a theory of everything.

The atom is a tiny, unseen building block of matter. All the material on Earth is composed of various combinations of atoms. Atoms are the smallest particles of a chemical element that still exhibit all the chemical properties unique to that element. A row of 100 million atoms would be only about a centimetre long.

Understanding atoms is key to understanding the physical world. More than 100 different elements are known, each with its own unique atomic makeup. The atoms of these elements react with one another and combine in different ways to form a virtually unlimited number of chemical compounds. When two or more atoms combine, they form a molecule. For example, two atoms of the element hydrogen (abbreviated H) combine with one atom of the element oxygen (O) to form a molecule of water (H2O).

Since all matter - from its formation in the early universe to present-day biological systems - consists of atoms, understanding their structure and properties plays a vital role in physics, chemistry, and medicine. In fact, knowledge of atoms is essential to the modern scientific understanding of the complex systems that govern the physical and biological worlds. Atoms and the compounds they form play a part in almost all processes that occur on Earth and in space. All organisms rely on a set of chemical compounds and chemical reactions to digest food, transport energy, and reproduce. Stars such as the Sun rely on reactions in atomic nuclei to produce energy. Scientists duplicate these reactions in laboratories on Earth and study them to learn about processes that occur throughout the universe.

Throughout history, people have sought to explain the world in terms of its most basic parts. Ancient Greek philosophers conceived of the idea of the atom, which they defined as the smallest possible piece of a substance. The word atom comes from the Greek word meaning ‘not divisible.’ The ancient Greeks also believed this fundamental particle was indestructible. Scientists have since learned that atoms are divisible after all: they are made of smaller particles, and atoms of different elements contain different numbers of each type of these smaller particles.

Atoms are made of smaller particles, called electrons, protons, and neutrons. An atom consists of a cloud of electrons surrounding a small, dense nucleus of protons and neutrons. Electrons and protons have a property called electric charge, which affects the way they interact with each other and with other electrically charged particles. Electrons carry a negative electric charge, while protons have a positive electric charge. The negative charge is the opposite of the positive charge, and, like the opposite poles of a magnet, these opposite electric charges attract one another. Conversely, like charges (negative and negative, or positive and positive) repel one another. The attraction between an atom’s electrons and its protons holds the atom together. Normally, an atom is electrically neutral, which means that the negative charge of its electrons is exactly equalled by the positive charge of its protons.

The nucleus contains nearly all of the mass of the atom, but it occupies only a tiny fraction of the space inside the atom. The diameter of a typical nucleus is only about 1 × 10⁻¹⁴ m (4 × 10⁻¹³ in), or about 1/100,000 of the diameter of the entire atom. The electron cloud makes up the rest of the atom’s overall size. If an atom were magnified until it was as large as a football stadium, the nucleus would be about the size of a grape.

Electrons are tiny, negatively charged particles that form a cloud around the nucleus of an atom. Each electron carries a single fundamental unit of negative electric charge, or –1.

The electron is one of the lightest particles with a known mass. A droplet of water weighs about a billion, billion, billion times more than an electron. Physicists believe that electrons are one of the fundamental particles of physics, which means they cannot be split into anything smaller. Physicists also believe that electrons do not have any real size, but are instead true points in space - that is, an electron has a radius of zero.

Electrons act differently than everyday objects because electrons can behave as both particles and waves. Actually, all objects have this property, but the wavelike behaviour of larger objects, such as sand, marbles, or even people, is too small to measure. In very small particles wave behaviour is measurable and important. Electrons travel around the nucleus of an atom, but because they behave like waves, they do not follow a specific path like a planet orbiting the Sun does. Instead they form regions of negative electric charge around the nucleus. These regions are called orbitals, and they correspond to the space in which the electron is most likely to be found. As we will discuss later, orbitals have different sizes and shapes, depending on the energy of the electrons occupying them.

Protons carry a positive charge of +1, exactly the opposite electric charge as electrons. The number of protons in the nucleus determines the total quantity of positive charge in the atom. In an electrically neutral atom, the number of the protons and the number of electrons are equal, so that the positive and negative charges balance out to zero. The proton is very small, but it is fairly massive compared with the other particles that make up matter. A proton’s mass is about 1,840 times the mass of an electron.

Neutrons are about the same size as protons but their mass is slightly greater. Without neutrons present, the repulsion among the positively charged protons would cause the nucleus to fly apart. Consider the element helium, which has two protons in its nucleus. If the nucleus did not contain neutrons as well, it would be unstable because of the electrical repulsion between the protons. (The process by which neutrons hold the nucleus together is explained below in the Strong Force section of this article.) A helium nucleus needs either one or two neutrons to be stable. Most atoms are stable and exist for a long period of time, but some atoms are unstable and spontaneously break apart and change, or decay, into other atoms.

Unlike electrons, which are fundamental particles, protons and neutrons are made up of other, smaller particles called quarks. Physicists know of six different quarks. Neutrons and protons are made up of up quarks and down quarks - two of the six different kinds of quarks. The fanciful names of quarks have nothing to do with their properties; the names are simply labels to distinguish one quark from another.

Quarks are unique among all elementary particles in that they have electric charges that are fractions of the fundamental charge. All other particles have electric charges of zero or of whole multiples of the fundamental charge. Up quarks have electric charges of +2/3. Down quarks have charges of -1/3. A proton is made up of two up quarks and a down quark, so its electric charge is +2/3 + 2/3 - 1/3, for a total charge of +1. A neutron is made up of an up quark and two down quarks, so its electric charge is +2/3 - 1/3 - 1/3, for a net charge of zero. Physicists believe that quarks are true fundamental particles, so they have no internal structure and cannot be split into something smaller.
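For readers who want to check the fractional-charge arithmetic, a short Python sketch using exact fractions; it assumes only the standard quark charges of +2/3 (up) and -1/3 (down):

```python
from fractions import Fraction

# Electric charges of the up and down quarks, in units of the
# fundamental charge: +2/3 and -1/3.
UP = Fraction(2, 3)
DOWN = Fraction(-1, 3)

# A proton is two up quarks and one down quark.
proton_charge = UP + UP + DOWN       # 2/3 + 2/3 - 1/3 = +1

# A neutron is one up quark and two down quarks.
neutron_charge = UP + DOWN + DOWN    # 2/3 - 1/3 - 1/3 = 0

print(proton_charge)   # 1
print(neutron_charge)  # 0
```

Using exact fractions rather than floating-point numbers keeps the totals exactly +1 and 0, with no rounding error.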

Atoms have several properties that help distinguish one type of atom from another and determine how atoms change under certain conditions.

Each element has a unique number of protons in its atoms. This number is called the atomic number (abbreviated Z). Because atoms are normally electrically neutral, the atomic number also specifies how many electrons an atom will have. The number of electrons, in turn, determines many of the chemical and physical properties of the atom. The lightest atom, hydrogen, has an atomic number equal to one, contains one proton, and (if electrically neutral) one electron. The most massive stable atom found in nature is bismuth (Z = 83). More massive unstable atoms also exist in nature, but they break apart and change into other atoms over time. Scientists have produced even more massive unstable elements in laboratories.

The total number of protons and neutrons in the nucleus of an atom is the mass number of the atom (abbreviated A). The mass number of an atom is an approximation of the mass of the atom. The electrons contribute very little mass to the atom, so they are not included in the mass number. A stable helium atom can have a mass number equal to three (two protons plus one neutron) or equal to four (two protons plus two neutrons). Bismuth, with 83 protons, requires 126 neutrons for stability, so its mass number is 209 (83 protons plus 126 neutrons).

Scientists usually measure the mass of an atom in terms of a unit called the atomic mass unit (abbreviated amu). They define an amu as exactly 1/12 the mass of an atom of carbon with six protons and six neutrons. On this scale, the mass of a proton is 1.00728 amu and the mass of a neutron is 1.00866 amu. The mass of an atom measured in amu is nearly equal to its mass number.
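Using the proton and neutron masses quoted above, a brief Python sketch shows why the mass of an atom in amu comes out close to its mass number (electron masses and nuclear binding energy are ignored, so this is only an approximation):

```python
# Masses from the text, in atomic mass units (amu).
PROTON_MASS = 1.00728
NEUTRON_MASS = 1.00866

def approximate_mass(protons, neutrons):
    """Rough nuclear mass in amu, ignoring binding energy."""
    return protons * PROTON_MASS + neutrons * NEUTRON_MASS

# Helium-4: two protons and two neutrons, mass number A = 4.
mass_he4 = approximate_mass(2, 2)
print(round(mass_he4, 3))  # 4.032, close to the mass number 4
```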

Scientists can use a device called a mass spectrometer to measure atomic mass. A mass spectrometer removes one or more electrons from an atom. The electrons are so light that removing them hardly changes the mass of the atom at all. The spectrometer then sends the atom through a magnetic field, a region of space that exerts a force on magnetic or electrically charged particles. Because of the missing electrons, the atom has more protons than electrons and hence a net positive charge. The magnetic field bends the path of the positively charged atom as it moves through the field. The amount of bending depends on the atom’s mass. Lighter atoms will be affected more strongly than heavier atoms. By measuring how much the atom’s path curves, a scientist can determine the atom’s mass.
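The bending described above follows the textbook relation r = mv/(qB) for a charged particle moving perpendicular to a magnetic field; this formula is standard physics rather than something stated in this article, and the speed and field strength below are arbitrary illustrative values:

```python
AMU = 1.66054e-27              # kilograms per atomic mass unit
ELEMENTARY_CHARGE = 1.602e-19  # coulombs

def bend_radius(mass_amu, speed, charge_units, field_tesla):
    """Radius (m) of an ion's circular path in a magnetic field: r = mv/(qB)."""
    mass_kg = mass_amu * AMU
    return mass_kg * speed / (charge_units * ELEMENTARY_CHARGE * field_tesla)

# Singly ionised helium-4 vs helium-3 at the same speed and field strength:
r4 = bend_radius(4.003, 1.0e5, 1, 0.5)
r3 = bend_radius(3.016, 1.0e5, 1, 0.5)
print(r4 > r3)  # True: the heavier isotope is bent less (larger radius)
```

The radius is proportional to the mass, which is why measuring the curvature of the path reveals the atom’s mass.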

The atomic mass of an atom, which depends on the number of protons and neutrons present, also relates to the atomic weight of an element. Weight usually refers to the force of gravity on an object, but atomic weight is really just another way to express mass. An element’s atomic weight is given in grams. It represents the mass of one mole (6.02 × 10²³ atoms) of that element. Numerically, the atomic mass and the atomic weight of an element are the same, but the first is expressed in atomic mass units and the second in grams. So, the atomic weight of hydrogen is 1 gram and the atomic mass of hydrogen is 1 amu.

Atoms of the same element that differ in mass number are called isotopes. Since all atoms of a given element have the same number of protons in their nucleus, isotopes must have different numbers of neutrons. Helium, for example, has an atomic number of 2 because of the two protons in its nucleus. But helium has two stable isotopes - one with one neutron in the nucleus and a mass number equal to three and another with two neutrons and a mass number equal to four.

Scientists attach the mass number to an element’s name to differentiate between isotopes. Under this convention, helium with a mass number of three is called helium-3, and helium with a mass number of four is called helium-4. Helium in its natural form on Earth is a mixture of these two isotopes. The percentage of each isotope found in nature is called the isotope’s isotopic abundance. The isotopic abundance of helium-3 is very small, only 0.00014 percent, while the abundance of helium-4 is 99.99986 percent. This means that only about one of every 1 million helium atoms is helium-3, and the rest are all helium-4. Bismuth has only one naturally occurring stable isotope, bismuth-209. Bismuth-209’s isotopic abundance is therefore 100 percent. The element with the largest number of stable isotopes found in nature is tin, which has ten stable isotopes.

All elements also have unstable isotopes, which are more susceptible to breaking down, or decaying, than are the other isotopes of an element. When atoms decay, the number of protons in their nucleus changes. Since the number of protons in the nucleus of an atom determines what element that atom belongs to, this decay changes one element into another. Different isotopes decay at different rates. One way to measure the decay rate of an isotope is to find its half-life. An isotope’s half-life is the time that passes until half of a sample of an isotope has decayed.
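The definition of half-life translates directly into a decay formula: after a time t, the fraction of a sample remaining is (1/2) raised to the power t divided by the half-life. A minimal sketch:

```python
def remaining_fraction(elapsed, half_life):
    """Fraction of an isotope sample left after `elapsed` time
    (both arguments in the same time units)."""
    return 0.5 ** (elapsed / half_life)

# After one half-life, half the sample remains; after two, a quarter.
print(remaining_fraction(1.0, 1.0))  # 0.5
print(remaining_fraction(2.0, 1.0))  # 0.25
```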

The various isotopes of a given element have nearly identical chemical properties and many similar physical properties. They differ, of course, in their mass. The mass of a helium-3 atom, for example, is 3.016 amu, while the mass of a helium-4 atom is 4.003 amu.

Usually scientists do not specify the atomic weight of an element in terms of one isotope or another. Instead, they express atomic weight as an average of all of the naturally occurring isotopes of the element, taking into account the isotopic abundance of each. For example, the element copper has two naturally occurring isotopes: copper-63, with a mass of 62.930 amu and an isotopic abundance of 69.2 percent, and copper-65, with a mass of 64.928 amu and an abundance of 30.8 percent. The average mass of naturally occurring copper atoms is equal to the sum of the atomic mass for each isotope multiplied by its isotopic abundance. For copper, it would be (62.930 amu × 0.692) + (64.928 amu × 0.308) = 63.545 amu. The atomic weight of copper is therefore 63.545 g.
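The copper calculation above can be reproduced in a couple of lines of Python:

```python
# Naturally occurring copper isotopes: (mass in amu, isotopic abundance).
copper_isotopes = [
    (62.930, 0.692),  # copper-63
    (64.928, 0.308),  # copper-65
]

# Abundance-weighted average of the isotope masses.
atomic_weight = sum(mass * abundance for mass, abundance in copper_isotopes)
print(round(atomic_weight, 3))  # 63.545
```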

About 300 combinations of protons and neutrons in nuclei are stable enough to exist in nature. Scientists can produce another 3,000 nuclei in the laboratory. These nuclei tend to be extremely unstable because they have too many protons or neutrons to stay in one piece for long. Unstable nuclei, whether naturally occurring or created in the laboratory, break apart or change into stable nuclei through a variety of processes known as radioactive decays.

Some nuclei with an excess of protons simply eject a proton. A similar process can occur in nuclei with an excess of neutrons. A more common process of decay is for a nucleus to eject a cluster of two protons and two neutrons simultaneously. This cluster is actually the nucleus of an atom of helium-4, and this decay process is called alpha decay. Before scientists identified the ejected particle as a helium-4 nucleus, they called it an alpha particle. Helium-4 nuclei are still sometimes called alpha particles.

The most common way for a nucleus to get rid of excess protons or neutrons is to convert a proton into a neutron or a neutron into a proton. This process is known as beta decay. The total electric charge before and after the decay must remain the same. Because protons are electrically charged and neutrons are not, the reaction must involve other charged particles. For example, a neutron can decay into a proton, an electron, and another particle called an electron antineutrino. The neutron has no charge, so the charge at the beginning of the reaction is zero. The proton has an electric charge of +1 and the electron has an electric charge of –1. The antineutrino is a tiny particle with no electric charge. The electric charges of the proton and electron cancel each other, leaving a net charge of zero. The electron is the most easily detected product of this type of beta decay, and scientists called these products beta particles before they identified them as electrons.

Beta decay also results when a proton changes to a neutron. The end result of this decay must have a charge of +1 to balance the charge of the initial proton. The proton changes into a neutron, an anti-electron (also called a positron), and an electron neutrino. A positron is identical to an electron, but the positron has an electric charge of +1. The electron neutrino is a tiny, electrically neutral particle. The difference between the antineutrino in neutron-proton beta decay and the neutrino in proton-neutron beta decay is very subtle - so subtle that scientists have yet to prove that a difference actually exists.
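The charge bookkeeping in both forms of beta decay can be made explicit; a small sketch using the particle charges given in the text:

```python
# Electric charges, in units of the fundamental charge.
CHARGES = {
    'proton': +1, 'neutron': 0, 'electron': -1,
    'positron': +1, 'neutrino': 0, 'antineutrino': 0,
}

def total_charge(particles):
    """Net electric charge of a collection of particles."""
    return sum(CHARGES[p] for p in particles)

# Neutron -> proton beta decay: charge 0 before and after.
neutron_decay_ok = total_charge(['neutron']) == \
    total_charge(['proton', 'electron', 'antineutrino'])

# Proton -> neutron beta decay: charge +1 before and after.
proton_decay_ok = total_charge(['proton']) == \
    total_charge(['neutron', 'positron', 'neutrino'])

print(neutron_decay_ok, proton_decay_ok)  # True True
```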

While scientists often create unstable nuclei in the laboratory, several radioactive isotopes also occur naturally. These atoms decay more slowly than most of the radioactive isotopes created in laboratories. If they decayed too rapidly, they wouldn’t stay around long enough for scientists to find them. The heavy radioactive isotopes found on Earth formed in the interiors of stars more than 5 billion years ago. They were part of the cloud of gas and dust that formed our solar system and, as such, are reminders of the origin of Earth and the other planets. In addition, the decay of radioactive material provides much of the energy that heats Earth’s core.

The most common naturally occurring radioactive isotopes are potassium-40, thorium-232, and uranium-238. Atoms of these isotopes last, on average, for billions of years before undergoing alpha or beta decay. The steady decay of these isotopes and other, more stable atoms allows scientists to determine the age of minerals in which these isotopes occur. Scientists begin by estimating the amount of isotope that was present when the mineral formed, then measure how much has decayed. Knowing the rate at which the isotope decays, they can determine how much time has passed. This process, known as radioactive dating, allows scientists to measure the age of Earth. The currently accepted value for Earth’s age is about 4.5 billion years. Scientists have also examined rocks from the Moon and other objects in the solar system and have found that they have similar ages.
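Radioactive dating as described above amounts to solving the half-life formula for time: the elapsed time equals the half-life multiplied by the base-2 logarithm of (original amount / remaining amount). A minimal sketch (the uranium-238 half-life of roughly 4.5 billion years is a standard value, not taken from this article):

```python
import math

def age_from_decay(original_amount, remaining_amount, half_life):
    """Elapsed time implied by how much of an isotope has decayed."""
    return half_life * math.log2(original_amount / remaining_amount)

# If half of a uranium-238 sample has decayed (half-life roughly
# 4.5 billion years), the mineral is about one half-life old:
age = age_from_decay(1.0, 0.5, 4.5e9)
print(age)  # about 4.5 billion years
```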

In physics, a force is a push or pull on an object. There are four fundamental forces, three of which - the electromagnetic force, the strong force, and the weak force - are involved in keeping stable atoms in one piece and determining how unstable atoms will decay. The electromagnetic force keeps electrons attached to their atom. The strong force holds the protons and neutrons together in the nucleus. The weak force governs how atoms decay when they have excess protons or neutrons. The fourth fundamental force, gravity, only becomes apparent with objects much larger than subatomic particles.

The most familiar of the forces at work inside the atom is the electromagnetic force. This is the same force that causes people’s hair to stick to a brush or comb when they have a buildup of static electricity. The electromagnetic force causes opposite electric charges to attract each other. Because of this force, the negatively charged electrons in an atom are attracted to the positively charged protons in the atom’s nucleus. This force of attraction binds the electrons to the atom. The electromagnetic force becomes stronger as the distance between charges becomes smaller. This property usually causes oppositely charged particles to come as close to each other as possible. For many years, scientists wondered why electrons didn’t just spiral into the nucleus of an atom, getting as close as possible to the protons. Physicists eventually learned that particles as small as electrons can behave like waves, and this property keeps electrons at set distances from the atom’s nucleus. The wavelike nature of electrons is discussed below in the Quantum Atom section of this article.

The electromagnetic force also causes like charges to repel each other. The negatively charged electrons repel one another and tend to move far apart from each other, but the positively charged nucleus exerts enough electromagnetic force to keep the electrons attached to the atom. Protons in the nucleus also repel one other, but, as described below, the strong force overcomes the electromagnetic force in the nucleus to hold the protons together.

Protons and neutrons in the nuclei of atoms are held together by the strong force. This force must overcome the electromagnetic force of repulsion the protons in a nucleus exert on one another. The strong force that occurs between protons alone, however, is not enough to hold them together. Other particles that add to the strong force, but not to the electromagnetic force, must be present to make a nucleus stable. The particles that provide this additional force are neutrons. Neutrons add to the strong force of attraction but have no electric charge and so do not increase the electromagnetic repulsion.

The strong force only operates at very short range - about 2 femtometres (abbreviated fm), or 2 × 10⁻¹⁵ m (8 × 10⁻¹⁴ in). Physicists also use the word fermi (also abbreviated fm) for this unit in honour of Italian-born American physicist Enrico Fermi. The short-range property of the strong force makes it very different from the electromagnetic and gravitational forces. These latter forces become weaker as distance increases, but they continue to affect objects millions of light-years away from each other. Conversely, the strong force has such limited range that not even all protons and neutrons in the same nucleus feel each other’s strong force. Because the diameter of even a small nucleus is about 5 to 6 fm, protons and neutrons on opposite sides of a nucleus only feel the strong force from their nearest neighbours.

The strong force differs from electromagnetic and gravitational forces in another important way - the way it changes with distance. Electromagnetic and gravitational forces of attraction increase as particles move closer to one another, no matter how close the particles get. This increase causes particles to move as close together as possible. The strong force, on the other hand, remains roughly constant as protons and neutrons move closer together than about 2 fm. If the particles are forced much closer together, the attractive nuclear force suddenly turns repulsive. This property causes nuclei to form with the same average spacing - about 2 fm - between the protons and neutrons, no matter how many protons and neutrons there are in the nucleus.

The unique nature of the strong force determines the relative number of protons and neutrons in the nucleus. If a nucleus has too many protons, the strong force cannot overcome the electromagnetic repulsion of the protons. If the nucleus has too many neutrons, the excess strong force tries to crowd the protons and neutrons too close together. Most stable atomic nuclei fall between these extremes. Lighter nuclei, such as carbon-12 and oxygen-16, are made up of 50 percent protons and 50 percent neutrons. More massive nuclei, such as bismuth-209, contain about 40 percent protons and 60 percent neutrons.

Particle physicists explained the behaviour of the strong force by introducing another type of particle, called a pion. Protons and neutrons interact in the nucleus by exchanging pions. Exchanging pions pulls protons and neutrons together. The process is similar to two people having a game of catch with a heavy ball, but with each person attached to the ball by a spring. As one person throws the ball to the other, the spring pulls the thrower toward the ball. If the players exchange the ball rapidly enough, the ball and springs become just a blur to an observer, and it appears as if the two throwers are simply pulled toward one another. This is what occurs in the nuclei of atoms. The protons and neutrons in the nucleus are the people, pions act as the ball, and the strong force acts as the springs holding everything together.

Pions in the nucleus exist only for the briefest instant of time, no more than 1 × 10⁻²³ seconds, but even during their short existence they can provide the attraction that holds the nucleus together. Pions can also exist as independent particles outside of the nucleus of an atom. Scientists have created them by striking high-speed protons against a target. Even though the free pions also live only for a short period of time (about 1 × 10⁻⁸ seconds), scientists have been able to study their properties.

The weak force lives up to its name - it is much weaker than the electromagnetic and strong forces. Like the strong force, it only acts over a short distance, about 0.01 fm. Unlike these other forces, however, the weak force affects all the particles in an atom. The electromagnetic force only affects the electrons and protons, and the strong force only affects the protons and neutrons. When a nucleus has too many protons to hold together or so many neutrons that the strong force squeezes too tightly, the weak force actually changes one type of particle into another. When an atom undergoes one type of decay, for example, the weak force causes a neutron to change into a proton, an electron, and an electron antineutrino. The total electric charge and the total energy of the particles remain the same before and after the change.

Scientists of the early 20th century found they could not explain the behaviour of atoms using the knowledge of matter available at the time. They had to develop a new view of matter and energy to describe accurately how atoms behaved. They called this theory quantum theory, or quantum mechanics. Quantum theory describes matter as acting both as a particle and as a wave. In the visible objects encountered in everyday life, the wavelike nature of matter is too small to be apparent. Wavelike nature becomes important, however, in microscopic particles such as electrons. As we have discussed, electrons in atoms behave like waves. They exist as a fuzzy cloud of negative charge around the nucleus, instead of as a particle located at a single point.

In order to understand the quantum model of the atom, we must know some basic facts about waves. Waves are vibrations that repeat in a regular pattern. A familiar example of waves occurs when one end of a rope is tied to a fixed object and someone moves the other end up and down. This action creates waves that travel along the rope. The highest point that the rope reaches is called the crest of the wave. The lowest point is called the trough of the wave. Troughs and crests follow each other in a regular sequence. The distance from one trough to the next trough, or from one crest to the next crest, is called a wavelength. The number of wavelengths that pass a certain point in a given amount of time is called the wave’s frequency.

In physics, the word wave usually means the entire pattern, which may consist of many individual troughs and crests. For example, when the person holding the loose end of the rope moves it up and down very fast, many troughs and crests occupy the rope at once. A physicist would use the word wave to describe the entire set of troughs and crests on the rope.

When two waves meet each other, they merge in a process called interference. Interference creates a new wave pattern. If two waves with the same wavelength and frequency come together, the resulting pattern depends on the relative position of the waves’ crests. If the crests and troughs of the two waves coincide, the waves are said to be in phase. Waves in phase with each other will merge to produce higher crests and lower troughs. Physicists call this type of interference constructive interference.

Sometimes waves with the same wavelength and frequency are out of phase, meaning they meet in such a way that their respective crests and troughs do not coincide. In these cases the waves produce destructive interference. If two identical waves are exactly half a wavelength out of phase, the crests of one wave line up with the troughs of the other. These waves cancel each other out completely, and no wave will appear. If two waves meet that are not exactly in phase and not exactly one-half wavelength out of phase, they will interfere constructively in some places and destructively in others, producing a complicated new wave.
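Constructive and destructive interference can be demonstrated numerically by adding two sine waves; a short sketch:

```python
import math

def combined_amplitude(t, phase_shift):
    """Sum of two identical waves, the second shifted in phase."""
    return math.sin(t) + math.sin(t + phase_shift)

t = 1.234  # any sample point along the wave

# In phase (shift of zero): crests coincide and reinforce.
in_phase = combined_amplitude(t, 0.0)
print(abs(in_phase - 2 * math.sin(t)) < 1e-12)  # True: double the amplitude

# Half a wavelength out of phase (shift of pi): crests meet troughs.
out_of_phase = combined_amplitude(t, math.pi)
print(abs(out_of_phase) < 1e-12)  # True: the waves cancel completely
```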

Electrons behave as both particles and waves in atoms. This characteristic is called wave-particle duality. Wave-particle duality actually affects all particles and collections of particles, including protons, neutrons, and atoms themselves. But in terms of the structure of the atom, the wavelike nature of the electron is the most important.

As waves, electrons have wavelengths and frequencies. The wavelength of an electron depends on the electron’s energy. Since the energy of electrons is kinetic (energy related to motion), an electron’s wavelength depends on how fast it is moving. The more energy an electron has, the shorter its wavelength is. Electron waves can interfere with each other, just as waves along a rope do.
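The relationship the paragraph describes qualitatively is the standard de Broglie relation, wavelength = h / (mass × speed). A minimal sketch, using commonly tabulated values for Planck's constant and the electron mass:

```python
H = 6.626e-34    # Planck's constant, in joule-seconds
M_E = 9.109e-31  # electron mass, in kilograms

def de_broglie_wavelength(speed):
    """Wavelength (in metres) of an electron moving at the given speed
    (in metres per second), ignoring relativistic effects."""
    return H / (M_E * speed)

slow = de_broglie_wavelength(1.0e6)
fast = de_broglie_wavelength(1.0e7)
# The faster, more energetic electron has the shorter wavelength.
print(slow, fast)
```

An electron moving at a million metres per second has a wavelength of less than a nanometre; ten times the speed gives one tenth the wavelength.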

Because of the electron’s wave-particle duality, physicists cannot define an electron’s exact location in an atom. If the electron were just a particle, measuring its location would be relatively simple. As soon as physicists try to measure its location, however, the electron’s wavelike nature becomes apparent, and they cannot pinpoint an exact location. Instead, physicists calculate the probability that the electron is located in a certain place. Adding up all these probabilities, physicists can produce a picture of the electron that resembles a fuzzy cloud around the nucleus. The densest part of this cloud represents the place where the electron is most likely to be located.

Physicists call the region of space an electron occupies in an atom the electron’s orbital. Similar orbitals constitute groups called shells. The electrons in the orbitals of a particular shell have similar levels of energy. This energy is in the form of both kinetic energy and potential energy. Lower shells are close to the nucleus and higher shells are farther from the nucleus. Electrons occupying orbitals in higher shells generally have more energy than electrons occupying orbitals in lower shells.

The wavelike nature of electrons sets boundaries for their possible locations and determines what shape their orbital, or cloud of probability, will form. Orbitals differ from each other in size, angular momentum, and magnetic properties. In general, angular momentum is the energy an object contains based on how fast the object is revolving, the object’s mass, and the object’s distance from the axis around which it is revolving. The angular momentum of a whirling ball tied to a string, for example, would be greater if the ball were heavier, the string longer, or the whirling faster. In atoms, the angular momentum of an electron orbital depends on the size and shape of the orbital. Orbitals with the same size and shape all have the same angular momentum. Some orbitals, however, can differ in shape but still have the same angular momentum. The magnetic properties of an orbital describe how it would behave in a magnetic field. Magnetic properties also depend on the size and shape of the orbital, as well as on the orbital’s orientation in space.

The orbitals in an atom must occur at certain distances from the nucleus to create a stable atom. At these distances, the orbitals allow the electron wave to complete one or more half-wavelengths (½, 1, 1½, 2, 2½, and so on, measured in whole wavelengths) as it travels around the nucleus. The electron wave can then double back on itself and constructively interfere with itself in a way that reinforces the wave. At any other distance, the electron wave would interfere with itself destructively, creating an unstable atom.

Physicists call the number of half-wavelengths that an orbital allows the orbital’s principal quantum number (abbreviated n). In general, this number determines the size of the orbital. Larger orbitals allow more half-wavelengths and therefore have higher principal quantum numbers. The orbital that allows one half-wavelength has a principal quantum number of one. Only one orbital allows one half-wavelength. More than one orbital can allow two or more half-wavelengths. These orbitals may have the same principal quantum number, but they differ from each other in their angular momentum and their magnetic properties. The orbitals that allow one wavelength have a principal quantum number of 2 (n = 2), the orbitals that allow one and a half wavelengths have a principal quantum number of 3 (n = 3), and so on. The set of orbitals with the same principal quantum number makes up a shell.

Physicists use a second number to describe the angular momentum of an orbital. This number is called the orbital’s secondary quantum number, or its angular momentum quantum number (abbreviated l). The highest possible value of this number is one less than the number of half-wavelengths the orbital allows. This means that an orbital with a principal quantum number of n can have any whole-number value from 0 to n − 1 for its secondary quantum number.

Physicists customarily use letters to indicate orbitals with certain secondary quantum numbers. In order of increasing angular momentum, the orbitals with the six lowest secondary quantum numbers are indicated by the letters s, p, d, f, g, and h. The letter s corresponds to the secondary quantum number 0, the letter p corresponds to the secondary quantum number 1, and so on. In general, the angular momentum of an orbital depends on its shape. An s-orbital, with a secondary quantum number of 0, is spherical. A p-orbital, with a secondary quantum number of 1, resembles two lobes facing one another. The possible combinations of principal and secondary quantum numbers for the first five shells are: shell 1 allows only s-orbitals; shell 2 allows s- and p-orbitals; shell 3 allows s-, p-, and d-orbitals; shell 4 allows s-, p-, d-, and f-orbitals; and shell 5 allows s-, p-, d-, f-, and g-orbitals.
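The rule that the secondary quantum number runs from 0 up to one less than the principal quantum number can be sketched in a few lines of Python (an illustrative snippet; the function name is an assumption):

```python
LETTERS = "spdfgh"  # letters for secondary quantum numbers 0 through 5

def subshells(n):
    """Letters of the subshells allowed in shell n: the secondary
    quantum number l runs over whole numbers from 0 up to n - 1."""
    return [LETTERS[l] for l in range(n)]

for n in range(1, 6):
    print(n, subshells(n))
```

Running this lists one subshell (s) for the first shell, two (s and p) for the second, and so on up through g for the fifth.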

More than one orbital can allow the same number of half-wavelengths and have the same angular momentum. Physicists call orbitals in a shell that all have the same angular momentum a subshell. They designate a subshell with the subshell’s principal and secondary quantum numbers. For example, the 1s subshell is the group of orbitals in the first shell with an angular momentum described by the letter s. The 2p subshell is the group of orbitals in the second shell with an angular momentum described by the letter p.

Orbitals within a subshell differ from each other in their magnetic properties. The magnetic properties of an orbital depend on its shape and orientation in space. For example, a p-orbital can have three different orientations in space: one situated up and down, one from side to side, and a third from front to back.

Physicists describe the magnetic properties of an orbital with a third quantum number called the orbital’s magnetic quantum number (abbreviated m). The magnetic quantum number determines how orbitals with the same size and angular momentum are oriented in space. An orbital’s magnetic quantum number can only have whole-number values ranging from the value of the orbital’s secondary quantum number down to the negative value of the secondary quantum number. A p-orbital, for example, has a secondary quantum number of 1 (l = 1), so the magnetic quantum number has three possible values: +1, 0, and -1. This means the p-orbital has three possible orientations in space. An s-orbital has a secondary quantum number of 0 (l = 0), so the magnetic quantum number has only one possibility: 0. This orbital is a sphere, and a sphere can only have one orientation in space. For a d-orbital, the secondary quantum number is 2 (l = 2), so the magnetic quantum number has five possible values: -2, -1, 0, +1, and +2. A d-orbital has four possible orientations in space, as well as a fifth orbital that differs in shape from the other four. Together, the principal, secondary, and magnetic quantum numbers specify a particular orbital in an atom.
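The counting rules above can be made concrete in a short sketch (function names are assumptions for the illustration):

```python
def magnetic_numbers(l):
    """Allowed magnetic quantum numbers for secondary quantum number l:
    whole numbers from -l up through +l."""
    return list(range(-l, l + 1))

def orbitals_in_shell(n):
    """Total number of orbitals in shell n, summed over its subshells."""
    return sum(len(magnetic_numbers(l)) for l in range(n))

print(magnetic_numbers(1))   # p subshell: three orientations
print(magnetic_numbers(2))   # d subshell: five possible values
print(orbitals_in_shell(3))  # 1 + 3 + 5 = 9 orbitals in the third shell
```

Each subshell contributes 2l + 1 orbitals, so the third shell holds one s-orbital, three p-orbitals, and five d-orbitals: nine in all.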

Electrons are a type of particle known as a fermion. Austrian-American physicist Wolfgang Pauli discovered that no two fermions can have the exact same quantum numbers. This rule, called the Pauli exclusion principle, means that two identical electrons cannot occupy the same orbital in an atom. Scientists know, however, that each orbital can hold two electrons. Electrons have another property, called spin, that differentiates the two electrons in each orbital. An electron’s spin has two possible values: +½ (called spin-up) or -½ (called spin-down). These two possible values mean that two electrons can occupy the same orbital, as long as their spins are different. Physicists call spin the fourth quantum number of an electron (abbreviated ms). Spin, in addition to the other three quantum numbers, uniquely describes a particular electron in an atom.
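A consequence of the exclusion principle is the familiar shell capacities, which a small sketch can reproduce (the function name is an assumption for the illustration):

```python
def shell_capacity(n):
    """Electrons shell n can hold under the Pauli exclusion principle:
    each of its n**2 orbitals takes two electrons of opposite spin."""
    orbitals = sum(2 * l + 1 for l in range(n))  # orbitals per subshell
    return 2 * orbitals

print([shell_capacity(n) for n in range(1, 5)])  # [2, 8, 18, 32]
```

Two electrons per orbital gives the well-known capacities of 2, 8, 18, and 32 electrons for the first four shells.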

When electrons collect around an atom’s nucleus, they fill up orbitals in a definite pattern. They seek the first available orbital that takes the least amount of energy to occupy. Generally, it takes more energy to occupy orbitals with higher quantum numbers. It takes the same energy to occupy all the orbitals in a subshell. The lowest energy orbital is the one closest to the nucleus. It has a principal quantum number of 1, a secondary quantum number of 0, and a magnetic quantum number of 0. The first two electrons - with opposite spins - occupy this orbital.

If an atom has more than two electrons, the electrons begin filling orbitals in the next subshell with one electron each until all the orbitals in the subshell have one electron. The electrons that are left then go back and fill each orbital in the subshell with a second electron with opposite spin. They follow this order because it takes less energy to add an electron to an empty orbital than to complete a pair of electrons in an orbital. The electrons fill all the subshells in a shell, then go on to the next shell. As the subshells and shells increase, the order of energy for orbitals becomes more complicated. For example, it takes slightly less energy to occupy the s-subshell in the fourth shell than it does to occupy the d-subshell in the third shell. Electrons will therefore fill the orbitals in the 4s subshell before they fill the orbitals in the 3d subshell, even though the 3d subshell is in a lower shell.
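The 4s-before-3d ordering described above is commonly approximated by the "n + l rule" (the Madelung rule), which the article does not name but which reproduces its example. A hedged sketch:

```python
def filling_order(max_n=5):
    """Subshells sorted by the n + l rule of thumb: lower n + l fills
    first, and ties are broken in favour of the lower n."""
    letters = "spdf"
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(min(n, 4))]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [str(n) + letters[l] for n, l in subshells]

order = filling_order()
print(order.index("4s") < order.index("3d"))  # True: 4s fills before 3d
```

The rule gives 1s, 2s, 2p, 3s, 3p, 4s, 3d, and so on; it is an approximation, and a few heavy elements deviate from it.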

The atom’s electron cloud, that is, the arrangement of electrons around an atom, determines most of the atom’s physical and chemical properties. Scientists can therefore predict how atoms will interact with other atoms by studying their electron clouds. The electrons in the outermost shell largely determine the chemical properties of an atom. If this shell is full, meaning all the orbitals in the shell have two electrons, then the atom is stable, and it won’t react readily with other atoms. If the shell is not full, the atom will chemically react with other atoms, exchanging or sharing electrons in order to fill its outer shell. Atoms bond with other atoms to fill their outer shells because it requires less energy to exist in this bonded state. Atoms always seek to exist in the lowest energy state possible.

Physicists call the outer shell of an atom its valence shell. The valence shell determines the atom’s chemical behaviour, or how it reacts with other elements. The fullness of an atom’s valence shell affects how the atom reacts with other atoms. Atoms with valence shells that are completely full are not likely to interact with other atoms. Six gaseous elements - helium, neon, argon, krypton, xenon, and radon - have full valence shells. These six elements are often called the noble gases because they do not normally form compounds with other elements. The noble gases are chemically inert because their atoms are in a state of low energy. A full valence shell, like that of atoms of noble gases, provides the lowest and most stable energy for an atom.

Atoms that do not have a full valence shell try to lower their energy by filling up their valence shell. They can do this in several ways: Two atoms can share electrons to complete the valence shell of both atoms, an atom can shed or take on electrons to create a full valence shell, or a large number of atoms can share a common pool of electrons to complete their valence shells.

When two atoms share a pair of electrons, they form a covalent bond. When atoms bond covalently, they form molecules. A molecule can be made up of two or more atoms, all joined with covalent bonds. Each atom can share its electrons with one or more other atoms. Some molecules contain chains of thousands of covalently bonded atoms.

Carbon is an important example of an element that readily forms covalent bonds. Carbon has a total of six electrons. Two of the electrons fill up the first orbital, the 1s orbital, which is the only orbital in the first shell. The rest of the electrons partially fill carbon’s valence shell. Two fill up the next orbital, the 2s orbital, which forms the 2s subshell. Carbon’s valence shell still has the 2p subshell, containing three p-orbitals. The two remaining electrons each half-fill one of the orbitals in the 2p subshell. The carbon atom thus has two half-full orbitals and one empty orbital in its valence shell. A carbon atom fills its valence shell by sharing electrons with other atoms, creating covalent bonds. The carbon atom can bond with other atoms through any of the three unfilled orbitals in its valence shell. The three available orbitals in carbon’s valence shell enable carbon to bond with other atoms in many different ways. This flexibility allows carbon to form a great variety of molecules, which can have a similarly great variety of geometrical shapes. This diversity makes carbon-based molecules the basis for living things.

Atoms can also lose or gain electrons to complete their valence shell. An atom will tend to lose electrons if it has just a few electrons in its valence shell. After losing the electrons, the next lower shell, which is full, becomes its valence shell. An atom will tend to steal electrons away from other atoms if it only needs a few more electrons to complete the shell. Losing or gaining electrons gives an atom a net electric charge because the number of electrons in the atom is no longer the same as the number of protons. Atoms with net electric charge are called ions. Scientists call atoms with a net positive electric charge cations (pronounced CAT-eye-uhns) and atoms with a net negative electric charge anions (pronounced AN-eye-uhns).

The oppositely charged cations and anions are attracted to each other by electromagnetic force and form ionic bonds. When these ions come together, they form crystals. A crystal is a solid material made up of repeating patterns of atoms. Alternating positive and negative ions build up into a solid lattice, or framework. Crystals are also called ionic compounds, or salts.

The element sodium is an example of an atom that has a single electron in its valence shell. It will easily lose this electron and become a cation. Chlorine atoms are just one electron away from completing their valence shell. They will tend to steal an electron away from another atom, forming an anion. When sodium and chlorine atoms come together, the sodium atoms readily give up their outer electron to the chlorine atoms. The oppositely charged ions bond with each other to form the crystal known as sodium chloride, or table salt.

Atoms can complete their valence shells in a third way: by bonding together in such a way that all the atoms in the substance share each other’s outer electrons. This is the way metallic elements bond and fill their valence shells. Metals form crystal lattice structures similar to salts, but the outer electrons in their atoms do not belong to any atom in particular. Instead, the outer electrons belong to all the atoms in the crystal, and they are free to move throughout the crystal. This property makes metals good conductors of electricity.

The organization of the periodic table reflects the way elements fill their orbitals with electrons. Scientists first developed this chart by grouping together elements that behave similarly in order of increasing atomic number. Scientists eventually realized that the chemical and physical behaviour of elements was dependent on the electron clouds of the atoms of each element. The periodic table does not have a simple rectangular shape. Each column lists elements that share chemical properties, properties that depend on the arrangement of electrons in the orbitals of atoms. These elements have the same number of electrons in their valence shells. Different numbers of elements have similar valence shells, so the columns of the periodic table differ in height. The noble gases are all located in the rightmost column of the periodic table, labelled column 18. The noble gases all have full valence shells and are extremely stable. The column labelled 11 holds the elements copper, silver, and gold. These elements are metals that have partially filled valence shells and conduct electricity well.

Each electron in an atom has a particular energy. This energy depends on the electron’s speed, the presence of other electrons, the electron’s distance from the nucleus, and the positive charge of the nucleus. For atoms with more than one electron, calculating the energy of each electron becomes too complicated to be practical. However, the order and relative energies of electrons follows the order of the electron orbitals, as discussed in the Electron Orbital and Shell section of this article. Physicists call the energy an electron has in a particular orbital the energy state of the electron. For example, the 1s orbital holds the two electrons with the lowest possible energies in an atom. These electrons are in the lowest energy state of any electrons in the atom.

When an atom gains or loses energy, it does so by adding energy to, or removing energy from, its electrons. This change in energy causes the electrons to move from one orbital, or allowed energy state, to another. Under ordinary conditions, all electrons in an atom are in their lowest possible energy states, given that only two electrons can occupy each orbital. Atoms gain energy by absorbing it from light or from a collision with another particle, or they gain it by entering an electric or magnetic field. When an atom absorbs energy, one or more of its electrons moves to a higher, or more energetic, orbital. Usually atoms can only hold energy for a very short amount of time - typically 1 × 10⁻¹² seconds or less. When electrons drop back down to their original energy states, they release their extra energy in the form of a photon (a packet of radiation). Sometimes this radiation is in the form of visible light. The light emitted by a fluorescent lamp is an example of this process.
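The energy of the emitted photon fixes its wavelength through the standard relation E = hc / wavelength. A minimal sketch using commonly tabulated constants:

```python
H = 6.626e-34   # Planck's constant, in joule-seconds
C = 2.998e8     # speed of light, in metres per second
EV = 1.602e-19  # one electron volt, in joules

def photon_wavelength_nm(energy_ev):
    """Wavelength (in nanometres) of the photon released when an
    electron drops between states separated by energy_ev."""
    return H * C / (energy_ev * EV) * 1e9

print(photon_wavelength_nm(2.0))     # about 620 nm: visible red-orange light
print(photon_wavelength_nm(2000.0))  # well under 1 nm: an X ray
```

A 2-electron-volt drop, typical of outer electrons, gives visible light; a drop a thousand times larger, typical of inner electrons in heavy atoms, gives an X ray.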

The outer electrons in an atom are easier to move to higher orbitals than the electrons in lower orbitals. The inner electrons require more energy to move because they are closer to the nucleus and therefore experience a stronger electromagnetic pull toward the nucleus than the outer electrons. When an inner electron absorbs energy and then falls back down, the photon it emits has more energy than the photon an outer electron would emit. The emitted energy relates directly to the wavelength of the photon. Photons with more energy are made of radiation with a shorter wavelength. When inner electrons drop down, they emit high-energy radiation, in the range of an X ray. X rays have much shorter wavelengths than visible light. When outer electrons drop down, they emit light with longer wavelengths, in the range of visible light.

Physicists and chemists first learned about the properties of atoms indirectly, by studying the way that atoms join together in molecules or how atoms and molecules make up solids, liquids, and gases. Modern devices such as electron microscopes, particle traps, spectroscopes, and particle accelerators allow scientists to perform experiments on small groups of atoms and even on individual atoms. Scientists use these experiments to study the properties of atoms more directly.

One of the most direct ways to study an object is to take its photograph. Scientists take photographs of atoms by using an electron microscope. An electron microscope imitates a normal camera, but it uses electrons instead of visible light to form an image. In photography, light reflects off of an object and is recorded on film or some other kind of detector. Taking a photograph of an atom with light is difficult because atoms are so tiny. Light, like all waves, tends to diffract, or bend around objects in its path. In order to take a sharp photograph of any object, the wavelength of the light that bounces off the object must be much smaller than the size of the object. If the object is about the same size as or smaller than the light’s wavelength, the light will bend around the object and produce a fuzzy image.

Atoms are so small that even the shortest wavelengths of visible light will diffract around them. Therefore, capturing photographic images of atoms requires the use of waves that are shorter than those of visible light. X rays are a type of electromagnetic radiation like visible light, but they have very short wavelengths - much too short to be visible to human eyes. X-ray wavelengths are small enough to prevent the waves from diffracting around atoms. X rays, however, have so much energy that when they bounce off an atom, they knock electrons away from the atom. Scientists, therefore, cannot use X rays to take a picture of an atom without changing the atom. They must use a different method to get an accurate picture.

Electron microscopes provide scientists with an alternate method. Scientists shine electrons, instead of light, on an atom. As discussed in the Electrons as Waves section of this article, electrons have wavelike properties, so they can behave like light waves. The simplest type of electron microscope focuses the electrons reflected off of an object and translates the pattern formed by the reflected electrons into a visible display. Scientists have used this technique to create images of tiny insects and even individual living cells, but they have not been able to use it to make a clear image of objects smaller than about 10 nanometres (abbreviated nm), or 1 × 10⁻⁸ m (4 × 10⁻⁷ in).

To get to the level of individual atoms, scientists must use a more powerful type of electron microscope called a scanning tunnelling microscope (STM). An STM uses a tiny probe, the tip of which can be as small as a single atom, to scan an object. An STM takes advantage of another wavelike property of electrons called tunnelling. Tunnelling allows electrons emitted from the probe of the microscope to penetrate, or tunnel into, the surface of the object being examined. The rate at which the electrons tunnel from the probe to the surface is related to the distance between the probe and the surface. These moving electrons generate a tiny electric current that the STM measures. The STM constantly adjusts the height of the probe to keep the current constant. By tracking how the height of the probe changes as the probe moves over the surface, scientists can get a detailed map of the surface. The map can be so detailed that individual atoms on the surface are visible.

Studying single atoms or small samples of atoms can help scientists understand atomic structure. However, all atoms, even atoms that are part of a solid material, are constantly in motion. This constant motion makes them difficult to examine. To study single atoms, scientists must slow the atoms down and confine them to one place. Scientists can slow and trap atoms using devices called particle traps.

Slowing down atoms is actually the same as cooling them. This is because an atom’s rate of motion is directly related to its temperature. Atoms that are moving very quickly cause a substance to have a high temperature. Atoms moving more slowly create a lower temperature. Scientists therefore build traps that cool atoms down to a very low temperature.
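The link between temperature and speed can be sketched with the standard kinetic-theory relation v = sqrt(3kT/m). This is an illustrative snippet; the rubidium mass is an approximate assumed value:

```python
import math

K_B = 1.381e-23  # Boltzmann's constant, J/K
M_RB = 1.44e-25  # approximate mass of a rubidium atom, kg (assumed value)

def rms_speed(temperature, mass):
    """Typical (root-mean-square) speed of a gas atom at the given
    temperature in kelvins: v = sqrt(3 * k * T / m)."""
    return math.sqrt(3 * K_B * temperature / mass)

print(rms_speed(300.0, M_RB))  # room temperature: roughly 300 m/s
print(rms_speed(1e-6, M_RB))   # one microkelvin: under 2 cm/s
```

Cooling a rubidium atom from room temperature to a microkelvin slows it from hundreds of metres per second to a slow crawl, which is exactly what particle traps exploit.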

Several different types of particle traps exist. Some traps are designed to slow down ions, while others are designed to slow electrically neutral atoms. Traps for ions often use electric and magnetic fields to influence the movement of the particle, confining it in a small space or slowing it down. Traps for neutral atoms often use lasers, beams of light in which the light waves are uniform and consistent. Light has no mass, but it moves so quickly that it does have momentum. This property allows the light to affect other particles, or ‘bump’ into them. When laser light collides with atoms, the momentum of the light forces the atoms to change speed and direction.

Scientists use trapped and cooled atoms for a variety of experiments, including those that precisely measure the properties of individual atoms and those in which scientists construct extremely accurate atomic clocks. Atomic clocks keep track of time by counting waves of radiation emitted by atoms in traps inside the clock. Because the traps hold the atoms at low temperatures, the mechanisms inside the clock can exercise more control over the atom, reducing the possibility of error. Scientists can also use isolated atoms to measure the force of gravity in an area with extreme accuracy. These measurements are useful in oil exploration, among other things. A deposit of oil or other substance beneath Earth’s surface has a different density than the material surrounding it. The strength of the pull of gravity in an area depends on the density of material in the area, so these changes in density produce changes in the local strength of gravity. Advances in the manipulation of atoms have also raised the possibility of using atoms to etch electronic circuits. This would help make the circuits smaller and thereby allow more circuits to fit in a smaller area.

In 1995 American physicists used particle traps to cool a sample of rubidium atoms to a temperature near absolute zero (−273°C, or −459°F). Absolute zero is the temperature at which all motion stops. When the scientists cooled the rubidium atoms to such a low temperature, the atoms slowed almost to a stop. The scientists knew that the momentum of the atoms, which is related to their speed, was close to zero. At this point, a special rule of quantum physics, called the uncertainty principle, greatly affected the positions of the atoms. This rule states that the momentum and position of a particle cannot both have precise values at the same time. The scientists had a fairly precise value for the atoms’ momentum (nearly zero), so the positions of the atoms became very imprecise. The position of each atom could be described as a large, fuzzy cloud of probability. The atoms were very close together in the trap, so the probability clouds of many atoms overlapped one another. It was impossible for the scientists to tell where one atom ended and another began. In effect, the atoms formed one huge particle. This new state of matter is called a Bose-Einstein condensate.
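The trade-off the uncertainty principle imposes can be sketched with its standard quantitative form, delta_x × delta_p ≥ ħ/2 (the function name is an assumption for the illustration):

```python
HBAR = 1.055e-34  # reduced Planck's constant, J*s

def min_position_uncertainty(momentum_uncertainty):
    """Smallest position uncertainty the Heisenberg relation allows:
    delta_x * delta_p >= hbar / 2."""
    return HBAR / (2 * momentum_uncertainty)

# As the momentum uncertainty shrinks, the position cloud must grow.
print(min_position_uncertainty(1e-27))
print(min_position_uncertainty(1e-30))
```

Shrinking the momentum uncertainty by a factor of a thousand forces the position cloud to grow a thousandfold, which is why the nearly motionless rubidium atoms spread into overlapping clouds.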

Spectroscopy is the study of the radiation, or energy, that atoms, ions, molecules, and atomic nuclei emit. This emitted energy is usually in the form of electromagnetic radiation, vibrating electric and magnetic waves. Electromagnetic waves can have a variety of wavelengths, including those of visible light. X rays, ultraviolet radiation, and infrared radiation are also forms of electromagnetic radiation. Scientists use spectroscopes to measure this emitted radiation.

Atoms emit radiation when their electrons lose energy and drop down to lower orbitals, or energy states, as described in the Electron Energy Levels section above. The difference in energy between the orbitals determines the wavelength of the emitted radiation. This radiation can be in the form of visible light for outer electrons, or it can be radiation of shorter wavelengths, such as X-ray radiation, for inner electrons. Because the energies of the orbitals are strictly defined and differ from element to element, atoms of a particular element can only emit certain wavelengths of radiation. By studying the wavelengths of radiation emitted by a substance, scientists can identify the element or elements comprising the substance. For example, the outer electrons in a sodium atom emit a characteristic yellow light when they return to lower orbitals. This is why street lamps that use sodium vapour have a yellowish glow.

Chemists often use a procedure called a flame test to identify elements. In a flame test, the chemist burns a sample of the element. The heat excites the outer electrons in the element’s atoms, making the electrons jump to higher energy orbitals. When the electrons drop back down to their original orbitals, they emit light characteristic of that element. This light colours the flame and allows the chemist to identify the element.

The inner electrons of atoms also emit radiation that can help scientists identify elements. The energy it takes to boost an inner electron to a higher orbital is directly related to the positive charge of the nucleus and the pull this charge exerts on the electron. When the electron drops back to its original level, it emits the same amount of energy it absorbed, so the emitted energy is also related to the nucleus’s charge. The charge on the nucleus is equal to the atom’s atomic number.

Scientists measure the energy of the emitted radiation by measuring the radiation’s wavelength. The radiation’s energy is directly related to its wavelength, which usually resembles that of an X ray for the inner electrons. By measuring the wavelength of the radiation that an atom’s inner electron emits, scientists can identify the atom by its atomic number. Scientists used this method in the 1910s to identify the atomic number of the elements and to place the elements in their correct order in the periodic table. The method is still used today to identify particularly heavy elements (those with atomic numbers greater than 100) that are produced a few atoms at a time in large accelerators.
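The 1910s method described above is Moseley's law, which ties the frequency of an atom's characteristic X ray to its atomic number. A hedged sketch of the standard approximation for the strongest (K-alpha) line:

```python
RYDBERG_HZ = 3.29e15  # Rydberg frequency, in hertz

def k_alpha_frequency(z):
    """Approximate frequency of the characteristic X ray emitted when an
    inner electron drops in an atom of atomic number z (Moseley's law:
    f = R * (3/4) * (z - 1)**2)."""
    return RYDBERG_HZ * 0.75 * (z - 1) ** 2

print(k_alpha_frequency(29))  # copper
print(k_alpha_frequency(42))  # molybdenum: heavier nucleus, higher frequency
```

Because the frequency grows with the square of the nuclear charge, measuring it pins down the atomic number unambiguously, which is how the elements were put in their correct order.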

Atomic nuclei emit radiation when they undergo radioactive decay, and nuclei usually emit radiation with very short wavelengths (and therefore high energy) when they decay. Often this radiation is in the form of gamma rays, a form of electromagnetic radiation with wavelengths even shorter than X rays. Once again, nuclei of different elements emit radiation of characteristic wavelengths. Scientists can identify nuclei by measuring this radiation. This method is especially useful in neutron activation analysis, a technique scientists use for identifying the presence of tiny amounts of elements. Scientists bombard samples that they wish to identify with neutrons. Some of the neutrons join the nuclei, making them radioactive. When the nuclei decay, they emit radiation that allows the scientists to identify the substance. Environmental scientists use neutron activation analysis in studying air and water pollution. Forensic scientists, who study evidence related to crimes, use this technique to identify gunshot residue and traces of poisons.

Particle accelerators are devices that increase the speed of a beam of elementary particles such as protons and electrons. Scientists use the accelerated beam to study collisions between particles. The beam can collide with a target of stationary particles, or it can collide with another accelerated beam of particles moving in the opposite direction. If physicists use the nucleus of an atom as the target, the particles and radiation produced in the collision can help them learn about the nucleus. The faster the particles move, the higher the energy they contain. If collisions occur at very high energy, it is possible to create particles never before detected. In certain circumstances, energy can be converted to matter, resulting in heavier particles after the collision.

Cyclotrons and linear accelerators are two of the most important kinds of particle accelerators. In a cyclotron, a magnetic field holds a beam of charged particles in a circular path. An electric field interacts with the particles’ electric charge to give them a boost of energy and speed each time the beam goes around. In linear accelerators, charged particles move in a straight line. They receive many small boosts of energy from electric fields as they move through the accelerator.
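The rate at which a particle circles in a cyclotron follows from the standard cyclotron-frequency formula, f = qB / (2πm). A minimal sketch using commonly tabulated proton values:

```python
import math

Q_PROTON = 1.602e-19  # proton charge, in coulombs
M_PROTON = 1.673e-27  # proton mass, in kilograms

def cyclotron_frequency(charge, mass, field):
    """Number of revolutions per second of a charged particle held in a
    magnetic field of the given strength (in teslas): f = qB / (2*pi*m)."""
    return charge * field / (2 * math.pi * mass)

print(cyclotron_frequency(Q_PROTON, M_PROTON, 1.0))  # about 15 million per second
```

A proton in a one-tesla field circles roughly 15 million times per second, and each circuit is a chance for the electric field to boost its energy.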

Bombarding nuclei with beams of neutrons forces the nuclei to absorb some of the neutrons and become unstable. The unstable nuclei then decay radioactively. The way atoms decay tells scientists about the original structure of the atom. Scientists can also deduce the size and shape of nuclei from the way particles scatter from nuclei when they collide. Another use of particle accelerators is to create new and exotic isotopes, including atoms of elements with very high atomic numbers that are not found in nature.

At higher energy levels, using particles moving at much higher speeds, scientists can use accelerators to look inside protons and neutrons to examine their internal structure. At these energy levels, accelerators can produce new types of particles. Some of these particles are similar to protons or neutrons but have larger masses and are very unstable. Others have a structure similar to the pion, the particle that is exchanged between the proton and neutron as part of the strong force that binds the nucleus together. By creating new particles and studying their properties, physicists have been able to deduce their common internal structure and to classify them using the theory of quarks. High-energy collisions between one particle and another often produce hundreds of particles. Experimenters have the challenging task of identifying and measuring all of these particles, some of which exist for only the tiniest fraction of a second.

Beginning with Democritus, who lived during the late 5th and early 4th centuries BC, Greek philosophers developed a theory of matter that was not based on experimental evidence, but on their attempts to understand the universe in philosophical terms. According to this theory, all matter was composed of tiny, indivisible particles called atoms (from the Greek word atomos, meaning ‘indivisible’). If a sample of a pure element was divided into smaller and smaller parts, eventually a point would be reached at which no further cutting would be possible: this was the atom of that element, the smallest possible bit of that element.

According to the ancient Greeks, atoms were all made of the same basic material, but atoms of different elements had different sizes and shapes. The sizes, shapes, and arrangements of a material’s atoms determined the material’s properties. For example, the atoms of a fluid were smooth so that they could easily slide over one another, while the atoms of a solid were rough and jagged so that they could attach to one another. Other than the atoms, matter was empty space. Atoms and empty space were believed to be the ultimate reality.

Although the notion of atoms as tiny bits of elemental matter is consistent with modern atomic theory, the researchers of prior eras did not understand the nature of atoms or their interactions in materials. For centuries scientists did not have the methods or technology to test their theories about the basic structure of matter, so people accepted the ancient Greek view.

The work of British chemist John Dalton at the beginning of the 19th century revealed some of the first clues about the true nature of atoms. Dalton studied how quantities of different elements, such as hydrogen and oxygen, could combine to make other substances, such as water. In his book A New System of Chemical Philosophy (1808), Dalton made two assertions about atoms: (1) atoms of each element are all identical to one another but different from the atoms of all other elements, and (2) atoms of different elements can combine to form more complex substances.

Dalton’s idea that different elements had different atoms was unlike the Greek idea of atoms. The characteristics of Dalton’s atoms determined the chemical and physical properties of a substance, no matter what the substance’s form. For example, carbon atoms can form both hard diamonds and soft graphite. In the Greek theory of atoms, diamond atoms would be very different from graphite atoms. In Dalton’s theory, diamond atoms would be very similar to graphite atoms because both substances are composed of the same chemical element.

While developing his theory of atoms, Dalton observed that two elements can combine in more than one way. For example, modern scientists know that carbon monoxide (CO) and carbon dioxide (CO2) are both compounds of carbon and oxygen. According to Dalton’s experiments, the quantities of an element needed to form different compounds are always whole-number multiples of one another. For example, two times as much oxygen is needed to form a litre of CO2 as is needed to form a litre of CO. Dalton correctly concluded that compounds were created when atoms of pure elements joined together in fixed proportions to form units that scientists today call molecules.
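Dalton's whole-number ratios can be checked with simple arithmetic. The sketch below uses modern atomic masses (not values from Dalton's own experiments) to compare the oxygen contained in carbon monoxide and carbon dioxide:

```python
# Dalton's law of multiple proportions, illustrated with CO and CO2.
# Masses of oxygen that combine with a fixed mass (12 g) of carbon,
# using modern atomic masses (an illustrative, not historical, choice):
mass_O_in_CO = 16.0    # g of O per 12 g of C in carbon monoxide
mass_O_in_CO2 = 32.0   # g of O per 12 g of C in carbon dioxide

ratio = mass_O_in_CO2 / mass_O_in_CO
print(ratio)  # a small whole number, as Dalton's law requires
```

Whatever the compound, the oxygen quantities stand in a small whole-number ratio - here exactly 2 - which is what led Dalton to fixed proportions of atoms.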

Scientists in the early 19th century struggled in another area of atomic theory. They tried to understand how atoms of a single element could exist in solid, liquid, and gaseous forms. Scientists correctly proposed that atoms in a solid attract each other with enough force to hold the solid together, but they did not understand why the atoms of liquids and gases did not attract each other as strongly. Some scientists theorized that the forces between atoms were attractive at short distances (such as when the atoms were packed very close together to form a solid) and repulsive at larger distances (such as in a gas, where the atoms are on the average relatively far apart).

Scientists had difficulty solving the problem of states of matter because they did not adequately understand the nature of heat. Today scientists recognize that heat is a form of energy, and that different amounts of this energy in a substance lead to different states of matter. In the 19th century, however, people believed that heat was a material substance, called caloric, that could be transferred from one object to another. This explanation of heat was called the caloric theory. Dalton used the caloric theory to propose that each molecule of a gas is surrounded by caloric, which exerts a repulsive force on other molecules. According to Dalton’s theory, as a gas is heated, more caloric is added to the gas, which increases the repulsive force between the molecules. More caloric would also cause the gas to exert a greater pressure on the walls of its container, in accordance with scientists’ experiments.

This early explanation of heat and states of matter broke down when experiments in the middle of the 19th century showed that heat could change into energy of motion. The laws of physics state that the amount of energy in a system cannot increase, so scientists had to accept that heat must be energy, not a substance. This revelation required a new theory of how atoms in different states of matter behave.

In the early 19th century Italian chemist Amedeo Avogadro made an important advance in the understanding of how atoms and molecules in a gas behave. Avogadro began his work from a theory developed by Dalton. Dalton’s theory proposed that a gaseous compound, formed by combining equal numbers of atoms of two elements, should have the same number of molecules as the atoms in one of the original elements. For example, ten atoms of the element hydrogen (H) combine with ten atoms of chlorine (Cl) to form ten gaseous hydrogen chloride (HCl) molecules.

In 1811 Avogadro developed a law of physics that seemed to contradict Dalton’s theory. (Dalton’s law states that the pressure of a mixture of gases is equal to the sum of the partial pressures the gases would exert if each were present in the container alone.) Avogadro’s law states that equal volumes of different gases contain the same number of particles (atoms or molecules) if both gases are at the same temperature and pressure. It follows that the volume occupied at a given temperature and pressure by one mole of any gas is the same for all gases (22.4 × 10⁻³ m³ at STP). Furthermore, an ideal gas is defined for the purposes of thermodynamics as one that obeys Boyle’s law and that, in addition, has an internal energy independent of the volume it occupies - that is, it obeys Joule’s law of internal energy. From the point of view of the kinetic theory, both requirements amount to saying that the intermolecular attractions are negligible, but the first also requires that the molecules be of negligible volume. An ideal gas obeys Boyle’s law, Joule’s law of internal energy, Dalton’s law, and Avogadro’s hypothesis exactly, whereas real gases obey them only as their pressure tends to zero.

The equation of state for one mole of an ideal gas is given by:

pV = RT,

where R is the molar gas constant. The isotherms of an ideal gas on a p-V graph therefore form a family of rectangular hyperbolas.
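The molar volume of 22.4 × 10⁻³ m³ at STP follows directly from this equation. A minimal numeric check, using standard values of R and the conventional STP temperature and pressure:

```python
# Molar volume of an ideal gas at STP (0 degrees C, 1 atm) from pV = RT,
# i.e. V = RT / p for one mole.
R = 8.314        # molar gas constant, J/(mol*K)
T = 273.15       # standard temperature, K
p = 101325.0     # standard pressure, Pa

V = R * T / p    # volume of one mole, m^3
print(f"{V:.4f} m^3 (= {V * 1000:.1f} L)")
```

The result, about 0.0224 m³, is the familiar 22.4 litres per mole of any ideal gas.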

In Dalton’s experiment, the volume of the original vessels containing the hydrogen or chlorine gases was the same as the volume of the vessel containing the hydrogen chloride gas. The pressures of the original hydrogen and chlorine gases were equal, but the pressure of the hydrogen chloride gas was twice as great as that of either original gas. According to Avogadro’s law, this doubled pressure would mean that there were twice as many hydrogen chloride gas particles as there had been chlorine particles prior to their combination.

To reconcile the results of Dalton’s experiment with his new rule, Avogadro was forced to conclude that the original vessels of hydrogen or chlorine contained only half as many particles as Dalton had thought. Dalton, however, knew the total weight of each gas in the vessels, as well as the weight of an individual atom of each gas, so he knew the total number of atoms of each gas that was present in the vessels. Avogadro reconciled the fact that there were twice as many atoms as there were particles in the vessels by proposing that gases such as hydrogen and chlorine are really made up of molecules of hydrogen and chlorine, with two atoms in each molecule. Today scientists write the chemical symbols for hydrogen and chlorine as H2 and Cl2, respectively, indicating that there are two atoms in each molecule. One molecule of hydrogen and one molecule of chlorine combine to form two molecules of hydrogen chloride (H2 + Cl2 → 2HCl). The sample of hydrogen chloride contains twice the number of particles as either the hydrogen or chlorine because two molecules of hydrogen chloride form when a molecule of hydrogen combines with a molecule of chlorine.

The work of Dalton and Avogadro led to a consistent view of the quantities of different gases that could be combined to form compounds, but scientists still did not understand the nature of the forces that attracted the atoms to one another in compounds and molecules. Scientists suspected that electrical forces might have something to do with that attraction, but they found it difficult to understand how electrical forces could allow two identical, neutral hydrogen atoms to attract one another to form a hydrogen molecule.

In the 1830s, British physicist Michael Faraday took the first significant step toward appreciating the importance of electrical forces in compounds. Faraday placed two electrodes connected to opposite terminals of a battery into a solution of water containing a dissolved compound. As the electric current flowed through the solution, Faraday observed that one of the elements that comprised the dissolved compound became deposited on one electrode while the other element became deposited on the other electrode. The electric current provided by the electrodes undid the coupling of atoms in the compound. Faraday also observed that the quantity of each element deposited on an electrode was directly proportional to the total quantity of electric charge that flowed through the solution: the stronger the current and the longer it flowed, the more material became deposited on the electrode. This discovery made it clear that electrical forces must be in some way responsible for the joining of atoms in compounds.
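Faraday's proportionality is usually written today as m = (Q/F)(M/z), where Q is the charge passed, F the Faraday constant, M the molar mass of the deposited element, and z the charge of its ion. The sketch below is illustrative only: the quantity of charge is assumed, and copper is chosen as the deposited element (neither appears in the text above):

```python
# Faraday's first law of electrolysis: mass deposited on an electrode is
# proportional to the total charge passed.  m = (Q / F) * (M / z).
F = 96485.0    # Faraday constant, C/mol
Q = 1000.0     # total charge passed, C (assumed for illustration)
M_Cu = 63.55   # molar mass of copper, g/mol
z = 2          # a Cu2+ ion carries two elementary charges

m = (Q / F) * (M_Cu / z)
print(f"{m:.3f} g of copper deposited")
```

Doubling the charge doubles the deposited mass, which is exactly the proportionality Faraday observed.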

Despite these significant discoveries, most scientists did not immediately accept that atoms as described by Dalton, Faraday, and Avogadro were responsible for the chemical and physical behaviour of substances. Before the end of the 19th century, many scientists believed that all chemical and physical properties could be determined by the rules of heat, an understanding of atoms closer to that of the Greek philosophers. The development of the science of thermodynamics (the scientific study of heat) and the recognition that heat was a form of energy eliminated the role of caloric in atomic theory and made atomic theory more acceptable. The new theory of heat, called the kinetic theory, said that the atoms or molecules of a substance move faster, or gain kinetic energy, as heat energy is added to the substance. Nevertheless, a small but powerful group of scientists still did not accept the existence of atoms - they regarded atoms as convenient mathematical devices that explained the chemistry of compounds, not as real entities.

In 1908 French physicist Jean-Baptiste Perrin performed the final experiments that helped prove the atomic theory of matter. Perrin observed the irregular wiggling of tiny grains suspended in a liquid (a phenomenon called Brownian motion) and, building on Einstein’s 1905 analysis, correctly explained that the wiggling was the result of molecules of the fluid colliding with the grains. This experiment showed that the idea that materials were composed of real atoms in thermal motion was in fact correct.

As scientists began to accept atomic theory, researchers turned their efforts to understanding the electrical properties of the atom. Several scientists, most notably British scientist Sir William Crookes, studied the effects of sending electric current through a gas. The scientists placed a very small amount of gas in a sealed glass tube. The tube had electrodes at either end. When an electric current was applied to the gas, a stream of electrically charged particles flowed from one of the electrodes. This electrode was called the cathode, and the particles were called cathode rays.

At first scientists believed that the rays were composed of charged atoms or molecules, but experiments showed that the cathode rays could penetrate thin sheets of material, which would not be possible for a particle as large as an atom or a molecule. British physicist Sir Joseph John Thomson measured the velocity of the cathode rays and showed that they were much too fast to be atoms or molecules. No known force could accelerate a particle as heavy as an atom or a molecule to such a high speed. Thomson also measured the ratio of the charge of a cathode ray to the mass of the cathode ray. The value he measured was about 1,000 times larger than any previous measurement associated with charged atoms or molecules, indicating that within cathode rays particularly tiny masses carried relatively large amounts of charge. Thomson studied different gases and always found the same value for the charge-to-mass ratio. He concluded that he was observing a new type of particle, which carried a negative electric charge but was about a thousand times less massive than the lightest known atom. He also concluded that these particles were constituents of all atoms. Today scientists know these particles as electrons, and Thomson is credited with their discovery.

Scientists realized that if all atoms contain electrons but are electrically neutral, atoms must also contain an equal quantity of positive charge to balance the electrons’ negative charge. Furthermore, if electrons are indeed much less massive than even the lightest atom, then this positive charge must account for most of the mass of the atom. Thomson proposed a model by which this phenomenon could occur: He suggested that the atom was a sphere of positive charge into which the negative electrons were embedded, like raisins in a loaf of raisin bread. In 1911 British scientist Ernest Rutherford set out to test Thomson’s proposal by firing a beam of charged particles at atoms.

Rutherford chose alpha particles for his beam. Alpha particles are heavy particles with twice the positive charge of a proton. Alpha particles are now known to be the nuclei of helium atoms, which contain two protons and two neutrons. If Thomson’s model of the atom was correct, Rutherford theorized, the electric charge and the mass of the atoms would be too spread out to deflect the alpha particles significantly. Rutherford was quite surprised to observe something very different. Most of the alpha particles did indeed change their paths by only a small angle, but occasionally an alpha particle bounced back in the opposite direction. The alpha particles that bounced back must have struck something at least as heavy as themselves. This led Rutherford to propose a very different model for the atom. Instead of supposing that the positive charge and mass were spread throughout the volume of the atom, he theorized that it was concentrated in the centre of the atom. Rutherford called this concentrated region of electric charge the nucleus of the atom.

In the span of 100 years, from Dalton to Rutherford, the basic ideas of atomic structure evolved from very primitive concepts of how atoms combined with one another to an understanding of the constituents of atoms - a positively charged nucleus surrounded by negatively charged electrons. The interactions between the nucleus and the electrons still required study. It was natural for physicists to model the atom, in which tiny electrons orbit a much more massive nucleus, after a familiar structure such as the solar system, in which planets orbit around a much more massive Sun. Rutherford’s model of the atom did indeed resemble a tiny solar system. The only difference between early models of the nuclear atom and the solar system was that atoms were held together by electromagnetic force, while gravitational force holds together the solar system.

Danish physicist Niels Bohr used new knowledge about the radiation emitted from atoms to develop a model of the atom significantly different from Rutherford’s model. Scientists of the 19th century discovered that when an electrical discharge passes through a small quantity of a gas in a glass tube, the atoms in the gas emit light. This radiation occurs only at certain discrete wavelengths, and different elements and compounds emit different wavelengths. Bohr, working in Rutherford’s laboratory, set out to understand the emission of radiation at these wavelengths based on the nuclear model of the atom.

Using Rutherford’s model of the atom as a miniature solar system, Bohr developed a theory by which he could predict the same wavelengths scientists had measured radiating from atoms with a single electron. However, when conceiving this theory, Bohr was forced to make some startling conclusions. He concluded that because atoms emit light only at discrete wavelengths, electrons could only orbit at certain designated radii, and light could be emitted only when an electron jumped from one of these designated orbits to another. Both of these conclusions were in disagreement with classical physics, which imposed no strict rules on the size of orbits. To make his theory work, Bohr had to propose special rules that violated the rules of classical physics. He concluded that, on the atomic scale, certain preferred states of motion were especially stable. In these states of motion an orbiting electron (contrary to the laws of electromagnetism) would not radiate energy.

At the same time that Bohr and Rutherford were developing the nuclear model of the atom, other experiments indicated similar failures of classical physics. These experiments included the emission of radiation from hot, glowing objects (called thermal radiation) and the release of electrons from metal surfaces illuminated with ultraviolet light (the photoelectric effect). Classical physics could not account for these observations, and scientists began to realize that they needed to take a new approach. They called this new approach quantum mechanics, and they developed a mathematical basis for it in the 1920s. The laws of classical physics work perfectly well on the scale of everyday objects, but on the tiny atomic scale, the laws of quantum mechanics apply.

The quantum mechanical view of atomic structure maintains some of Rutherford and Bohr’s ideas. The nucleus is still at the centre of the atom and provides the electrical attraction that binds the electrons to the atom. Contrary to Bohr’s theory, however, the electrons do not circulate in definite planet-like orbits. The quantum-mechanical approach acknowledges the wavelike character of electrons and provides the framework for viewing the electrons as fuzzy clouds of negative charge. Electrons still have assigned states of motion, but these states of motion do not correspond to fixed orbits. Instead, they tell us something about the geometry of the electron cloud - its size and shape and whether it is spherical or bunched in lobes like a figure eight. Physicists called these states of motion orbitals. Quantum mechanics also provides the mathematical basis for understanding how atoms that join together in molecules share electrons. Nearly 100 years after Faraday’s pioneering experiments, the quantum theory confirmed that it is indeed electrical forces that are responsible for the structure of molecules.

Two of the rules of quantum theory that are most important to explaining the atom are the idea of wave-particle duality and the exclusion principle. French physicist Louis de Broglie first suggested that particles could be described as waves in 1924. In the same decade, Austrian physicist Erwin Schrödinger and German physicist Werner Heisenberg expanded de Broglie’s ideas into formal, mathematical descriptions of quantum mechanics. The exclusion principle was developed by Austrian-born American physicist Wolfgang Pauli in 1925. The Pauli exclusion principle states that no two electrons in an atom can have exactly the same characteristics.

The combination of wave-particle duality and the Pauli exclusion principle sets up the rules for filling electron orbitals in atoms. The way electrons fill up orbitals determines the number of electrons that end up in the atom’s valence shell. This in turn determines an atom’s chemical and physical properties, such as how it reacts with other atoms and how well it conducts electricity. These rules explain why atoms with similar numbers of electrons can have very different properties, and why chemical properties reappear again and again in a regular pattern among the elements.
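The counting implied by these filling rules can be made concrete. Assuming the standard quantum numbers n, l, and m together with two spin states per orbital (standard atomic physics, not stated explicitly in the text above), the Pauli exclusion principle gives each shell a capacity of 2n² electrons:

```python
# Electron capacity of shell n, counting the quantum states allowed by
# the Pauli exclusion principle: for each n there are n values of l
# (0 to n-1), 2l+1 orbitals for each l, and 2 spin states per orbital.
def shell_capacity(n):
    return sum(2 * (2 * l + 1) for l in range(n))

for n in range(1, 5):
    print(n, shell_capacity(n))   # capacities 2, 8, 18, 32 = 2*n^2
```

The repeating capacities (2, 8, 18, 32) underlie the periodic reappearance of chemical properties among the elements.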

High-energy particle physicists are using particle accelerators measuring 8 km (5 mi) across to study something billions of times too small to see. Why? To find out what everything is made of and where it comes from. These physicists are constructing and testing new theories about objects called superstrings. Superstrings may explain the nature of space and time and of everything in them, from the light you are using to read these words to black holes so dense that they can capture light forever. Possibly the smallest objects allowed by the laws of physics, superstrings may tell us about the largest event of all time: the big bang, and the creation of the universe.

These are exciting ideas, still strange to most people. For the past 100 years physicists have descended to deeper and deeper levels of structure, into the heart of matter and energy and of existence itself.

The world around us, full of books, computers, mountains, lakes, and people, is made by rearranging slightly more than 100 chemical elements. Oxygen, hydrogen, carbon, and nitrogen are elements especially important to living things; silicon is especially important to computer chips.

The smallest recognizable form in which a chemical element occurs is the atom, and the atoms of one element are unlike the atoms of any other element. Every atom has a small core called a nucleus around which electrons swarm. Electrons, tiny particles with a negative electrical charge, determine the chemical properties of an element - that is, how it interacts with other atoms to make the things around us. Electrons are also what move through wires to make light, heat, and video games.

In 1869, before anyone knew anything about nuclei or electrons, Russian chemist Dmitry Mendeleyev grouped the elements according to their physical qualities and discovered the periodic law. He was able to predict the qualities of elements that had not yet been discovered. By the early 1900s scientists had discovered the nucleus and electrons.

Atoms stick together and form larger objects called molecules because of a force called electromagnetism. The best-known form of electromagnetism is radiation: light, radio waves, X rays, and infrared and ultraviolet radiation.

Modern physics starts with light and other forms of electromagnetic radiation. In 1900 German physicist Max Planck proposed the quantum theory, which says that light comes in units of energy called quanta. As we will explain, these units of light are waves and they are also particles. Light is simultaneously energy and matter. And so is everything else.

It was Albert Einstein who first proposed (in 1905) that Planck's units of light can be considered particles. He named these particles photons. In the same year, Einstein published what is known as the special theory of relativity. According to this theory, the speed of light is actually the fastest that anything in the universe can go, and all forms of electromagnetic radiation are forms of light, moving at the same speed. What differentiates radio waves, visible light, and X rays is their energy. This energy is directly related to the wave’s length. Light waves, like ocean waves, have peaks and troughs that repeat at regular intervals, and wavelength is the distance between each pair of peaks (or troughs). The shorter the wavelength, the higher the energy.
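The inverse relation between wavelength and energy described above can be written as E = hc/λ. A brief sketch with illustrative wavelengths chosen for radio, visible, and X-ray light (the specific values are assumptions, picked only to span the spectrum):

```python
# Photon energy from wavelength: E = h * c / wavelength.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

# Representative wavelengths (illustrative choices)
waves = {"radio (1 m)": 1.0,
         "visible (500 nm)": 500e-9,
         "X ray (0.1 nm)": 0.1e-9}

for name, lam in waves.items():
    E = h * c / lam
    print(f"{name}: {E:.3e} J per photon")
```

The shorter the wavelength, the higher the photon energy, which is why X rays carry far more energy than radio waves.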

How does this relate to our story? It turns out that the process by which electrons interact is an exchange of photons (particles of light). Therefore we can study electrons by probing them with photons.

To really understand what things are made of, we must probe them or move them around and thus learn how they work. In the case of electrons, physicists probe them with photons, the particles that carry the electromagnetic force.

While some physicists studied electrons and photons, others pondered and probed the atomic nucleus. The nucleus of each chemical element contains a distinctive number of positively charged protons and a number of uncharged neutrons that can vary slightly from atom to atom. Protons and neutrons are the source of radioactivity and of nuclear energy. In 1964 physicists suggested that protons and neutrons are made of still smaller particles they called quarks.

Probing protons and neutrons requires particles with extremely high energies. Particle accelerators are large machines for bringing particles to these high energies. These machines have to be big, because they accelerate particles by applying force many times, over long distances. Some particle accelerators are the largest machines ever constructed. This is rather ironic given that these are delicate scientific instruments designed to probe the shortest distances ever investigated!

The proposal and acceptance of quarks was a major step in putting together what is called the standard model of particles and forces. This unified theory describes all of the fundamental particles, from which everything is made, and how they interact. There are twelve kinds of fundamental particles: six kinds of quarks and six kinds of leptons, including the electron.

Four forces are believed to control all the interactions of these fundamental particles. They are the strong force, which holds the nucleus together; the weak force, responsible for radioactivity; the electromagnetic force, which provides electric charge and binds electrons to atomic nuclei; and gravitation, which holds us on Earth. The standard model identifies a force-carrying particle to correspond with three of these forces. The photon, for example, carries the electromagnetic force. Physicists have not yet detected a particle that carries gravitation.

Powerful mathematical techniques called gauge field theories allow physicists to describe, calculate, and predict the interactions of these particles and forces. Gauge theories combine quantum physics and special relativity into consistent equations that produce extremely accurate results. The extraordinary precision of quantum electrodynamics, for example, has filled our world with ultra-reliable lasers and transistors.

The mathematical rules that come together in the standard model can explain every particle physics phenomenon that we have ever seen. Physicists can explain forces; they can explain particles. But they cannot yet explain why forces and particles are what they are. Basic properties, such as the speed of light, must be taken from measurements. And physicists cannot yet provide a satisfactory description of gravity.

The basic behaviour of gravity was taught to us by English physicist Sir Isaac Newton. Albert Einstein, after helping to lay the foundations of quantum physics with his explanation of the photoelectric effect, clarified and extended Newton’s explanation of gravity in 1915 with his own description, known as general relativity. Not even Einstein, however, could bring general relativity and quantum physics together into a single unified theory. Since everything else is governed by quantum physics on small scales, what is the quantum theory of gravity?

Quantum gravity may be defined as a theory of gravitation that is consistent with quantum mechanics. The subject, still in its infancy, has no completely satisfactory theory. In conventional quantum gravity, the gravitational force is mediated by a massless spin-2 particle, called the graviton. The internal degrees of freedom of the graviton require it to be the quantum of a set of ten fields h_ij(x), with h_ij(x) = h_ji(x) and i, j = 1, . . ., 4. In general relativity, the curvature of space-time is described by the ten components of a ‘metric tensor’. The field components h_ij(x) represent the deviations of the metric tensor from that of flat space. This formulation of general relativity reduces it to a quantum field theory, which has a regrettable tendency to produce infinities for measurable quantities. However, unlike other quantum field theories, quantum gravity cannot appeal to renormalization procedures to make sense of these infinities. It has been shown that renormalization procedures fail for theories such as quantum gravity, in which the coupling constants have the dimensions of a positive power of length. The coupling constant for general relativity is the Planck length:

Lp = (Għ/c³)½ ≈ 10⁻³⁵ m.
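This value can be checked numerically. The sketch below uses standard values of G, ħ, and c (the conventional definition uses the reduced Planck constant ħ):

```python
import math

# Planck length L_p = sqrt(G * hbar / c^3), the natural quantum length
# scale of gravity.
G = 6.674e-11      # gravitational constant, m^3/(kg*s^2)
hbar = 1.055e-34   # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s

L_p = math.sqrt(G * hbar / c**3)
print(f"Planck length: {L_p:.2e} m")
```

The result, about 1.6 × 10⁻³⁵ m, is some twenty orders of magnitude smaller than a proton.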

Supersymmetry has been suggested as a structure that could be free from these pathological infinities. Many theorists believe that an effective supergravity field theory may emerge, in which the Einstein field equations are no longer valid and general relativity is required to appear only as the low-energy limit. The resulting theory may be structurally different from anything that has been considered so far. Supersymmetric string theory (or superstrings) is an extension of the ideas of supersymmetry to one-dimensional string-like entities that can interact with each other and scatter according to a precise set of laws. The ‘normal modes’ of superstrings represent an infinite set of ‘normal’ elementary particles whose masses and spins are related in a special way. Thus the graviton is only one of the string modes: when the string-scattering processes are analysed in terms of their particle content, the low-energy graviton scattering is found to be the same as that computed from supersymmetric gravity. The graviton mode may still be related to the geometry of the space-time in which the string vibrates, but it remains to be seen whether the other, massive, members of the set of ‘normal’ particles also have a geometrical interpretation. The intricacy of this theory stems from the requirement of a space-time of at least ten dimensions to ensure internal consistency. It has been suggested that there are the normal four dimensions, with the extra dimensions tightly ‘curled up’, presumably into circles of Planck-length size.

No one, as yet, has proposed a satisfactory theory of quantum gravity, though physicists have been searching for one for a long time.

At first, this might not seem to be an important problem. Compared with other forces, gravity is extremely weak. We are aware of its action in everyday life because its pull is proportional to mass, and Earth has a huge amount of mass and hence a strong gravitational pull. Fundamental particles have tiny masses and hence a minuscule gravitational pull. So couldn’t we just ignore gravity when studying fundamental particles? The ability to ignore gravity on this scale is why we have made so much progress in particle physics over the years without possessing a theory of quantum gravity.
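To put "extremely weak" in numbers, one can compare the gravitational and electrostatic forces between two protons; since both forces fall off as 1/r², the separation cancels in the ratio. A rough sketch with standard constants:

```python
# Ratio of gravitational to electrostatic force between two protons.
# Both obey an inverse-square law, so the separation r cancels out.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
k = 8.988e9      # Coulomb constant, N m^2 C^-2
m_p = 1.673e-27  # proton mass, kg
e = 1.602e-19    # elementary charge, C

ratio = (G * m_p**2) / (k * e**2)
print(f"F_gravity / F_electric = {ratio:.1e}")  # ≈ 8e-37
```

Gravity between two protons is thus about 36 orders of magnitude weaker than their electrical repulsion, which is why it can safely be neglected in particle-physics calculations.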

There are several reasons, however, why we cannot ignore gravity forever. One reason is simply that scientists want to know the whole story. A second reason is that gravity, as Einstein taught us, is the essential physics of space and time. If this physics is not subject to the same quantum laws that all other physics is subject to, something is wrong somewhere. A third reason is that an understanding of quantum gravity is necessary to deal with some important questions in cosmology: for example, how did the universe get to be the way it is, and why did galaxies form?

Gravitation has been shown to propagate in waves, and physicists theorize the existence of a corresponding particle, the graviton. The force of gravity, like everything else, has a natural quantum length. For gravity this is the Planck length, about 10⁻³⁵ m: roughly a hundred billion billion times smaller than a proton.

We can't build an accelerator to probe that distance with today's technology: the proportions of size and energy required mean it would stretch from here to the stars. But we know that the universe began with the big bang, when all matter and force originated. Everything we know about today follows from the period after the big bang, when the universe expanded. Everything we know indicates that in the fractions of a second following the big bang, the universe was extremely small and dense. At the earliest times, the entire universe was no larger across than the quantum length of gravity. If we are to understand the true nature of where everything comes from and how it really fits together, we must understand quantum gravity.
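The scale of the problem can be estimated: probing a distance L requires an energy of roughly ħc/L, which at the Planck length is the Planck energy, √(ħc⁵/G). The sketch below computes this figure from standard constants.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34      # reduced Planck constant, J s
c = 2.998e8            # speed of light, m/s
J_PER_GEV = 1.602e-10  # joules per GeV

# Energy needed to probe a length L is roughly hbar * c / L.
# At the Planck length this becomes the Planck energy, sqrt(hbar * c^5 / G).
E_planck_J = math.sqrt(hbar * c**5 / G)
E_planck_GeV = E_planck_J / J_PER_GEV
print(f"Planck energy ≈ {E_planck_GeV:.1e} GeV")  # ≈ 1.2e19 GeV
```

For comparison, the largest present-day colliders reach collision energies of roughly 10⁴ GeV, some fifteen orders of magnitude short of the Planck energy.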

These questions may seem almost metaphysical, but physicists now suspect that research in this direction will answer many other questions about the standard model, such as why there are so many different fundamental particles. Other questions are more immediately practical. Our control of technology arises from our understanding of particles and forces. Answers to physicists’ questions could increase computing power or help us find new sources of energy. They will shape the 21st century as quantum physics shaped the 20th.

Among the most promising new theories is the idea that everything is made of fundamental ‘strings’, rather than of another layer of tiny particles. The best analogy for these minute entities is a guitar or violin string, which vibrates to produce notes of different frequencies and wavelengths. Superstring theory proposes that if we were able to look closely enough at a fundamental particle - at quantum-length distances - we would see a tiny, vibrating loop.

In this view, all the different types of fundamental particles that we find in the standard model are really just different vibrations of the same string, which can split and join in ways that change its apparent nature. This is the case not only for particles of matter, such as quarks and electrons, but also for force-carrying particles, such as photons.

Superstring theory is attractive because it unifies everything we have learned in a simple way. In its details, the theory is extremely complicated but very promising. For example, superstring theory very naturally describes the graviton among its vibrations, and it also explains the quantum properties of many types of black holes. There are also signs that the quantum length of gravity is really the smallest physically possible distance. Below this scale, points in space and time are no longer connected in sequence, so distances cannot be measured or described. The very notions of space, time, and distance seem to stop making sense.

Recent discoveries have shown that the five leading versions of superstring theory are all contained within a single, more powerful framework known as M-theory. M-theory says that entities mathematically resembling membranes and other extended objects may also be important. Physicists are still working out the details, and it will take many years to be confident that this approach is correct and comprehensive. Much remains to be learned, and surprises are guaranteed. In the quest to probe these small distances, experimentally and theoretically, our understanding of nature is forever enriched, and we approach at least a part of the ultimate truth.

Finally, let us mention that the popularization of science is nothing other than an endeavour to present scientific ideas in such a way that everyone (especially non-scientists) can grasp the fundamental concepts and have an idea of what science in essence is. Of course, no one really knows what 'science' is, not even the scientists themselves. Philosophers trying to describe what the scientific method could be, and others trying to lay down what the scientific method should be, found out (it took them a lot of time) that there is no such thing as the 'one and only' scientific approach. It follows that a distinct and unique definition is impossible. Nevertheless, the phenomenon 'science' and its results do exist. Although nobody can tell exactly what 'science' is all about, everyone should have an idea of it anyway. The question at stake here is whether this is possible and, if so, to what extent.

Consider the following. The best map one can make is, evidently, a 1:1-scale parallel projection of the surface one wants to chart. But such a map is clearly unwieldy and quite superfluous. In extremis, the most accurate image of an object is the object itself. Mutatis mutandis, an attempt to popularize science should then present science exactly as it is. Ilya Prigogine strongly adheres to this position (Prigogine, 1996). A 'good' book on general relativity theory would then be nothing other than a complete historical overview (in extenso) of all papers on the topic. However, the reader is then supposed to be a scientist already, which makes the whole enterprise pointless.

So one has to take the limited capabilities of the prospective reader into consideration. Since the reader is not a scientist, a 'translation' has to be made to render science more accessible. Besides this, a selection is also imperative, because the scientific domain is quite vast. Inevitably, a (major?) part of the information needed to get a reasonably complete view of science is lost in the process. One cannot give a full account of science.

However, according to Carl Sagan (1996), it is possible to popularize science to a great extent, for instance by means of a comparison of science with baseball. All fields of interest, from Newtonian mechanics to sociological models, can be effectively explained in this way. But in doing so, the differences between the disciplines tend to disappear. The reader might well understand Newtonian mechanics and several sociological models by means of an analogy, but will certainly fail to understand science itself. There is more to understanding science than merely the comprehension of the respective contents of the disciplines.

Out of all this, free interpretation results. Popularization necessarily implies interpretation in two ways. Firstly, since there is no 'standard interpretation' at hand, the author of a vulgarizing book has to write down his or her own view of science and scientific matters alike. As Dennis Dieks wrote regarding the popularization of quantum mechanics, for the science populariser "there is no 'scientific picture' which he can attempt to represent with as little distortion as possible" (Dieks, 1996). Secondly, the reader makes an interpretation of his or her own. The distance between 'science' and the layperson thus seems to grow bigger and bigger.

Still, popularization seems to be possible. It is possible to popularize all specific scientific ideas (one at a time), and it is possible to show how a particular specialist works (one at a time). Ideal popularization is therefore possible, provided that the reader has enough time (to compensate for the loss by selection) and provided that enough popularisers can be found to present the scientific content adequately (to compensate for the loss by translation). The key to all this is indeed time, which should not be surprising, since every learning process takes a lot of time. Hence, popularization of science does not have to be utopian. But are we on the right track with the existing attempts?

In 1991 Eric Lerner published "The Big Bang Never Happened," a first-class example of the wrong type of popularization. Lerner, a science journalist, came up with a strange but viable idea, seconded by a couple of astronomers. However, without any sense of self-critique, the thesis that standard theory was completely crazy was presented by means of a popular work on astronomy. The title was, from a publisher's point of view, brilliantly chosen. It is always exciting to hear that a standard theory is being overthrown (another example is Boslough's "Masters of Time. How Wormholes, snakewood and assaults on the big bang have brought mystery back to the cosmos."). 'Science', i.e. standard science, seems to fail. Hence, by presenting non-scientific thoughts as scientific ones in a popular manner and at the same time deriding a standard view, the layperson is harshly misled. Furthermore, with one particular standard view down the drain, other established theories (in other fields of science) could lose their convincing power as well, just because they are 'standard' theories. In this context, it is good to know that standard big bang cosmology is not what most people think it is. It is a rather humble theory, based on the Hubble relation (taking into account several parameters, which means that the redshift does not necessarily imply an overall expansion). The theory tries to fit this and a few other empirical facts (such as the cosmic background radiation) into a consistent whole. The parametrization makes it a very flexible theory (standard theory, for example, does not predict a specific age for the universe), yet one that remains easy to falsify.

"A Brief History of Time," Stephen Hawking's 1988 best-seller, is an example of an excellent populariser. Following the advice of his publisher, Hawking only kept one formula in the book, since every mathematical expression would have halved the sales (White, 1992). In view of popularization, it is, of course, very important to reach a public as broad as possible. Hawking succeeded in this: in 1997 a twenty first revised edition appeared.

Do the sales make a good popularization? Yes and no. Pseudo-science has even better sales figures and can hardly be recognized as good popularization of science. It is equally important that the content of the book is unbiased, non-speculative and clear, and that it concerns standard science. Hawking's 1988 attempt scores highly: it is objective and perspicuous. With one exception: Hawking does not state that his thoughts on the arrow of time are speculative.

In 1994, the CD-ROM "A Brief History of Time. An Interactive Adventure" appeared. Without effort, one can now discover Hawking's universe. The complete text is included, and animated graphics make the most complex concepts crystal-clear. But two disadvantages must be considered. For one, the CD-ROM is not available to everyone. And secondly, the majority of users will read the book only partially. As far as content goes, it is a superb popularization; it is the medium that does not satisfy.

The twentieth edition of "A Brief History of Time" was a revised edition. Hawking includes the latest theoretical ideas and empirical results in this version (which is no longer a pocketbook edition). Most importantly, though, the book is illustrated with beautiful photographs and lucid diagrams. The benefit is evidently twofold: pictures attract the layman and, once the book is bought and one starts reading it, they make understanding much easier. The illustrations of the first edition seem very cheap in comparison with the latest publication. The new text, though, does not differ much from the original edition, and the style is still the same: very transparent and fluent.

All this contrasts sharply with Hawking's "The Nature of Space and Time," a book he wrote with his close friend and colleague Roger Penrose. Actually, the book is a compilation of a series of talks, a debate between the two authors held in 1994 at the University of Cambridge. The result is an unfortunate example of bad popularization. Needless to say, it served neither the popularization of science nor its audience. The text is obscured by ambiguous, sometimes even shabby illustrations and very complex formulae (tensors and differential forms). Only graduates in mathematics or physics can grasp the whole story. This would not be a problem if the book were not presented as another 'best-seller' written by the most famous populariser of science.

Lerner's publication shows that popularization is hazardous: pseudo-science lurks. The layman, intrigued by science and eager to learn, does not know this and should be protected. But how? Popularisers should make an effort to present science objectively, making sure that the distinction between science and pseudo-science (if we don't know what science is, we can still tell what it isn't) is clear. Any confusion should be avoided, as George Gale urges when he writes that "We should avoid wherever possible the more speculative aspects of today's cosmology. Although it is possible that it is just these aspects of cosmology that the public most desires, we have here a case in which forbearance is the best policy. This, because, cosmological speculation is just that: speculation. And speculation is not science; indeed, it is not even popular science. We serve our audience and ourselves badly when we mislead them by presenting speculation within a scientific context." (Gale 1996).

However, from a methodological point of view, speculation is fundamental in cosmology. According to Ilya Prigogine, it is necessary to present science as adequately as possible (Prigogine 1996): ideally presenting science as it is done, in its own language. Therefore, speculation cannot be left out of the picture. When Hawking described in "A Brief History of Time" his own particular view on time, a view which still remains very controversial, he showed the layman that science is still developing and depends on speculation. But the layman couldn't tell the difference. So, although the book is certainly a very good popularization (at least because of its distribution), Hawking should have clearly flagged his latest views as highly speculative. In any case, Hawking's "Brief History" shows that 'scientifically correct', 'ideologically acceptable', 'effective' or 'objective' vulgarization of science is indeed a reachable ideal. It is most important, though, that authors are clear about their own views and do not use popularization for their own purposes.